Topic: Medicine
News title: Discovery of anti-appetite molecule released by fibre could help tackle obesity
Citation: "The short-chain fatty acid acetate reduces appetite via a central homeostatic mechanism." Gary Frost et al., Nature Communications 5, Article number: 3611. DOI: 10.1038/ncomms4611. Received 16 July 2013; accepted 11 March 2014; published 29 April 2014.
Paper URL: http://dx.doi.org/10.1038/ncomms4611
News URL: https://medicalxpress.com/news/2014-04-discovery-anti-appetite-molecule-fibre-tackle.html
Abstract Increased intake of dietary carbohydrate that is fermented in the colon by the microbiota has been reported to decrease body weight, although the mechanism remains unclear. Here we use in vivo 11 C-acetate and PET-CT scanning to show that colonic acetate crosses the blood–brain barrier and is taken up by the brain. Intraperitoneal acetate results in appetite suppression and hypothalamic neuronal activation patterning. We also show that acetate administration is associated with activation of acetyl-CoA carboxylase and changes in the expression profiles of regulatory neuropeptides that favour appetite suppression. Furthermore, we demonstrate through 13 C high-resolution magic-angle-spinning that 13 C acetate from fermentation of 13 C-labelled carbohydrate in the colon increases hypothalamic 13 C acetate above baseline levels. Hypothalamic 13 C acetate regionally increases the 13 C labelling of the glutamate–glutamine and GABA neuroglial cycles, with hypothalamic 13 C lactate reaching higher levels than the ‘remaining brain’. These observations suggest that acetate has a direct role in central appetite regulation. Introduction Obesity has reached epidemic proportions worldwide, with incidence rates above 20% in most western countries, now representing a major public health burden 1 . This rise in obesity is fuelled by the mismatch between an inherited genetic predisposition for survival in environments where food supply is limited and the current obesogenic environment where reduced physical activity and excess energy intake prevail 2 . Modern food processing has resulted in the mass production of cheap, energy-dense foods that are generally high in refined sugars and fats but low in fibre 3 . It has been estimated that the Palaeolithic diet delivered >100 g per day of fibre, whereas current western intakes are only between 10 and 20 g per day 4 . 
Much of the fibre consumed in the Palaeolithic diet, unlike current fibre intake, would have been highly fermentable. There is a growing body of evidence that links fermentation of dietary carbohydrate (including fibre) by the colonic microbiota to positive effects on metabolism 5 . Observations by Cani et al. 6 demonstrated that rats exposed to a high-fat diet (HFD) with an increased fermentable carbohydrate (FC) component for 4 weeks had lower dietary energy intake, body weight and adiposity, suggesting that increased dietary FC leads to improved appetite and body weight regulation. FCs are thought to promote the release of the anorectic gut hormones peptide YY (PYY) and glucagon-like peptide-1 (GLP-1) (ref. 7 ). However, human and rodent models of FC-associated weight loss have failed to consistently show increases in these anorectic hormones 8 . We have recently shown in mice that dietary FC supplementation is associated with reduced energy intake, body weight, adiposity and changes in hypothalamic neuronal activation patterns, all of which were independent of changes in circulating concentrations of GLP-1 (ref. 9 ). To better understand the mechanism behind the anorectic action of FC supplementation, we investigated the direct effects of the most abundant end product of colonic FC fermentation, the short-chain fatty acid (SCFA) acetate, on the control of appetite. We show herein that acetate derived from the colonic fermentation of FC acts to directly suppress appetite through central hypothalamic mechanisms involving changes in transcellular neurotransmitter cycles. This, therefore, opens up new research directions into the promotion of acetate production by colonic microbiota and therapeutic strategies for the prevention and treatment of obesity. Results The effect of FCs on body weight We fed C57BL/6 mice with HFD supplemented with either the FC inulin (HF-I) or the poorly fermentable fibre cellulose (HF-C). 
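The group comparisons in this study (body weight gain, food intake, faecal SCFA concentrations) are evaluated with two-sided, unpaired Student's t-tests. As a minimal sketch of that calculation, using entirely made-up weight-gain values rather than the study's data:

```python
# Two-sided, unpaired Student's t-test of the kind used for the
# HF-I vs HF-C comparisons. All numbers below are hypothetical
# illustrative values, not data from the study.
from scipy import stats

hf_c = [9.1, 8.4, 10.2, 9.8, 8.9, 9.5]  # weight gain (g), cellulose controls
hf_i = [6.2, 7.0, 5.8, 6.9, 6.4, 6.1]   # weight gain (g), inulin-fed group

# ttest_ind assumes equal variances by default (classic Student's test)
t_stat, p_value = stats.ttest_ind(hf_c, hf_i)
print(f"t = {t_stat:.2f}, two-sided P = {p_value:.2g}")
```

With the study's actual group sizes (n = 12 per group) the same call applies; `stats.ttest_ind(..., equal_var=False)` would give Welch's variant if the equal-variance assumption were in doubt.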
After 8 weeks, mice fed the HF-I diet gained significantly less weight than those fed HF-C ( Fig. 1a ) and consumed significantly less food ( Fig. 1b ). SCFA analysis of the colonic content of the former animals demonstrated a significant increase in total SCFA concentrations and especially the most abundant SCFA, acetate ( Fig. 1c ). Although others have reported that FC supplementation leads to an increase in the anorectic hormones GLP-1 and PYY 6 , our group has recently reported that FC-rich diets increase manganese-enhanced MRI (MEMRI) signal in the hypothalamus, a proxy of neuronal activation, via a mechanism that appears to be independent of changes in GLP-1 (ref. 9 ). In support of this, infusion of anorectic gut hormones has previously been shown to suppress hypothalamic MEMRI signal 10 , thereby suggesting that any increase in signal seen following FC supplementation is not PYY or GLP-1 dependent. We therefore hypothesized that there was a direct effect of SCFAs on appetite regulation, namely by acetate, given that it is the most abundant SCFA produced in the colon and that it partially avoids hepatic clearance to reach micromolar concentrations in the peripheral circulation. Figure 1: FCs reduce food intake and increase faecal SCFA concentrations. ( a ) Body weight gain and ( b ) average weekly food intake of mice fed with either a HFD supplemented with the highly fermentable fibre inulin (HF-I) or a relatively non-fermentable fibre cellulose (HF-C). Body weight gain and average food intake were significantly reduced in the HF-I group; ** P <0.01, * P <0.05 based on two-sided, unpaired Student’s t -test ( n =12 per group). ( c ) Total and acetate-only faecal SCFA concentrations obtained from HF-C and HF-I fed mice; ** P <0.01 based on two-sided, unpaired Student’s t -test ( n =6 per group randomly selected from the n =24 cohort). ( d ) The biodistribution of carbon-11 ( 11 C) i.v. acetate as determined using PET scanning. 
Image depicts a fasted mouse following i.v. infusion. Image shows uptake in the brain, liver and heart. ( e – g ) Brain, liver and heart uptake of i.v. and colon infused 11 C-acetate as expressed as a percentage of the initial dose administered. No significant differences, when compared by GEE and Mann–Whitney U -test, were observed between i.v. administrations in the fed or fasted state, but there is a slower increase when the 11 C-acetate was colonically infused ( P <0.001; n =3–4 per group). Full size image The biodistribution of acetate and uptake in the brain SCFA have traditionally been thought to be almost entirely metabolized by colonocytes and the liver 11 . However, acetate is present in the peripheral circulation, in human cerebrospinal fluid 12 as well as in the brain where it provides an important energy source for glial cells 13 . To our knowledge, there has been no previous publication investigating acetate as a potential appetite-regulating agent. We therefore sought to investigate the biodistribution of acetate administered peripherally or by administering it colonically to mimic the endogenous production from FC. To do this, we used in vivo intravenous (i.v.) and colonic infusions of 11 C-acetate and PET-CT in mice. A representative PET-CT image of a fasted mouse following i.v. 11 C-acetate infusion is shown in Fig. 1d . The liver and heart demonstrated the greatest proportion of initial 11 C-acetate dose uptake ( Fig. 1f,g ), but our data also show that up to 3% of the initial acetate tracer was taken up by the brain following i.v. infusions in both the fed and fasted states ( Fig. 1e ). Brain uptake following colonic infusion was more gradual, but it reached an equivalent level after ~20 min. Acetate reduces acute food intake We went on to test whether acetate was itself an anorectic signal using acute peripheral administration in C57BL/6 mice. Intraperitoneal (i.p.) 
injection of acetate (500 mg kg −1 ) produced an increase in serum acetate ( Fig. 2a ), similar to concentrations that have been associated with suppression of free fatty acids 14 , which was associated with a significant reduction in acute food intake at both 1 and 2 h post injection ( Fig. 2b ). An acetate tolerance test demonstrated that there was no impact of i.p. acetate at 500 mg kg −1 body weight on blood glucose concentrations ( Fig. 2e ). Acute behavioural observations post injection showed that this was also not associated with aversive behaviour ( Fig. 2d ). Furthermore, at 60 min post injection, there were no significant differences in circulating concentrations of plasma PYY and GLP-1 ( Fig. 2g,h ). We encapsulated acetate into liposomes 15 . Using this method of peripheral administration of liposome-encapsulated acetate over a 4-h period, we did not detect liposomes in the hypothalamus or changes in food intake ( Fig. 2f ). We therefore hypothesized that the anorectic effects of acetate may result from a direct effect on the central nervous system. We tested this hypothesis by administering 2.5 μmol of sodium acetate directly into the third ventricle of intracerebroventricular (i.c.v.) cannulated male Wistar rats. In accordance with the anorectic effects of peripherally administered acetate, central administration of acetate into the third ventricle suppressed food intake primarily at 1–2 h post injection, resulting in reduced cumulative food intake at both 2 and 4 h ( Fig. 2c ), although this effect was not as potent as the suppression observed after i.p. administration of acetate in mice. The short-lived suppression of feeding was expected given the rapid hepatic metabolism and short half-life of acetate. Figure 2: Acetate administration reduces food intake. ( a ) Serum acetate concentrations of mice administered with 500 mg kg −1 i.p. acetate or saline ( n =12). ( b ) Acute food intake data showing food intake in mice following the acute i.p. 
administration of acetate (500 mg kg −1 ) or saline. Food Intake significantly reduced at 0–1 and 0–2 h; *** P <0.001, ** P <0.01 ( n =21–22 per group). ( c ) The effect of 2.5 μmol of sodium acetate administered into the third ventricle of cannulated rats on food intake compared with a sodium chloride control injection. Cross-over data ( n =15) compared using two-sided, paired Student’s t -test ( P =0.05, P =0.08). ( d ) Behavioural response observed in mice administered either i.p. acetate, saline or lithium chloride (positive control). There was no significant effect on behaviour between saline or acetate as compared using Kruskal–Wallis non-parametric ANOVA ( n =8). ( e ) Change in baseline blood glucose concentrations (fold change) following 500 mg kg −1 sodium acetate or saline injection in ad libitum fed mice (treatment effect P =0.6 as determined by two-way ANOVA; n =9–10 per group). ( f ) The effect of i.p. liposome-encapsulated acetate on acute food intake. No significant difference throughout based on two-sided, unpaired Student’s t -test ( n =7). ( g , h ) Effects of i.p. acetate on plasma concentrations of appetite-regulating gut peptides PYY and GLP-1. Measurements made 60 min post injection. There was no significant effect on either peptide as determined by two-sided, unpaired, Student’s t -test. Full size image Acetate induces an anorectic neuropeptide expression profile Having confirmed that acetate was an anorectic agent and was indeed taken up by the brain following colonic or i.v. administration, we next sought to confirm whether changes in hypothalamic neuronal activation that we had previously observed following dietary supplementation of FC 9 could be replicated following an acute administration of acetate. To achieve this, we again used the in vivo functional imaging technique MEMRI 9 , 10 , 16 . The regions of interest examined were as shown in Fig. 3a . Mice given an i.p. 
injection of an anorectic dose of acetate had a significant increase in signal intensity in the arcuate nucleus (ARC), indicative of increased neuronal activation, whereas no significant differences were observed in the ventral medial hypothalamus (VMH), the paraventricular nucleus (PVN) ( Fig. 3b–d ) or the brain stem ( Fig. 3e ). These data show an identical pattern to those seen when we fed mice a HFD supplemented with FC inulin 9 . We went on to investigate specific changes in hypothalamic neuropeptide expression following i.p. acetate administration. There was a four-fold rise in the expression of the melanocortin precursor pro-opiomelanocortin (POMC) and a potent suppression of agouti-related peptide (AgRP) expression 30 min after acetate administration ( Fig. 3f ). At 60 min, this suppression of AgRP remained. This disparate change in the expression patterns of anorectic and orexigenic neuropeptides favour a reduction in appetite and body weight, and have been previously reported by others feeding rodents FC-rich diets 17 . Figure 3: Acetate effects in the hypothalamus. ( a ) Regions of interest (ROIs) used in manganese-enhanced MRI (MEMRI). MR image showing the ROI locations in the hypothalamus from which signal intensity (SI) measurements were determined. ARC, arcuate nucleus; VMH, ventromedial hypothalamus; ArP, area postrema; NTS, nucleus of solitary tract. White bar represents 1 mm. ( b – e ) Hypothalamic neuronal activation in the ARC, VMH, the PVN and the NTS of mice following i.p. administration of acetate or a saline control as determined by MEMRI. Arrow signifies start of i.v. MnCl 2 infusion. SI is significantly increased in the ARC of acetate treated mice compared with saline-injected controls based on generalized estimated equations (GEE) and Mann–Whitney U -test; ** P <0.01 ( n =4–5 per group). ( f ) Effect of acetate on hypothalamic expression of POMC, NPY and AgRP as determined by hypothalamic qPCR. 
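The hypothalamic qPCR results are reported as fold changes in expression relative to saline-injected controls. The paper does not spell out the calculation; a widely used approach for relative qPCR quantification is the 2^−ΔΔCt method, sketched here with entirely hypothetical Ct values (the method choice and all numbers are assumptions for illustration):

```python
# Relative qPCR expression via the 2^-delta-delta-Ct method.
# Whether the study used exactly this calculation is an assumption;
# all Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Fold change of a target gene vs a reference gene and a control group."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # normalize to control group
    return 2 ** -dd_ct

# Hypothetical Ct values: the target amplifies two cycles earlier after
# treatment, i.e. ~4-fold higher expression.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # prints 4.0
```

A lower Ct means earlier amplification and therefore more starting template, which is why the exponent carries a negative sign.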
Data expressed as fold change in expression compared with saline-injected controls at 30 and 60 min post administration; one-way ANOVA with post hoc Dunnett’s correction; ** P <0.01, *** P <0.001 ( n =5 per group). ( g , h ) Hypothalamic pACC and pAMPK content expressed in relation to β-actin control, based on two-sided, unpaired Student’s t -test; * P <0.05 ( n =5). ( i ) Immunoblots of hypothalamic pAMPK and pACC levels in mice 30 min after the i.p. injection of either saline or acetate (full blot with annotation available in Supplementary Fig. 1 ). ( j ) Proposed mechanism of acetate-induced inhibition of the feeding impulse. Relatively increased hypothalamic TCA cycle activity increases ATP production, decreases AMP levels and AMPK inhibition of acetyl-CoA carboxylase (ACC), increases malonyl-CoA levels and stimulates POMC mRNA expression, and inhibits appetite. Full size image Acetate reduces hypothalamic AMPK catalytic activity Mammalian neuronal tissue is thought to rapidly convert acetate to acetyl-CoA, which then enters the Krebs cycle 13 . Accumulating evidence suggests that hypothalamic AMP-activated protein kinase (AMPK) has an important role in energy and nutrient sensing 18 . Phosphorylation of threonine 172 (T172) within the alpha subunit of AMPK directly activates the kinase, which in turn phosphorylates and inactivates acetyl-CoA carboxylase (ACC), leading to decreased levels of malonyl-CoA. Studies have shown that increased hypothalamic malonyl-CoA concentrations are associated with an increase in the expression of POMC 19 and suppression of Neuropeptide Y (NPY) and AgRP, with a subsequent reduction in rodent food intake. Thus, we hypothesized that this pathway may mediate the changes in appetite we had observed and, more specifically, the changes in POMC and AgRP expression. We measured the phosphorylation of key residues of AMPK (T172) and ACC (S79) in mouse hypothalamic lysates following i.p. 
acetate injection as a surrogate of their activities. We found a significant reduction in levels of pT172 of AMPK and pS79 of ACC ( Fig. 3g–i ), suggesting that acute acetate administration inactivated AMPK, thereby leading to increased ACC activity. Increased ACC activity has been shown to elevate malonyl-CoA, which can stimulate expression of POMC and cocaine- and amphetamine-regulated transcript and decrease NPY and AgRP, leading to a reduction in food intake 19 . This therefore suggests that the anorectic effects of acetate administration are mediated through a change in hypothalamic ACC and AMPK activities and downstream changes in neuropeptide expression. A schematic illustration of this relationship is shown in Fig. 3j . 13 C acetate preferentially accumulates in the hypothalamus Given the aforementioned changes in neuronal activation and neuropeptide expression, we sought to determine the hypothalamic metabolism of acetate using 13 C high-resolution magic-angle-spinning (HR-MAS). Figure 4 summarizes the results on hypothalamic metabolism of i.p. (2- 13 C) acetate or 13 C 2 acetate derived from colonic fermentation of (U- 13 C) inulin administered by gavage. 1 H-decoupled 13 C HR-MAS spectra of cerebral biopsies provide information on the operation of the tricarboxylic acid cycle (TCA) and the transcellular glutamate–glutamine and γ-amino butyric acid (GABA) cycles 20 , 21 . Here we implemented the 13 C HR-MAS approach to investigate the metabolism of labelled [2- 13 C] acetate or 13 C 2 acetate in the hypothalamus of fasted mice as well as in biopsies from the ‘remaining brain’ obtained for comparison. To this end, brain metabolism was arrested with focused microwaves immediately after (0 min) or 15 and 30 min following the i.p. administration of (2- 13 C) acetate, or 180 min after intragastric (U- 13 C) inulin administration, and a biopsy of the hypothalamus was surgically separated from the remaining brain, with 13 C labelling investigated in both samples. 
Figure 4a depicts 13 C HR-MAS spectra from a representative mouse hypothalamus 15 min after [2- 13 C] acetate administration, revealing extensive 13 C incorporation and indicating the most relevant resonances observed. The inset shows a representative 13 C HR-MAS spectrum (28–38 p.p.m.), obtained from the hypothalamus of a mouse, 180 min after (U- 13 C) inulin administration by gavage. The presence of clearly resolved doublets in hypothalamic glutamate C4 ( 1 J 45 =51 Hz), glutamine C4 ( 1 J 45 =49 Hz) and GABA C2 ( 1 J 12 =51 Hz) demonstrates that 13 C 2 acetate units produced by colonic fermentation of (U- 13 C) inulin have been incorporated into hypothalamic metabolism. Figure 4b shows the time course of 13 C incorporation from these substrates in acetate C2, glutamate and glutamine C4, GABA C2 and lactate C3 carbons in the hypothalamus biopsy (light grey), as well as in the remaining brain biopsy (dark grey). The 13 C HR-MAS resonance of acetate C2 increased significantly with time reaching higher intensity in the hypothalamus than in the remaining brain, revealing a heterogeneous distribution of cerebral acetate levels with a preferential accumulation of this substrate in the hypothalamus. In addition, the 13 C resonances of Glu C4, Gln C4 and GABA C2 also increased significantly with time both in intra- and extra-hypothalamic structures, supporting substantial metabolism of accumulated 13 C acetate in the cerebral TCA cycle and neuroglial glutamate–glutamine or GABA transcellular cycles ( Fig. 4c ). To investigate whether the concentrations of acetate derived from fermentation in the colon would be sufficiently high to unambiguously label these metabolites in the hypothalamus (and trigger the anorectic responses described), we used (U- 13 C) inulin. Interestingly, 13 C accumulation in these metabolites from intragastric (U- 13 C) inulin was often higher than from i.p. 
(2- 13 C) acetate, revealing an efficient fermentation of the inulin in the mouse colon at a level that directly influences hypothalamic metabolism. Figure 4: Effects of peripheral (2- 13 C) acetate or intragastric (U- 13 C) inulin administrations on hypothalamic and cerebral metabolism. ( a ) Representative 13 C (125.03 MHz) HR-MAS spectra (4 °C, 5 kHz) of the hypothalamus from a mouse fasted overnight, 15 min after i.p. [2- 13 C] acetate administration (500 mg kg −1 ). Inset: Representative 13 C HR-MAS spectra (28–38 p.p.m.) from the hypothalamus of an overnight-fasted mouse, 180 min after [U- 13 C] inulin administration (100 mg) by gavage. ( b ) Increases in 13 C incorporation into the acetate C2, GABA C2, Glu C4, Gln C4 and Lac C3 carbons (mean+s.d.) in the hypothalamus and remaining brain biopsies, following 0, 15, 30 min i.p. [2- 13 C] acetate ( n =18) or 180 min intragastric [U- 13 C] inulin administrations; * P <0.05, ** P <0.01, *** P <0.001 ( n =4). ( c ) Summary of the effects of [2- 13 C] acetate or [U- 13 C] inulin administrations on hypothalamic metabolism. Extracellular [2- 13 C] or 13 C 2 acetate derived from plasma or cerebrospinal fluid is transported primarily to the astrocytes, where it is metabolized oxidatively in the TCA cycle, labelling astrocytic glutamate C4 and glutamine C4, which exchange with the corresponding neuronal pools through the glutamate–glutamine cycle, labelling GABA only in gabaergic neurons. (2- 13 C) or 13 C 2 acetate is also oxidatively recycled in the neuronal cycle, giving rise to the Lac C3 resonance. The glutamate–glutamine and GABA cycles support glutamatergic and gabaergic neurotransmissions, the two fundamental synaptic events triggering increased Ca 2+ uptake and competitive Mn 2+ uptake (MEMRI) that determine the appetite impulse. Full size image 13 C 2 acetate from U- 13 C inulin labelled Glu C4, Gln C4 and GABA C2 faster in the hypothalamus than in the rest of the brain. 
In fact, no detectable 13 C incorporations in GABA C2 or Gln C4 could be found in the remaining brain biopsies, even when 13 C incorporation into the common Glu C4 precursor was similar in both regions. We believe that this is the result of the preferential incorporation of 13 C 2 acetate and its metabolism in the hypothalamus as compared with the remaining brain, suggesting that Gln C4 and GABA C2 labelling in non-hypothalamic structures would occur at later times. Notably, 13 C GABA and 13 C lactate remained at a similar level until 30 min, when both 13 C GABA and 13 C lactate were higher in the hypothalamus than in the remaining brain biopsies, indicating a relative increase in gabaergic neurotransmission and oxidative 13 C lactate production from the pyruvate recycling pathway in the mouse hypothalamus under appetite stimulation 22 . In order to evaluate the contribution of potential variations in total metabolite content to the 13 C incorporation profiles, we acquired 1 H HR-MAS spectra for the same biopsies we used for the 13 C HR-MAS acquisitions, as the resonances detected by 1 H HR-MAS are known to reveal the total metabolite content. Figure 5a shows a representative 1 H HR-MAS spectrum from a biopsy obtained from the hypothalamus, 180 min after (U- 13 C) inulin administration (black trace), with some of the most relevant metabolite resonances highlighted and the corresponding LCModel fitting superimposed (red trace). We calculated the metabolite ratios for acetate, GABA, glutamate, glutamine and lactate using the total myo -inositol or the total creatine (Cr+PCr) resonances as internal references. Our results revealed that the significantly higher 13 C incorporations in hypothalamic acetate C2, GABA C2, glutamate and glutamine C4 and Lac C3 are primarily derived from the 13 C-labelled substrates administered rather than from relative changes in total metabolite content ( Fig. 5b–f ). 
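The normalization just described (each metabolite resonance expressed relative to an internal reference such as total myo-inositol or total creatine, Cr+PCr) is simple ratio arithmetic; a minimal sketch with hypothetical intensity values, not spectra from the study:

```python
# Metabolite ratios normalized to an internal reference, as in the
# 1H HR-MAS analysis. All intensity values are hypothetical.

metabolites = {"Ac": 1.8, "GABA": 2.4, "Glu": 9.6, "Gln": 4.2, "Lac": 3.0}
ino = 6.0       # total myo-inositol intensity (internal reference 1)
cr_pcr = 12.0   # creatine + phosphocreatine intensity (internal reference 2)

ratios_ino = {m: v / ino for m, v in metabolites.items()}     # e.g. Ac/Ino
ratios_cr = {m: v / cr_pcr for m, v in metabolites.items()}   # e.g. Ac/(Cr+PCr)
print(ratios_ino["Ac"], ratios_cr["Glu"])
```

Because both references are acquired in the same spectrum as the metabolites, the ratios are insensitive to overall signal scaling between biopsies, which is the point of the internal-reference design.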
Notably, similar levels of total glutamine and total GABA were found in the hypothalamus and remaining brain biopsies 180 min after (U- 13 C) inulin administration ( Fig. 5c,d ), confirming that the higher 13 C incorporations detected in hypothalamic Gln C4 and GABA C2 are not derived from relative changes in the total metabolite levels between these two regions. Similar conclusions were obtained when using the total creatine (Cr+PCr) resonance as internal reference for the total metabolite content. This normalization allowed, however, for the comparison between the myo-inositol content in the hypothalamus and the remaining brain. No significant differences in Ino/(Cr+PCr) ratios were found between both regions, 15 or 30 min after (2- 13 C) acetate, or 180 min after (U- 13 C) inulin administrations. These results suggest that previously reported differences in myo-inositol content between the untreated mouse hypothalamus and hippocampus 23 do not contribute appreciably to the present comparisons between the hypothalamus and remaining brain biopsies after (2- 13 C) acetate or (U- 13 C) inulin administrations. Figure 5: Relative changes in total metabolite content in the hypothalamus and remaining brain biopsies after (2- 13 C) acetate or (U- 13 C) inulin administrations. ( a ) 1 H HR-MAS spectrum from a representative biopsy from the hypothalamus (black) and superimposed LCModel fitting (red). ( b – f ) Relative 1 H HR-MAS ratios Ac/Ino, GABA/Ino, Glu/Ino, Gln/Ino and Lac/Ino at increasing times after (2- 13 C) acetate or (U- 13 C) inulin administrations. Lac, lactate; Ac, acetate; Glu, glutamate; Gln, glutamine; GABA, γ-amino butyric acid; Cr+PCr, creatine+phosphocreatine; Ino, myo -inositol; n.r., not resolved; CRLB, Cramér–Rao lower bound <20%. Bar graphs represent the mean and standard deviation. Statistical significance was evaluated using a two-tailed, unpaired Student’s t -test. “Remaining brain” accounts for cerebral extra-hypothalamic tissue. 
* P <0.05, ** P <0.01. Full size image In summary, we conclude that the neuronal activation in the ARC observed in Fig. 3b using MEMRI occurs because of a relative increase in acetate and glutamatergic or gabaergic neurotransmissions in the hypothalamus that would result in divalent ion uptake, in this case Mn 2+ , and therefore an increase in MEMRI signal intensity, as shown in our summary schematic ( Fig. 4c ). Discussion In this paper, we provide a novel insight into the mechanism behind the well-established observation that animals fed a HFD supplemented with FC have decreased energy intake and weight gain 24 , 25 . Most of the work in this area investigating the mechanism underlying these observations has previously focused on the potential effect of FC to promote the release of the anorectic gut hormones PYY and GLP-1 (ref. 6 ). However, increased concentrations of these hormones have been difficult to reproduce in humans 26 . In previous studies from our group, we have observed that decreased food intake and diminished body weight can occur independently of changes in gut hormones in mice 9 , 24 . We also observed an increase in MEMRI signal in the hypothalamic ARC, with no associated increase in signal intensity in the brain stem, the opposite of that observed following administration of the anorectic gut hormones GLP-1 and PYY 27 . Here we have demonstrated that the decrease in body weight observed in FC supplementation studies appears to be mediated, at least in part, by the SCFA acetate. Acute i.p. injection of acetate decreased energy intake over 2 h without affecting behaviour, and did so independently of changes in the peripheral concentrations of GLP-1 and PYY. It is of interest that encapsulation of acetate into liposomes, which specifically allows for peripheral delivery, abolishes the anorectic effect of acetate in mice, whereas i.c.v. 
administration of acetate in rats, although still anorectic, shows a milder and more delayed suppression of food intake. The lack of brain-stem-induced changes in signal intensity following i.p. acetate does not suggest a vagally mediated effect of acetate, and thus the difference in potency may be because of species variation. We also demonstrate that appetite changes are unlikely to be because of changes in the availability of glucose to the hypothalamus, as acetate did not affect circulating glucose concentrations. We have also shown that acetate administered through both i.v. and colonic routes is taken up by the brain and, for the first time, that acetate derived from fermentation of carbohydrate in the colon is taken up by the hypothalamus in greater amounts than other brain tissue. MEMRI signal intensity patterning following acetate infusion also directly reflects that of animals fed with FC 9 . Furthermore, we have shown that peripheral acetate administration leads to increased POMC and reduced AgRP expression in the hypothalamus, suggesting that acetate has a direct effect on the hypothalamic control of appetite. Similar observations have been made when animals are fed with FC 17 . Changes in the phosphorylation of key residues of AMPK and ACC in mouse hypothalamic lysates following peripheral acetate administration also provide evidence of a potential mechanism through which the changes in neuropeptide expression might occur. Given that acetate is taken up by the hypothalamus, where it is preferentially metabolized by astrocytes because of the higher activity of their monocarboxylate transporters compared with neurons 28 , 29 , a non-neuronal mechanism may also be implicated in the observed increase in ARC activation. Using 13 C HR-MAS in whole hypothalamic tissue, we were able to demonstrate that 13 C acetate from both i.p. 
injections and derived from colonic fermentation incorporates into the glutamate–glutamine transcellular cycle, increasing lactate and GABA labelling, thus supporting hypothalamic glutamatergic or gabaergic neurotransmissions. The same model has been proposed to occur in acetate metabolism following alcohol ingestion 30 . The operation of the glutamine cycle involves imperative signalling through ionotropic and metabotropic postsynaptic glutamate receptors, resulting in increased intracellular Ca 2+ accumulation 22 and a concomitant MEMRI effect 31 ( Fig. 4c ). It has recently been demonstrated that leptin modulates glutamate transporter expression in hypothalamic astrocytes with an associated increase in POMC neuron excitability 32 . A similar mechanism could take place following acetate administration, potentially explaining the increase in MEMRI signal observed in the ARC of animals supplemented with FC or given acetate alone. We hypothesize that the increase in hypothalamic 13 C acetate over time, as compared with the rest of the brain, results in increased hypothalamic gabaergic neurotransmission and oxidative lactate production from the pyruvate recycling pathway 22 . Increased oxidative metabolism results in increased ATP production, decreasing the AMP:ATP ratio, and thereby decreasing AMPK activity and consequently reducing the inhibition of ACC. This produces an increase in malonyl-CoA, stimulates POMC neurons and predominantly gabaergic neurotransmission, thus decreasing the appetite impulse 33 . Dietary intakes of fibre in excess of 100 g per day consumed by Palaeolithic man and in some hunter-gatherer tribes 4 suggest that the human colon is adapted to large intakes of fermentable material that would lead to high peripheral circulating acetate 34 . The fall in fibre intakes or, more specifically, the decline in the intake of carbohydrates that can be readily fermented in the colon is one factor that appears to correlate inversely with obesity prevalence. 
Although studies have suggested that the inclusion of highly fermentable carbohydrates in the diet may be beneficial in terms of regulation of both appetite and body weight, in free-living individuals compliance with such eating regimens is often low due to gastrointestinal side effects or the unpalatability of FC-rich foodstuffs. The work highlighted within the paper suggests that the SCFA acetate may at least in part mediate some of the obesity-protective effects of FC-rich diets directly in the central nervous system, thus suggesting that acetate may be useful as a potential anti-obesity therapeutic. This hypothesis does not rule out that there are other non-specific effects. In conclusion, here we provide novel insight into a mechanism through which FC may mediate appetite suppression. By exploring the role of the SCFA acetate, a product of fermentation of carbohydrate in the colon, our evidence suggests that acetate derived from the colon induces an anorectic signal in the hypothalamic ARC by supporting the glutamate–glutamine transcellular cycle and leading to an increase in lactate and GABA production. These data demonstrate a previously unexplored central mechanism through which the fermentation products of FC and dietary fibre may aid in the control of body weight beyond that of energy dilution and gut hormone release. Moreover, it opens up important new possibilities for weight management as the supply of fermentable substrate to the colon (and therefore acetate production) can be modified. Methods Animals and treatments Animal experiments were performed at Imperial College London except for high-resolution magic-angle-spinning (HR-MAS) experiments, which were performed at the Instituto de Investigaciones Biomédicas ‘Alberto Sols’ (IIB). Experiments conducted at Imperial College London were approved by Imperial College London, and all animal procedures were performed in accordance with the UK Animals (Scientific Procedures) Act 1986. 
IIB experiments were approved by the ethical committee of the Instituto de Investigaciones Biomédicas ‘Alberto Sols’ and met the guidelines of the national (R.D. 53/2013) and European Community (2010/62/UE) regulations for the care and management of experimental animals. Unless otherwise stated, all experiments were performed in C57BL/6 male mice (6–8 weeks old, Charles River, Margate, UK) that were single-housed under controlled temperature (21–23 °C) and light conditions (12 h light–dark cycle; lights on at 07:00 hours). The effect of HF-I versus HF-C on body weight and energy intake Mice were randomized and assigned to two different groups ( n =12): HFD with cellulose as a control (HF-C) and HFD with oligofructose-enriched inulin (Synergy) (HF-I). Synergy is a fructan-based preparation containing both long- and short-chain fructo-oligosaccharides. Synergy was mixed with the HFD in the ratio of 1:9. The two diets were isocaloric, each containing the same total energy of 4.6 kcal g −1 , with 41.8% energy from fat. The diets were made isocaloric by the addition of cellulose. The nutritional composition of the diets is shown in Table 1 . The diets were fed ad libitum for 8 weeks to the respective groups of animals. Body weights and food intake were measured three times per week. Table 1 Nutritional content of the HFD-C and HFD-I diets. Full size table Determination of acetate concentrations in serum and faeces SCFAs were determined by gas chromatography in the colonic contents and serum of freshly culled mice. The method used was adapted from Richardson et al. 35 Briefly, caecal contents were weighed (20–220 μg) and combined with 550 μl of PBS. Samples were vortex mixed for 1 min and centrifuged at 3,000 g for 10 min, and the supernatant was collected. To 500 μl supernatant, 25 μl of the internal standard, 2-ethylbutyric acid, was added to give a final concentration of 5 mmol l −1 .
Acids were extracted by the addition of 250 μl concentrated hydrochloric acid and 1 ml diethyl ether, followed by vortex mixing for 1 min. Samples were centrifuged for 10 min at 3,000 g and the ether layer was removed and transferred to a separate capped vial. N-methyl-N-t-butyldimethylsilyltrifluoroacetamide (MTBSTFA; Sigma) was added (100 μl) before heating at 80 °C for 20 min. Gas chromatography was performed on a Hewlett Packard 5890 Series II instrument equipped with a flame ionization detector, split/splitless injector and a 10-m, 0.18 mm ID, 0.20 μm df Rtx-1 (Crossbond 100% dimethyl polysiloxane, Thames Restek UK Ltd) capillary column. Injector and detector temperatures were 275 °C, with the column temperature programmed from 63 °C (held for 3 min) to 190 °C at 10 °C per min. Helium served as the carrier gas (head pressure 135 kPa) and injections (1 μl) were made in split mode (50:1 split). Peak areas were recorded and all subsequent data manipulation was completed using ChemStation software (Agilent Technologies, USA). External standards for acetate, propionate, n-butyrate, iso-butyrate, n-valerate and caproate were prepared at concentrations of 25, 12.5, 6.25, 1.25 and 0.625 mM, and ethylbutyric acid was used as the internal standard at a concentration of 100 mM. Reported values were normalized to the weight of the original sample used. Serum SCFA measurement A 100–500-μl aliquot of serum was filtered through a 30-kDa micropartition system (Vivaspin RC VS02H22 filters, Sartorius Inc., Mississauga, ON, Canada) by centrifugation at 14,000 g at 4 °C for 90 min. An internal standard solution (25 μl) consisting of 100 mM ethylbutyrate and 100 mM formic acid was added to 225 μl of the protein-free filtrate in a 2-ml Hichrom vial (Agilent Technologies, South Queensferry, UK).
To measure SCFA, 1 μl of each sample was injected into a 5890 Series II GC system (HP, Crawley, UK) fitted with a Nukol capillary column (30 m × 0.53 mm × 1.0 μm, SUPELCO Analytical, UK) and a flame ionization detector. The carrier gas, helium, was delivered at a flow rate of 14 ml min −1 . The head pressure was set at 10 p.s.i. with split injection. Run conditions were: initial temperature 60 °C, 1 min; +20 °C min −1 to 145 °C; +4 °C min −1 to 200 °C, hold 25 min. Peaks were integrated using Agilent ChemStation software (Agilent Technologies, Oxford, UK) and SCFA content was quantified by the single-point internal standard method. Peak identity and internal response factors were determined using a 1-mM calibration cocktail including acetate, propionate and butyrate. PET-CT analysis of acetate biodistribution Mice were fasted overnight. Animals received 11 C-acetate (~20 MBq) at the beginning of the PET scan either i.v. through the tail vein ( n =3) or colonically using tubing placed 1 cm into the rectum ( n =4). At the time of the scan, animals were anaesthetized with a 2–2.5% isoflurane–oxygen mix. After the animals were placed in the micro PET/CT scanner (Siemens), a CT scan without contrast was performed, followed by the PET scan and lastly by a CT scan during which the contrast agent Ultravist 370 (Bayer AG, Germany) was infused. During the scan, animals were maintained on a 1–1.5% isoflurane–oxygen mix at a body temperature of 37 °C. The CT had an X-ray source of 80 kVp and 500 μA with an exposure time of 200 ms and an isotropic resolution of 103 μm. The scan consisted of three bed positions with a 20% overlap. The CT scan was used for anatomical data and attenuation correction purposes. The PET system consisted of 64 lutetium oxyorthosilicate-based detector blocks arranged in four contiguous rings, with a crystal ring diameter of 16.1 cm and an axial extent of 12.7 cm.
Each detector block was composed of a 20 × 20 array of lutetium oxyorthosilicate crystals coupled to a position-sensitive photomultiplier tube via a light guide. Each crystal was 10 mm long and had a cross-sectional area of 1.51 × 1.51 mm. The crystal pitch was 1.59 mm in both the axial and transverse directions. Inveon Research Workplace version 3 (Siemens) was used for data analysis. Images with a matrix size of 128 × 128, pixel size of 0.776 mm 2 and slice thickness of 0.796 mm were reconstructed from the two-dimensional sinogram by two-dimensional filtered backprojection using a ramp filter with a Nyquist frequency of 0.5. Dynamic framing was used in reconstruction with 20 × 3 s frames, 8 × 30 s frames, 5 × 60 s frames and 10 × 300 s frames. Attenuation correction was also applied to the image. Reconstructed PET images were then registered onto the CT images. PET signals in the desired region of interest were obtained over the scan period (1 h). Percentage injected dose (%ID per g) was calculated using Equations (1) and (2): I t = I 0 e −λt (1) and %ID per g = ( C PET /ID) × 100 (2), where λ =0.693/ T 1/2 , T 1/2 is the half-life of the radionuclide, I t is the activity at a given time t , I 0 is the starting activity, C PET is the contrast of the image and ID is the activity of the injected acetate. Effect of i.p. acetate on energy intake Mice were fasted overnight before receiving either 500 mg kg −1 sodium acetate dissolved in 0.9% saline ( n =22) or vehicle control (0.9% saline; n =21), both at pH 7.0. This dosage of acetate was similar to that used to suppress lipolysis in previous studies and was shown to be well tolerated 14 . Food intake was measured 1, 2 and 4 h post injection. Effect of i.c.v. acetate on energy intake Male Wistar rats 190–240 g (specific pathogen free; Charles River) were placed in a stereotaxic frame (David Kopf Instruments) under 0.5–2.5% isoflurane anaesthesia.
A hole was drilled using a stereotactically mounted electric drill at coordinates calculated from the Rat Brain Atlas as described previously 36 , according to the coordinates of Paxinos and Watson 37 (0.8 mm posterior to the bregma in the midline and 6.5 mm below the surface of the skull). A permanent 22-gauge stainless steel cannula (Plastics One Inc., Roanoke, Virginia, USA) projecting to the third cerebral ventricle was implanted. Dental cement was used to hold the cannula in position, anchored by three stainless steel screws inserted into the skull. The skin was approximated using nylon sutures. After a 7-day recovery period, animals were handled daily and acclimatized to the injection procedure. To ensure that cannulae were correctly positioned, rats received 150 ng angiotensin II i.c.v. in a 5-μl volume via a 28-gauge stainless steel injector placed into and projecting 1 mm below the tip of the cannula. Rats were observed for a prompt drinking response. A total volume of 5 μl was injected over 1 min to conscious, freely moving rats. For feeding studies, rats ( n =15) were fasted overnight and then injected between 09:00 and 10:00 hours with sodium acetate (2.5 μmol) or an equivalent osmotic sodium chloride control (5 μl) and returned to their home cage. Food intake was measured at 1, 2 and 4 h post injection. After a 72-h washout period, rats received either the acetate or the sodium chloride in a cross-over manner and food intake was measured as described above. Effect of liposome-encapsulated acetate on food intake Nanoparticle design was based upon our previous studies in which PEGylated liposomes were formulated for labelling and visualization of cells 38 , or specifically designed for preferential uptake in xenograft tumours 15 . Liposomes were prepared by the thin-film hydration method 39 . Liposomes were prepared with either acetate (1 M, pH 2.3), to form liposome-encapsulated acetate (LITA) nanoparticles, or HEPES for use as a control.
Particle size was determined using a Malvern Zetasizer (Malvern Instruments, UK). Quantification of acetate encapsulated within LITA nanoparticles was determined using 1 H nuclear magnetic resonance (NMR) spectroscopy. LITA formulation (200 μl) was scanned with the addition of albumin, which binds to acetate in free solution, reducing the NMR signal for the compound 40 . A liposome-free control solution containing an equivalent concentration of acetate (4 mM) was also scanned with and without the addition of albumin (2 g). LITA solution was scanned following the addition of lactate (5.2 mg sodium lactate) to act as a quantitative control. Spectra were obtained using a Bruker DRX 11.74T NMR spectrometer and analysed using MestRe-C NMR spectroscopy software (Santiago de Compostela, Spain). To assess short-term biodistribution, a liposome formulation containing an additional 0.1% of a rhodamine–lipid complex (DOPE-Rhodamine: 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-N-(lissamine rhodamine B sulfonyl)) was used, and biodistribution was determined through ex vivo histological analysis. Samples were collected 4 h post i.p. injection ( n =4). No significant accumulation of liposome was observed in the brains of the treated mice, despite liposomes being present in the liver and heart. The effect of acetate on behaviour Mice were fasted overnight before receiving a single i.p. injection of saline, 500 mg kg −1 acetate or 2.5 M LiCl as a positive control for aversive behaviour 41 ( n =8 per group). The behavioural patterns of each mouse were observed for 15 s every 5 min from the time of injection until 1 h post injection. Behaviours were classified using a modified version of a previously published protocol 42 .
Briefly, behaviours were classified into 10 different categories: feeding, drinking, locomotion (including rearing and climbing), bed making, burrowing, grooming, still, sleeping, head down (animal in abnormal body posture: back hunched, eyes shut or half-shut and piloerect—indicative of impaired health status) and tremors. Acetate tolerance test Glucose levels were determined from blood taken from mouse tails using a Glucometer Elite glucometer (Bayer Corp.). Experiments were performed on ad libitum fed mice at 14:00 hours as previously described for insulin tolerance tests 18 , but instead of insulin, animals were injected i.p. with either 500 mg kg −1 sodium acetate or saline control ( n =9–10). Blood glucose values were then determined at the times indicated. Results were expressed as fold change of the initial blood glucose concentration before injection. Plasma PYY and GLP-1 hormone analysis All samples were assayed in duplicate and in a single assay to eliminate inter-assay variation. Plasma PYY and GLP-1 were assessed using established in-house radioimmunoassays as described previously 43 , 44 . Hypothalamic quantitative PCR This was carried out using the methods described by Bewick et al. 45 Quantitative reverse transcriptase PCR (RT-qPCR) was used to study the expression of the different target genes. Total RNA was extracted from whole hypothalami using TRIZOL (Invitrogen) according to the manufacturer’s protocol. All samples were treated with DNase I (Invitrogen) before reverse transcription. First-strand cDNAs were prepared using 1 μg RNA and SuperScript II Reverse Transcriptase (Invitrogen) in the presence of random hexamer and oligo(dT) primers (Promega, Charbonnières-les-Bains, France). The qPCRs were performed using the LightCycler FastStart DNA Master SYBR Green I kit (Roche, Meylan, France) in the presence of specific primer pairs selected to amplify small fragments (100–200 bp).
PCR products were checked for specificity by measuring their melting temperature. Samples (in duplicate) were quantified by comparison with a standard curve obtained from dilutions of purified specific cDNAs. Measurements of AGRP and POMC mRNA expression RNA was extracted from dissected hypothalamus using the Absolutely RNA microprep kit from Stratagene (La Jolla, CA, USA). Gene transcription of AgRP and POMC in the ARC of the hypothalamus was determined using real-time RT-PCR, and results were expressed as a ratio to the expression of the constitutive gene cyclophilin. The sequences of the TaqMan probes and primers for cyclophilin (GenBank accession no. M15933) were: forward primer 5′-CCCACCGTGTTCTTCGACAT-3′; reverse primer 5′-TGCAAACAGCTCGAAGCAGA-3′; and probe 5′-CAAGGGCTCGCCATCAGCCG-3′. For AGRP, they were: forward primer 5′-TTGGCAGAGGTGCTAGATCCA-3′; reverse primer 5′-AGGACTCGTGCAGCCTTACAC-3′; and probe 5′-CGAGTCTCGTTCTCCGCGTCGC-3′. The probe and primers for POMC (assay identification no. Rn00595020_ml) were purchased from Applied Biosystems. Manganese-enhanced MRI MEMRI was performed using a 9.4-Tesla Varian INOVA imaging system (Varian Inc., USA) as described previously 46 . A fast spin-echo multi-slice sequence was applied with the following parameters: TR=600 ms, TE=10 ms, matrix size=256 × 192, FOV=25 × 25 mm and averages=1, acquiring 46 × 0.4-mm-thick slices. An array of 66 acquisitions was set up so that the 46 slices were acquired 66 times throughout the infusion. Normalized percentage enhancement in signal intensity was calculated for the ARC, VMH, PVN, periventricular nucleus (PE) and the nucleus of the tractus solitarius 27 , 43 . Hypothalamic measurement of AMP kinase Animals were killed by decapitation, brains were immediately dissected and the hypothalamus was removed and snap-frozen in liquid nitrogen.
Frozen tissues (~100 mg) were homogenized in 0.4 ml of ice-cold 50 mM HEPES, pH 7.4, 50 mM sodium fluoride, 5 mM sodium pyrophosphate, 250 mM sucrose, 1 mM EDTA, 1 mM dithiothreitol, 1 mM benzamidine, 1 mM trypsin inhibitor and 0.1% (w/v) phenylmethylsulfonyl fluoride using an Ultra-Turrax homogenizer (3 × 30-s bursts). Insoluble material was removed by centrifugation and the resulting supernatant was used for immunoprecipitation of AMPK and western blot analysis. Immunoblotting and antibodies Samples were boiled in electrophoresis sample buffer and resolved on polyacrylamide gels. Proteins were transferred onto PVDF membrane (Immobilon-FL, Millipore) and blocked with PBS containing 5% skimmed milk powder for 1 h. Antibodies were diluted in 5 ml high-salt Tween buffer (20 mM Tris, pH 7.4, 500 mM NaCl and 0.5% Tween (v/v)) and incubated with the membrane. Primary antibodies used for immunoblotting were as follows: anti-AMPK-β1/2 (in-house; dilutions 1:3,000–1:10,000 for blotting), anti-phospho-ACC (S79) (Cell Signalling, cat. no. 3661), anti-phospho-AMPK (PT172) (Cell Signalling, cat. no. 2535) and anti-β-actin (Sigma, cat. no. A5316). Primary antibodies were detected using fluorescently linked secondary antibodies (Alexa Fluor, Invitrogen; goat anti-rabbit: A21109 and goat anti-mouse: A21058). These were visualized using an Odyssey infrared imager (Li-Cor Biosciences). Quantification of results was performed using Odyssey software, with results expressed as the ratio of the signal obtained with the phospho-specific antibody relative to that of the appropriate total antibody. Full-length images of cut immunoblots are shown in Supplementary Fig. 1 . The effect of acetate on hypothalamic metabolism [2- 13 C] acetate (500 mg kg −1 ) or [U- 13 C] inulin (100 mg) was administered to 8–10-week-old C57BL/6J male mice (Charles River, Spain) after overnight fasting (16 h), either i.p. ( n =18) or by gavage ( n =4), respectively.
All mice were anaesthetized with 4% isoflurane (3 l per min, 99% oxygen), and cerebral metabolism was arrested within 1.5 s using a high-power (5 kW) microwave fixation system (TMW-6402C, Muromachi Kikai Co. Ltd, Tokyo, Japan) immediately (0 min), 15 and 30 min after acetate, or 180 min after inulin, administration. Fixed brains were dissected, and hypothalamus and remaining-brain biopsies were obtained and kept frozen (−80 °C) until HR-MAS analyses. 13 C (125.03 MHz) and 1 H HR-MAS (500.13 MHz) spectra were acquired at 11.7 T (4 °C, 4,000 Hz rotation) with a Bruker AVANCE wide-bore spectrometer equipped with a MAS accessory (Bruker Biospin, Rheinstetten, Germany). 1 H-decoupled 13 C HR-MAS spectra were acquired using a pulse-acquire sequence, with WALTZ-16 1 H decoupling applied during the acquisition and relaxation delay periods. Conditions were π/4 pulses, 8,192 or 16,384 scans (for [2- 13 C] acetate or [U- 13 C] inulin measurements, respectively), 64 k data points and a 5-s recycle delay. The 13 C spectra were processed with Mestrelab software. Chemical shifts of the 13 C resonances were referenced to the acetate C2 resonance (24.5 p.p.m.), and the 13 C incorporation was normalized to the myo-inositol C1+C3 resonance (73.2 p.p.m.), which provides a useful internal reference from which all 13 C resonances can be normalized independently of the amount of tissue 20 , 47 , introducing appropriate corrections for nuclear Overhauser enhancement and saturation. 1 H HR-MAS spectra from the same biopsies used for 13 C HR-MAS acquisitions were acquired using the Carr–Purcell–Meiboom–Gill sequence. Acquisition parameters were 5 s water pre-saturation, echo time=144 ms, t =1 ms, n =100, 10 kHz spectral width, 32 k data points and 256 scans. Metabolite concentrations were evaluated using a modified version of the LCModel processing software (Linear Combination of Model Spectra) 48 .
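The normalization step described above amounts to dividing each (NOE/saturation-corrected) 13 C peak integral by the myo-inositol C1+C3 internal reference. A minimal sketch of that arithmetic follows; the paper's own processing used Mestrelab and LCModel software, so the function, peak names, integrals and correction factors below are purely illustrative assumptions.

```python
# Sketch of 13C HR-MAS peak normalization to an internal reference.
# All numeric values here are hypothetical, for illustration only.

def normalize_13c(integrals, corrections, reference_key="myo-inositol C1+C3"):
    """Divide each corrected 13C peak integral by the corrected internal
    reference (myo-inositol C1+C3), yielding tissue-amount-independent values."""
    ref = integrals[reference_key] * corrections.get(reference_key, 1.0)
    return {
        peak: (area * corrections.get(peak, 1.0)) / ref
        for peak, area in integrals.items()
        if peak != reference_key
    }

# Illustrative peak integrals (arbitrary units) from one hypothetical spectrum,
# with per-resonance NOE/saturation correction factors.
integrals = {"myo-inositol C1+C3": 200.0, "glutamate C4": 150.0, "GABA C2": 60.0}
corrections = {"glutamate C4": 0.9, "GABA C2": 1.1}

norm = normalize_13c(integrals, corrections)
print({peak: round(value, 3) for peak, value in norm.items()})
# → {'glutamate C4': 0.675, 'GABA C2': 0.33}
```

Because every resonance is divided by the same internal reference, the resulting ratios can be compared across biopsies regardless of the amount of tissue in the rotor, which is the rationale the authors give for this choice.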
Statistical tests were performed using a two-tailed unpaired Student’s t -test between values at the different time points or in the different areas. Observations that fell below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR (with Q1 and Q3 representing the lower and upper quartile values and IQR, the interquartile range, the difference between Q3 and Q1) were considered outliers and were not taken into account for statistical evaluations. Data analysis Analyses were performed using GraphPad Prism (GraphPad Software, San Diego, CA, USA) or Stata (StataCorp LP, College Station, TX, USA). All data were tested for normality using the Shapiro–Wilk test. Non-normally distributed data were log transformed to normalize the distribution. Comparison between groups was carried out by two-sided unpaired Student’s t -test for two groups, and by one-way analysis of variance (ANOVA) with post hoc Tukey or Bonferroni correction if there were more than two groups. I.c.v. cross-over acetate injection studies were compared using a two-sided paired Student’s t -test. Given the cumulative nature of both MEMRI and PET signal intensity data, differences in signal intensity profile between the regions of interest in all experimental groups were analysed using generalized estimating equations (GEE) and the Mann–Whitney U -test with commercial statistical software (Stata, version 9.1; StataCorp), which compare profiles over the entire scan. All results and graphs are expressed as means±s.e.m. Results were considered statistically significant when P <0.05, with the significance level indicated as * P <0.05, ** P <0.01 and *** P <0.001. Additional information How to cite this article: Frost, G. et al. The short-chain fatty acid acetate reduces appetite via a central homeostatic mechanism. Nat. Commun. 5:3611 doi: 10.1038/ncomms4611 (2014).
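The outlier criterion used in the statistical methods above is the standard 1.5 × IQR fence. A minimal sketch of that rule, using Python's standard library (whose quartile convention may differ slightly from the statistics package used in the study):

```python
# Sketch of the 1.5 * IQR outlier fence described in the statistical methods:
# observations outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are excluded.
from statistics import quantiles

def drop_outliers(values):
    """Return the values that fall inside the Tukey 1.5 * IQR fences."""
    q1, _, q3 = quantiles(values, n=4)  # quartiles (exclusive method by default)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

data = [4.8, 5.1, 5.3, 5.0, 4.9, 5.2, 9.7]  # 9.7 is an obvious outlier
print(drop_outliers(data))
# → [4.8, 5.1, 5.3, 5.0, 4.9, 5.2]
```

The fence widths are a convention, not a significance test: they flag points far outside the bulk of the distribution before the t-tests and ANOVA described above are run on the remaining observations.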
(Medical Xpress)—New research has helped unpick a long-standing mystery about how dietary fibre suppresses appetite. In a study led by Imperial College London and the Medical Research Council (MRC), an international team of researchers identified an anti-appetite molecule called acetate that is naturally released when we digest fibre in the gut. Once released, the acetate is transported to the brain, where it produces a signal to tell us to stop eating. The research, published in Nature Communications, confirms the natural benefits of increasing the amount of fibre in our diets to control over-eating and could also help develop methods to reduce appetite. The study found that acetate reduces appetite when directly applied into the bloodstream, the colon or the brain. Dietary fibre is found in most plants and vegetables but tends to be at low levels in processed food. When fibre is digested by bacteria in our colon, it ferments and releases large amounts of acetate as a waste product. The study tracked the pathway of acetate from the colon to the brain and identified some of the mechanisms that enable it to influence appetite. Lead author of the study Professor Gary Frost, from the Department of Medicine at Imperial College London, said: "The average diet in Europe today contains about 15g of fibre per day. In stone-age times we ate about 100g per day, but now we favour low-fibre ready-made meals over vegetables, pulses and other sources of fibre. Unfortunately our digestive system has not yet evolved to deal with this modern diet and this mismatch contributes to the current obesity epidemic. Our research has shown that the release of acetate is central to how fibre suppresses our appetite and this could help scientists to tackle overeating." The study analysed the effects of a form of dietary fibre called inulin, which comes from chicory and sugar beets and is also added to cereal bars.
Using a mouse model, researchers demonstrated that mice fed on a high-fat diet with added inulin ate less and gained less weight than mice fed on a high-fat diet with no inulin. Further analysis showed that the mice fed on a diet containing inulin had a high level of acetate in their guts. Using positron emission tomography (PET) scans, the researchers tracked the acetate through the body from the colon to the liver and the heart and showed that it eventually ended up in the hypothalamus region of the brain, which controls hunger. In collaboration with the Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, the researchers investigated the effects of acetate in the hypothalamus using a cutting-edge scanning technique called high-resolution magic angle spinning. Professor Sebastian Cerdán from CSIC said: "This complements the PET scans and allows us to follow the metabolism of acetate in the hypothalamus. From this we could clearly see that the acetate accumulates in the hypothalamus after fibre has been digested. The acetate then triggers a series of chemical events in the hypothalamus leading to the firing of pro-opiomelanocortin neurons, which are known to suppress appetite." This is the first demonstration that acetate released from dietary fibre can affect the appetite response in the brain. The research also showed that when acetate was injected into the bloodstream, the colon or the brain, it reduced the amount of food eaten by mice. Co-author on the study Professor Jimmy Bell, from the MRC Clinical Sciences Centre, said: "It's exciting that we have started to really understand what lies behind fibre's natural ability to suppress our appetite and identified acetate as essential to the process. In the context of the growing rates of obesity in western countries, the findings of the research could inform potential methods to prevent weight gain."
Professor Gary Frost added: "The major challenge is to develop an approach that will deliver the amount of acetate needed to suppress appetite, but in a form that is acceptable and safe for humans. Acetate is only active for a short amount of time in the body, so if we focussed on a purely acetate-based product we would need to find a way to drip-feed it and mimic its slow release in the gut. "Another option is to focus on the fibre and manipulate it so that it produces more acetate than normal and less fibre is needed to have the same effect, providing a more palatable and comfortable option than massively increasing the amount of fibre in our diet. Developing these approaches will be difficult but it's a good challenge to have and we're looking forward to researching possible ways of using acetate to address health issues around weight gain." Professor David Lomas, Chair of the MRC's Population and Systems Medicine Board, added: "It's becoming increasingly clear that the interaction between the gut and the brain plays a key role in controlling how much food we eat. Being able to influence this relationship, for example using acetate to suppress appetite, may in future lead to new, non-surgical treatments for obesity."
10.1038/ncomms4611
Medicine
Solving obesity: Could manipulating microbes offer an alternative to weight loss surgery?
Zehra Esra Ilhan et al, Temporospatial shifts in the human gut microbiome and metabolome after gastric bypass surgery, npj Biofilms and Microbiomes (2020). DOI: 10.1038/s41522-020-0122-5
http://dx.doi.org/10.1038/s41522-020-0122-5
https://medicalxpress.com/news/2020-03-obesity-microbes-alternative-weight-loss.html
Abstract Although the etiology of obesity is not well-understood, genetic, environmental, and microbiome elements are recognized as contributors to this rising pandemic. It is well documented that Roux-en-Y gastric bypass (RYGB) surgery drastically alters the fecal microbiome, but data are sparse on temporal and spatial microbiome and metabolome changes, especially in human populations. We characterized the structure and function (through metabolites) of the microbial communities in the gut lumen and structure of microbial communities on mucosal surfaces in nine morbidly obese individuals before, 6 months, and 12 months after RYGB surgery. Moreover, using a comprehensive multi-omic approach, we compared this longitudinal cohort to a previously studied cross-sectional cohort ( n = 24). In addition to the expected weight reduction and improvement in obesity-related comorbidities after RYGB surgery, we observed that the impact of surgery was much greater on fecal communities in comparison to mucosal ones. The changes in the fecal microbiome were linked to increased concentrations of branched-chain fatty acids and an overall decrease in secondary bile acid concentrations. The microbiome and metabolome data sets for this longitudinal cohort strengthen our understanding of the persistent impact of RYGB on the gut microbiome and its metabolism. Our findings highlight the importance of changes in mucosal and fecal microbiomes after RYGB surgery. The spatial modifications in the microbiome after RYGB surgery corresponded to persistent changes in fecal fermentation and bile acid metabolism, both of which are associated with improved metabolic outcomes. Introduction Roux-en-Y gastric bypass (RYGB) is an effective treatment strategy for morbid obesity and its comorbidities, such as diabetes mellitus 1 . 
Although the precise mechanisms leading to its success remain unclear, RYGB alters hormonal response 2 , energy metabolism 2 , and bile acid circulation 3 towards weight loss outcomes. Additionally, an increasing number of studies have shown that RYGB alters gut microbiota in humans 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 . The composition of the gut microbiota shifts promptly in humans as soon as 1–3 months after surgery 4 , 5 , 10 , and those changes have been reported to persist 12 months post-surgery 5 , 7 , 10 , 12 . Additionally, a number of studies 4 , 5 , 7 , 10 , 11 , 13 , 14 , 15 have evaluated the fecal microbiota after RYGB in longitudinal cohorts. Due to the invasiveness of mucosal microbiome sampling 16 , studies of the human gut microbiome in obesity and after RYGB have relied on fecal samples 4 , 5 , 7 , 8 , which underrepresent the mucosal communities that actively interact with host immune system and epithelial cells 17 . In healthy humans, composition of mucosal and fecal microbiota varies due to differences in local environments 16 , 18 . The composition of the mucosal microbiota can drastically change in humans during dysbiosis, such as in ulcerative colitis 19 , colorectal cancer 20 , and diabetes 21 but, to our knowledge, the mucosal microbiome after RYGB in humans has not been characterized longitudinally. After RYGB, the metabolic products of the gut microbiota exert beneficial effects on host metabolism 22 . For example, butyrate and propionate, which are known to induce satiety in animals 23 and humans 24 , were in greater concentrations in post-RYGB patients compared to nonsurgical controls 6 . RYGB surgery also increased bile acid concentrations in plasma 3 , 8 , 25 and this increase has been associated with weight loss in rats following RYGB 26 . An increase in propionate and bile acids after RYGB was associated with an increase in hormone peptide tyrosine tyrosine (PYY) in humans and, hence, resolution of diabetes 27 . 
Finally, RYGB increased the abundance of amino acid degradation products in feces 6 , 8 . However, these molecules in connection to the microbiome have not been evaluated longitudinally in pre-surgical human populations. In this study, we characterized the temporal and spatial structures of the microbiome and metabolome in humans before and after RYGB surgery, using 16S rRNA amplicon gene sequencing, gas chromatography-mass spectrometry, liquid chromatography-mass spectrometry, and nuclear magnetic resonance spectroscopy. This longitudinal multi-omic approach revealed differences between mucosal and fecal microbial communities and in fecal metabolites in morbidly obese individuals before and after RYGB surgery. Furthermore, we demonstrated comparable findings from this longitudinal cohort to those of a cross-sectional one. Results and discussion RYGB surgery induced significant weight loss We studied the microbiome and metabolome of two cohorts: longitudinal and cross-sectional populations. For the longitudinal arm of the study, we recruited nine morbidly obese pre-RYGB participants (a tenth participant dropped out after baseline measurements) and monitored their weight loss and health outcomes 6 months (RYGB-6_mo) and 12 months (RYGB-12_mo) after RYGB surgery. The study design is presented in Fig. 1a , and participant characteristics are summarized in Table 1 . We also compared this longitudinal population to a previously studied cross-sectional RYGB cohort (RYGB-CS) ( n = 24) 6 . Fig. 1: Study design and weight loss after RYGB surgery. a Study design including number of participants and sample types collected longitudinally and cross-sectionally. b Body mass index (BMI) of participants before the surgery (pre-RYGB), 6 months (RYGB-6_mo), and 12 months (RYGB-12_mo) after the surgery. c % Excess weight loss 12 months after the surgery, 6 months after the surgery, and 6–12 months after the surgery.
The box plots represent minimum, maximum, median, first quartile and third quartile values. The gray shaded box around the median of RYGB-CS represents the median absolute deviation. Statistical significance between the groups was tested with the Wilcoxon signed-rank test and p values were corrected using the Bonferroni method. ** p < 0.01. Full size image Table 1 Participant characteristics of the longitudinal cohort. Full size table Figure 1b, c shows the short- and longer-term effects of RYGB surgery on weight loss. Percent excess weight loss (%EWL) calculations, also shown in Fig. 1c , confirm that the participants achieved the greatest weight loss during the initial 6 months and maintained the weight loss a year after the surgery. Median %EWL after 12 months (65 ± 10) was slightly lower than the median %EWL of the cross-sectional group (RYGB-CS) (73 ± 15) 6 ; however, this difference was not statistically significant ( p = 0.24). Changes in participants’ diet regimens may have contributed to weight loss after the surgery. It is important to note that the dietary intake survey was self-administered; hence, errors in completion could have occurred. Table 2 summarizes total dietary calories and dietary composition. Based on total calories reported, the morbidly obese participants (pre-RYGB) were consuming fewer calories than the normal weight (NW) participants. In the United States, it is often recommended that morbidly obese patients lose 10% of their excess weight prior to surgery in order to minimize surgical complications 1 , even though pre-surgical weight loss has not been associated with a reduction in post-operative complications 28 . Our pre-RYGB participants were enrolled in a pre-surgery diet program, and according to the self-reported surveys, they appear to have restricted their calorie intake to achieve pre-surgical weight loss. Although caloric intake increased by 22% at 12 months compared with 6 months after RYGB, the weight loss benefits were sustained.
The dietary composition of the morbidly obese participants did not significantly change after the surgery (Wilcoxon signed-rank test, p = 0.683), although, compared to NW individuals, carbohydrates formed a smaller fraction of the diets of post-RYGB participants (Table 2 ). Our study results are consistent with prior reports that RYGB results in significant weight loss, especially during the first six months after the surgery 29 , and that the loss remains stable or continues to improve for up to one year after the surgery 30 . Table 2 Dietary composition of the samples of normal weight (NW), pre-surgical morbidly obese baseline (pre-RYGB), 6 months after surgery (RYGB-6_mo), and 12 months after surgery (RYGB-12_mo). Full size table Besides weight loss, RYGB is known to lead to resolution of many metabolic disorders, including Type II diabetes. At their baseline measurements, seven participants had high blood pressure and diabetes, eight of them had hyperlipidemia, and nine of them had degenerative osteoarthritis. After RYGB, a majority of the study participants had resolution of diabetes, hyperlipidemia, and hypertension (Table 1 ), but not arthritis. The metabolic improvements after RYGB are well known, and our observations are in agreement with previous reports 31 , 32 . RYGB altered fecal and mucosal microbiome structures To detect changes in the gut microbiome after RYGB surgery, we analyzed the structure of fecal and mucosa-associated (mucosal) microbiomes of morbidly obese individuals ( n = 9) before and after surgery. Rectal mucosal samples were collected at baseline and 12 months after RYGB via unsedated flexible sigmoidoscopy. Microbial DNA was extracted from fecal and mucosal samples. We performed weighted and unweighted Unifrac 33 analyses on 16S rRNA gene sequences using the QIIME 1.9 suite 34 , and the principal coordinates analyses (PCoA) are shown in Fig. 2 . The effects of RYGB on the microbiome were pronounced for mucosal and fecal communities (Fig.
2a, b ) in the PCoA based on unweighted Unifrac distances. As demonstrated by Fig. 2a , mucosal communities differed significantly on the PCo1 axis when comparing the pre-RYGB group to the RYGB-12_mo group ( p = 0.02). Additionally, the pre-RYGB group was significantly different from the NW group, particularly on the PCo1 axis, indicating that microbiomes of normal weight and morbidly obese individuals differed in structure. Although PCoA based on unweighted Unifrac distances demonstrated the impact of RYGB on mucosal and luminal communities, PCo1 and PCo2 explained only a fraction (up to 13%) of the variability in the data set. Even though we controlled for factors such as participant age and pharmaceutical use, heterogeneity in the human population and other factors that influence gut microbiota composition meant that only a small fraction of the variability in the data set was explained by PCo1 and PCo2. Fig. 2: Unifrac analysis of mucosal and fecal microbiome after RYGB surgery. Microbiome communities ( a mucosal) and ( b fecal) before and after RYGB surgery in comparison to NW controls based on unweighted Unifrac distances. Microbiome communities ( c mucosal) and ( d fecal) before and after RYGB surgery in comparison to NW controls based on weighted Unifrac distances. Box plots represent the median distances among the communities on PCo1 and PCo2. * indicates Mann–Whitney U -test p < 0.05 and ** indicates Mann–Whitney U -test p < 0.01. Full size image Differences in mucosal communities before and after RYGB were less apparent when weighted Unifrac (Fig. 2c ) was used to calculate the dissimilarities among the communities. With weighted Unifrac analysis, the differences for minor taxa were obscured by the high abundances of Firmicutes and Bacteroidetes phylotypes.
Based on unweighted and weighted Unifrac distances, changes in the fecal microbiome appeared as early as 6 months after the surgery, and the difference on PCo2 between pre-RYGB and RYGB-12_mo was significant ( p = 0.04) (Fig. 2b, d ). The ADONIS test was applied to the Unifrac distance matrices to assess overall differences in microbiome structure based on defined groups. The ADONIS R 2 values ranged from 0.086 to 0.133 ( p < 0.05) based on participant groups (NW, Pre-RYGB, RYGB-6_mo, and RYGB-12_mo) (Fig. 2 ). Even though these values are relatively small in terms of explaining the variation in the data set, ADONIS R 2 values were smaller than 0.03 when grouping was based on gender, diet, BMI, stool consistency, or age. The ADONIS R 2 results illustrate that our data set had high heterogeneity and variability; nevertheless, bariatric surgery had a significantly greater impact on the overall microbiome structure than any of the other factors that commonly explain interpersonal variability, including diet and BMI. When the RYGB-CS samples were incorporated into the weighted and unweighted Unifrac analysis, both RYGB-6_mo and RYGB-12_mo samples clustered with the RYGB-CS samples, more strongly for unweighted than for weighted Unifrac (Supplementary Fig. 1 , ADONIS R 2 = 0.2401, p = 0.003). Additionally, when fecal and mucosal samples were analyzed together (Supplementary Fig. 1 ), samples clustered primarily by sample type and secondarily by participant group, especially based on unweighted Unifrac distances. Interestingly, some of the RYGB-12_mo mucosal samples clustered with the fecal samples, indicating that after RYGB the mucosal community structure was more similar to the fecal community structure; however, the small sample size did not allow us to assess the significance of this observation.
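The ADONIS R 2 values cited above come from a permutational multivariate ANOVA on distance matrices. A minimal, self-contained sketch of the underlying PERMANOVA computation is shown below: the sum of squared distances is partitioned into among- and within-group components, and the resulting pseudo-F statistic is assessed by permuting group labels. This is illustrative only; the study used the established ADONIS implementation, and the toy distance matrix is invented.

```python
import numpy as np

def permanova(dist, groups, n_perm=999, seed=0):
    """Minimal one-way PERMANOVA (the test behind ADONIS): partition the
    sum of squared distances into among- and within-group components and
    assess the pseudo-F statistic by permuting group labels."""
    dist = np.asarray(dist, dtype=float)
    groups = np.asarray(groups)
    n = len(groups)
    labels = np.unique(groups)
    a = len(labels)
    sq = dist ** 2
    ss_total = sq[np.triu_indices(n, 1)].sum() / n

    def pseudo_f(g):
        ss_within = 0.0
        for lab in labels:
            idx = np.where(g == lab)[0]
            sub = sq[np.ix_(idx, idx)]
            ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
        ss_among = ss_total - ss_within
        return (ss_among / (a - 1)) / (ss_within / (n - a)), ss_among / ss_total

    f_obs, r2 = pseudo_f(groups)
    rng = np.random.default_rng(seed)
    hits = sum(pseudo_f(rng.permutation(groups))[0] >= f_obs
               for _ in range(n_perm))
    return f_obs, r2, (1 + hits) / (1 + n_perm)

# Toy example: two well-separated groups of five samples each
pts = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 10.0, 10.1, 10.2, 10.3, 10.4]).reshape(-1, 1)
d = np.abs(pts - pts.T)  # pairwise 1-D Euclidean distances
f, r2, p = permanova(d, np.array(["A"] * 5 + ["B"] * 5))
print(f"pseudo-F={f:.1f}, R2={r2:.3f}, p={p:.3f}")
```

The R 2 returned here is the fraction of the total sum of squared distances explained by the grouping, which is how the 0.086–0.133 values above should be read.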
In summary, the results for the fecal microbiome are consistent with previous reports 4 , 7 , 10 , 12 showing that fecal microbiome structure changed after RYGB, with changes sustained at least 1 year after surgery. It is important to characterize microbiome changes in both the mucosal space and the feces because the two habitats differ physiologically. In the lumen, substrates are usually dietary molecules, whereas on mucosal surfaces, they are host-derived glycans 35 . Another difference is the electron acceptor at the mucosal surfaces versus the lumen 36 . Oxygen derived from the eukaryotic tissues is gradually depleted in the mucosal layer by facultative anaerobes, and, therefore, the lumen becomes anaerobic 18 . Microorganisms that live in the lumen are also affected by other host-associated factors such as transit time, frequency and composition of dietary intake, and bile acids 37 . The fecal microbiome after RYGB was similar to the microbiome of a RYGB-CS cohort Figure 3a shows the relative abundances of significantly enriched or depleted genus-level phylotypes in the lumen 6 months and 1 year after RYGB surgery and in comparison to a cross-sectional cohort (RYGB-CS). The surgery significantly altered relative abundances of 24 genus-level phylotypes (Wilcoxon signed-rank test p < 0.05). The majority of enrichments or depletions of genus-level phylotypes occurred within the first 6 months after surgery and were sustained 1 year after surgery (Fig. 3a and Supplementary Fig. 2 ). The abundances of these phylotypes were significantly different in the RYGB-6_mo and RYGB-12_mo groups, compared to the NW group (Supplementary Fig. 2 ) (Mann–Whitney U -test p < 0.05). One of the microbial hallmarks of RYGB surgery is enrichment of phylotypes from Gammaproteobacteria 4 , 7 , and our analysis showed that RYGB also altered the abundance of genus-level phylotypes from other phyla, including Firmicutes , Actinobacteria, Fusobacteria, and Bacteroidetes. Fig.
3: Relative abundance of genus-level phylotypes after RYGB surgery. Heat map showing a fecal and b mucosal microbial phylotypes that were enriched or depleted after RYGB surgery. The samples were chronologically ordered based on time after surgery. Statistical significance between the pre-RYGB and RYGB-12_mo groups was based on the Wilcoxon signed-rank test, and p values were corrected using the Bonferroni method; * p < 0.05. Full size image We observed an increase in the abundance of Proteobacteria phylotypes Rothia , Aggregatibacter , Granulicatella, Citrobacter , Janthinobacterium , and Klebsiella . Firmicutes had many phylotypes whose relative abundances were affected by the surgery. While many phylotypes—such as Streptococcus , Enterococcus , Lactococcus , Veillonella , and Granulicatella —were enriched, Ruminococcus , Blautia , and Roseburia were depleted after the surgery. Akkermansia from Verrucomicrobia and Adlercreutzia and Rothia from Actinobacteria were also at greater abundance after RYGB. Figure 3a also shows the relative abundance of the aforementioned phylotypes compared to a RYGB-CS group. The RYGB-CS group consisted of participants who had previously undergone RYGB, had lost at least 50% of their excess weight, and who had provided a stool sample 13–60 months after surgery; therefore, it was a more heterogeneous group by time after surgery 6 . The results seen in the RYGB-CS group paralleled those seen in the RYGB-6_mo and RYGB-12_mo groups (Supplementary Fig. 2 ). The results of the RYGB-CS group were sorted based on the time after their surgery, and we found no clustering based on time after surgery. These results support the conclusion that changes in the microbiome occurred quickly after the surgery (within 6 months) and persisted in the long term (>60 months). To confirm observations on genus-level phylotypes, we performed unsupervised clustering and generated a hierarchical clustering heat map based on Euclidean distances among the samples (Supplementary Fig. 3 ).
Samples formed five distinct clusters driven by Fusobacterium, Prevotella , Ruminococcus , Parabacteroides , Blautia , and Akkermansia . Three of the clusters were composed of only post-RYGB samples (RYGB-6_mo, RYGB-12_mo, and RYGB-CS), indicating the impact of the RYGB alone on the relative abundance of genus-level phylotypes. The sustained changes observed in the microbiome after RYGB indicate that the surgery-imposed changes to the gut environment/ecosystem were persistent and affected the gut microbiota more strongly than interpersonal variation. RYGB altered mucosal microbial communities, increasing Akkermansia sp. and lactate metabolizers Our analysis of enriched or depleted phylotypes also demonstrates that RYGB surgery led to a wide spectrum of changes in the mucosal space (Fig. 3b ). Six genus-level phylotypes were significantly enriched in the mucosa after RYGB surgery: Granulicatella , Lactococcus , Streptococcus , Blautia , Dorea , and Akkermansia (Supplementary Fig. 4 ) (Wilcoxon signed-rank test p < 0.05). Relative abundances of these phylotypes were also greater in the NW mucosa compared to the pre-RYGB mucosa. Except for Akkermansia , the microorganisms enriched post-surgery are from the Firmicutes phylum and are known to form biofilms and contribute to lactate metabolism 38 . Lactococcus , Streptococcus , and Granulicatella are lactate-producing microorganisms, whereas Dorea and Blautia are lactate oxidizers 39 . Lactate-producing Streptococcus and Lactococcus species have been used as probiotics to enhance gut epithelial barrier integrity 40 , since lactate availability is crucial for butyrate producers and, therefore, colon epithelium health 41 . In addition to lactate producers and oxidizers, we observed an increase in the relative abundance of Akkermansia in the mucosa after RYGB surgery. Previously, a similar trend was observed in mice after RYGB 22 , and our findings confirmed this observation in humans.
Animal models have demonstrated that a weak gut barrier contributes to the development of endotoxemia and inflammation, which subsequently leads to insulin resistance and an increase in adiposity 42 , 43 , 44 . Akkermansia is a known mucin degrader, and its presence has been shown to improve the gut epithelial barrier, reduce organ adiposity, and protect against insulin resistance and obesity in humans 45 . However, a recent study that investigated the link between Akkermansia abundance in the feces of severely obese individuals after RYGB and diabetes did not report any association between Akkermansia abundance and glucose homeostasis after RYGB 46 . Overall, our results indicate that alterations in the gastrointestinal mucosa after RYGB may contribute to an increase in mucin-degrading, lactate-producing, and lactate-oxidizing microorganisms. Post-RYGB microbiota alters the fecal metabolome Changes in the structure of the gut microbiome after RYGB surgery were reflected in the gut metabolome. Figure 4 shows PCA results for the fecal metabolomes detected by gas chromatography-mass spectrometry (GC-MS) and 1 H-NMR-based methods. Fecal water-soluble extracts were analyzed with 1 H-NMR, while lyophilized fecal matter was analyzed with GC-MS. 1 H-NMR detected mainly volatile and water-soluble compounds, whereas GC-MS identified many metabolites of undigested nutrients and components of microbial cells. Fig. 4: Fecal metabolites detected with 1 H-NMR and GC-MS. a Principal component analysis based on GC-MS-detected metabolomes before and after the surgery. b Principal component analysis based on the GC-MS metabolome including RYGB-CS group samples. c 1 H-NMR-based metabolomes before and after the surgery. d Branched-chain fatty acids—isobutyrate and isovalerate—measured with NMR after RYGB surgery prospectively and retrospectively.
e Concentrations of the branched-chain amino acids (BCAA) isoleucine, leucine, and valine measured with NMR after RYGB surgery prospectively and retrospectively. The box plots represent minimum, maximum, median, first quartile, and third quartile values. f Predicted relative abundance of the genes that are involved in branched-chain amino acid (leucine, valine, and isoleucine) degradation. The KEGG ko00280 valine, leucine, and isoleucine degradation pathway was used in the analysis. * indicates statistical significance between the pre-RYGB and RYGB-12_mo groups based on the Wilcoxon signed-rank test, with p values corrected using the Bonferroni method; * p < 0.05. Full size image Based on GC-MS data (Fig. 4a ), RYGB-6_mo and RYGB-12_mo fecal metabolomes clustered away from pre-RYGB metabolomes (ADONIS R 2 = 0.326, p = 0.002). Figure 4b overlays the GC-MS-based metabolomes of the RYGB-CS participants onto the metabolomes of the longitudinal study participants illustrated in Fig. 4a . The metabolomes of most of the RYGB-CS participants were more similar to RYGB-6_mo and RYGB-12_mo participants (Mann–Whitney U -test p < 0.05, ADONIS R 2 = 0.256, p = 0.01) than to Pre-RYGB or NW participants, supporting the observation that RYGB surgery resulted in a unique metabolic fingerprint that was preserved in the long term. Moreover, the similar clustering patterns in the metabolome (Fig. 4a, b ) and the microbiome (Fig. 2a ) strengthen the conclusion that the changes in the microbiome and metabolome after RYGB surgery were linked, persistent, and stronger than interpersonal variations. Similar to the GC-MS metabolome, 1 H-NMR quantification of water-soluble metabolites showed a distinct RYGB metabolome (Fig. 4c ). The concentrations of the major SCFAs of the human gut—acetate, butyrate, and propionate—as measured by 1 H-NMR (Table 3 ) were similar for the RYGB-CS and RYGB-12_mo groups.
Propionate-to-acetate and butyrate-to-acetate ratios increased 6 and 12 months after the surgery, and the difference between baseline and 12-month samples was statistically significant ( p = 0.03). We previously reported a similar trend with our RYGB-CS cohort 6 . Higher butyrate- and propionate-to-acetate ratios after the surgery compared to baseline indicate a shift in microbial metabolism from acetate production to butyrate and propionate production. Butyrate and propionate have been shown to signal through free fatty acid receptors and induce a satiety response in the brain of mice 47 . Shifts in microbial metabolism reflect another potential mechanism explaining how microorganisms contribute to weight loss following RYGB. Table 3 Concentrations of acetate, butyrate, and propionate normalized to dry weight (g stool), along with propionate-to-acetate and butyrate-to-acetate ratios in the NW, pre-RYGB, RYGB-6_mo, RYGB-12_mo, and RYGB-CS groups. Full size table We also evaluated the concentrations of branched-chain amino acids (BCAA) and their fermentation products—branched-chain fatty acids (BCFA)—before and after RYGB. As seen in Fig. 4d , the fecal concentrations of two BCFAs—isobutyrate and isovalerate—increased after surgery, consistent with previous reports 8 , 13 . The RYGB-CS and RYGB-12_mo groups had similar concentrations of these BCFAs. Therefore, we deduce that the increase in the abundance of these BCFAs was associated with RYGB itself. Three BCAAs—leucine, isoleucine, and valine—were at significantly lower abundance in RYGB-12_mo in comparison to the NW and RYGB-CS groups (Fig. 4e , Table S1 ). Interestingly, the concentrations of BCAAs correlated poorly with the amount of protein consumed by the participants (Spearman’s rank correlation coefficient < 0.335). Even though BCAA concentrations were variable, their fermentation products were consistently greater post-RYGB.
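The baseline versus 12-month comparisons of SCFA ratios are paired within participants, which is why the Wilcoxon signed-rank test is used. The sketch below illustrates the test with hypothetical propionate-to-acetate ratios (invented numbers, nine paired participants as in this cohort); it is not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical propionate-to-acetate ratios for nine participants,
# measured at baseline and 12 months after surgery (paired observations;
# the numbers are invented for illustration).
baseline = [0.20, 0.25, 0.18, 0.22, 0.30, 0.21, 0.19, 0.24, 0.26]
month_12 = [0.35, 0.33, 0.28, 0.40, 0.39, 0.32, 0.31, 0.37, 0.40]

stat, p = wilcoxon(baseline, month_12)
print(f"W = {stat}, p = {p:.4f}")  # every pair increased, so p is small
```

Because the test ranks within-participant differences rather than raw values, it is robust to the skewed concentration distributions typical of fecal metabolite data.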
This observation was further supported by the predicted abundances of the genes involved in BCAA (valine, leucine, and isoleucine) degradation and BCFA synthesis pathways, as shown in Fig. 4f . The predicted abundances of BCAA degradation genes were significantly greater after RYGB and in the RYGB-CS group in comparison to the pre-RYGB group. In summary, changes in the microbiome due to RYGB surgery seemed to enhance fecal amino acid metabolism, which may have contributed to weight loss by producing BCFAs capable of signaling through free fatty acid receptors 48 . The role of BCFAs in FFA receptor signaling warrants further investigation. In addition to SCFAs and BCFAs, we analyzed a wide spectrum of other metabolites. Most of the fecal metabolites detected with 1 H-NMR and GC-MS, including sugars and amino acids, were at greater abundance in the pre-RYGB group, and their concentrations dropped 12 months after surgery (Table 4 ). The fecal metabolite concentration profiles of the RYGB-12_mo and RYGB-CS groups were similar, possibly due to the altered gastrointestinal tract environment after the surgery and similarities in participant diets. However, as shown in Table 4 , besides isovalerate and isobutyrate, concentrations of xylose also increased after RYGB and were even higher than for NW controls. The greater abundance of fecal xylose after RYGB suggests that the participants adopted more plant-based diets or that they lost some of the microbial hydrolytic capacity to metabolize xylose. The participants’ (self-reported) fiber intake did not change significantly after the surgery (Table 1 ), although it was statistically lower in post-RYGB participants compared to NW participants ( p = 0.032). Table 4 Concentrations of fecal metabolites normalized to dry weight that were statistically different between pre-RYGB and RYGB-12_mo samples.
Full size table RYGB surgery decreased fecal bile acid concentrations As RYGB is known to alter bile acid metabolism 49 , 50 and contribute to remission of type 2 diabetes and weight loss 51 , we quantified seven primary and 10 secondary bile acids in fecal samples from participants before and after RYGB surgery using liquid chromatography-mass spectrometry (LC-MS). Figure 5a, b and Table S2 show primary and secondary fecal bile acids and their conjugated forms measured at baseline, 6 months, and 12 months after RYGB. Fecal concentrations of the primary bile acid cholic acid (CA) and its taurine- and glycine-conjugated forms (TCA and GCA) were significantly lower 6 months after the surgery (CA p = 0.022, TCA p = 0.001, GCA p = 0.002), and they remained at similar concentrations 12 months after surgery. Similarly, concentrations of the glycine- and taurine-conjugated forms of chenodeoxycholic acid (CDCA), GCDCA and TCDCA, dropped significantly 6 months after the surgery. Concentrations of the secondary bile acids lithocholic acid (LCA), its glycine-conjugated form (GLCA), and taurodeoxycholic acid (TDCA) also dropped significantly 6 months after surgery (LCA p = 0.02, GLCA p = 0.001, and TDCA p = 0.003). Figure 6a illustrates the conjugation and transformation reactions of primary bile acids and the resulting secondary bile acids produced by gut microbiota 52 . Our findings show that primary and secondary bile acids were significantly diminished in feces after RYGB surgery. Fig. 5: Fecal bile acids measured before and after RYGB surgery. a Fecal primary bile acids (CA cholic acid, TCA taurocholic acid, GCA glycocholic acid, GCDCA glycochenodeoxycholic acid, TCDCA taurochenodeoxycholic acid) and b fecal secondary bile acids (TDCA taurodeoxycholic acid, LCA lithocholic acid, GLCA glycolithocholic acid) that were statistically different after RYGB surgery.
* indicates statistical significance between the pre-RYGB and the RYGB-6_mo and RYGB-12_mo groups based on the Wilcoxon signed-rank test, with p values corrected using the Bonferroni method; * p < 0.05. The box plots represent minimum, maximum, median, first quartile, and third quartile values. Full size image Fig. 6: Bile acid–fecal microbiota interactions. a Bile acid transformation reactions. Orange color = primary bile acids, green = primary conjugated bile acids, blue = secondary bile acids, pink = secondary conjugated bile acids. CA cholic acid, DCA deoxycholic acid, GDCA glycodeoxycholic acid, TDCA taurodeoxycholic acid, GCA glycocholic acid, TCA taurocholic acid, CDCA chenodeoxycholic acid, LCA lithocholic acid, HCA hyocholic acid, HDCA hyodeoxycholic acid, UDCA ursodeoxycholic acid, GUDCA glycoursodeoxycholic acid, TUDCA tauroursodeoxycholic acid, THDCA taurohyodeoxycholic acid, GLCA glycolithocholic acid, TLCA taurolithocholic acid, GCDCA glycochenodeoxycholic acid, TCDCA taurochenodeoxycholic acid, αMCA α-muricholic acid, βMCA β-muricholic acid. b Bile acid biosynthesis genes predicted from 16S rRNA gene abundances via PICRUSt. KO numbers that were used in the prediction: K01442, K00076, K23231, K22604, K22605, K22606, K22607, K15868, K15871, K15869, K15870, K15873, K15874, and K07007. The box plots represent minimum, maximum, median, first quartile, and third quartile values. c Fecal bile acid and microbiome co-occurrence network based on Spearman’s rho correlation coefficients. Full size image In order to reveal microbial connections to bile acid metabolism, we used Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) software 53 to predict the secondary bile acid biosynthesis pathway from 16S rRNA gene abundances. The PICRUSt-predicted abundance of the secondary bile acid biosynthesis pathway was greater after RYGB; note, however, that these are genomic predictions, not activity measurements (Fig. 6b ).
Table S2 summarizes the median concentrations of primary and secondary bile acids observed in the NW and RYGB-CS groups in comparison to the RYGB-12_mo and pre-RYGB groups. The concentrations measured in the RYGB-CS group were similar to those in the RYGB-12_mo group, which indicates that the effect of the surgical modification on bile acid metabolism was strong and reproducible, even though baseline time points before the surgery were missing. Additionally, bile acid levels in the post-RYGB groups were similar to those of NW participants (Table S2 ). Overall, our findings indicate that fecal concentrations of primary and secondary bile acids declined after RYGB surgery, and levels similar to those in NW individuals were maintained even years after the surgery. Fat and cholesterol intake are important factors in the production and secretion of bile acids 52 . As seen in Table 1 , the participants did not reduce the fat percentage of their diets, although they consumed fewer calories after RYGB, which led to lower absolute amounts of fat being consumed. Lower delivery of fat to the gastrointestinal tract might have played a role in the lower concentrations of fecal primary and secondary bile acids measured in this study. Considering that gut microbiota can transform bile acids 52 , 54 and concentrations of bile acids can affect gut microbiota composition 52 , we performed co-occurrence-network analysis between fecal genus-level microbial phylotypes and bile acids. As shown in Fig. 6c , phylotypes that were enriched after RYGB, including Fusobacterium , Veillonella , Enterococcus , Akkermansia , and Streptococcus , negatively correlated with various bile acids such as TDCA, LCA, TCDCA, GCA, GDCA, TCA, and TLCA. Christensenella , a strongly heritable phylotype that was also associated with lean body type 55 , was the only genus-level phylotype that negatively correlated with the secondary bile acids THDCA and UDCA. Previously, UDCA treatment has been associated with weight gain in humans 56 .
On the other hand, Ruminococcus , Coprobacillus , Holdemania , Eggerthella , and Dorea positively correlated with primary and secondary bile acids. We performed the same analysis with mucosal genus-level phylotypes and bile acids (see Supplementary Fig. 5 ). We observed associations between minor taxa such as Methanobacterium and bile acids. Interestingly, Clostridium genus phylotypes negatively correlated with a number of bile acids. Additionally, UDCA, GDCA, and GUDCA were the bile acids that showed the greatest number of associations with mucosal phylotypes. Given that bile acids have been reported to modify the gut microbiome 52 , lower delivery of bile acids to the colon might have played a role in some of the observed microbiome compositional changes. Additionally, microbial bile acid metabolism can potentially affect host body weight and metabolism, since it was previously shown that bile diversion to the small intestine can recapitulate some of the metabolic benefits of RYGB independently of the surgery 57 . Previous studies in humans reported increased levels of circulating bile acids, especially secondary bile acids, after RYGB as measured in blood plasma 25 , 51 , 58 . A recent study characterizing bile acids in fecal samples from women after RYGB showed decreased concentrations of many bile acids 59 ; hence, our results support findings from that study. In rats, RYGB has also been shown to increase plasma bile acid concentrations and the secretion of the weight-loss-associated hormones Peptide YY and Glucagon-Like Peptide-1 (ref. 49 ). However, a recent study on rats demonstrated that the bile acid profiles in the intestines did not change after RYGB even though microbial profiles were significantly altered 60 . One difference between the reported human studies and ours is that our measurements were in fecal samples, whereas the others analyzed serum samples; hence, the measurements are not directly comparable.
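The co-occurrence network in Fig. 6c connects taxa and bile acids whose Spearman correlations survive multiple-testing correction. The sketch below, using invented abundance and concentration vectors (the taxon and bile acid names are reused only as labels), shows how such edges can be screened with a Bonferroni threshold:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 30  # hypothetical number of fecal samples

# Invented relative abundances and bile acid concentrations
taxa = {"Akkermansia": rng.random(n), "Dorea": rng.random(n)}
acids = {
    # TDCA is made to track Dorea so that one edge survives correction
    "TDCA": taxa["Dorea"] * 2 + rng.normal(0, 0.05, n),
    "UDCA": rng.random(n),
}

pairs = [(t, b) for t in taxa for b in acids]
alpha = 0.05 / len(pairs)  # Bonferroni threshold across all tested pairs
edges = [(t, b) for t, b in pairs
         if spearmanr(taxa[t], acids[b]).pvalue < alpha]
print(edges)  # the Dorea-TDCA edge is retained
```

Spearman's rho is used rather than Pearson's r because relative abundances are compositional and often non-normally distributed; the rank-based statistic captures monotone rather than strictly linear associations.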
Bile acid quantification is often done in serum samples, which might reflect more physiologically relevant concentrations. However, our findings in fecal samples may lead to a more profound understanding of the microbial metabolism of bile acids in the gut. Further studies on the impact of microbial metabolism and gut levels of bile acids on host health are warranted. We demonstrated the impact of RYGB surgery on the gut microbiome, metabolome, and bile acid metabolism of humans studied prospectively and retrospectively. We documented changes in the human gut microbiome after RYGB in both the luminal and mucosal spaces. The mucosal space is a critically important site for host–microbe interactions. Changes in the fecal metabolome mirrored changes in the fecal and mucosal microbiome structure, suggesting that the profile of microbial metabolism changed as a result of major physiological, environmental, and nutritional alterations affecting the gut after RYGB surgery. The delivery of bile acids to the colon diminished after surgery, potentially contributing to the altered microbiome and metabolome profiles. As a small sample size is a limitation of our study, studies with larger sample sizes are needed to validate our findings. Finally, results from a longitudinal cohort were consistent with observations from cross-sectional studies after RYGB surgery, supporting a dominant and persistent impact of RYGB on the intestinal microbiome. Methods Study design For the longitudinal cohort, we recruited 10 morbidly obese participants who were scheduled to undergo RYGB surgery (pre-RYGB) and 10 normal weight controls. The demographics of the study participants are included in Table 1 . Considering that RYGB cohorts are often composed mostly of female participants 61 , our study presents a more balanced gender distribution (see Table 1 ).
In order to confirm the results of cross-sectional studies with this longitudinal study, we included 24 participants (RYGB-CS) who had undergone RYGB surgery 13–60 months before the sample collection and had lost at least 50% of their excess weight. Therefore, the CS population represents the long-term outcomes of RYGB surgery on the gut microbiome and metabolome. The demographics of this cross-sectional population can be found in a previous publication 6 . Fecal samples collected at the specified time points (Fig. 1a ) were stored at −80 °C within 4 h of production until analyzed. Three participants did not provide fecal samples at 6 months, and one did not provide a sample at 12 months. Distal sigmoid colon (25 cm from the anal verge) biopsies were collected during non-sedated flexible sigmoidoscopy following administration of a cleansing enema from 10 NW participants and 9 prospective RYGB participants before and 12 months after the surgery at Mayo Clinic, Scottsdale, Arizona, USA. The samples were immediately washed, submerged in liquid nitrogen until frozen, and kept at −80 °C until analysis. All participants filled out 4-day food diaries and food-frequency questionnaires (within 2 weeks prior to sample collection) with the assistance of a dietitian, and DietOrganizer software (dietorganizer.com) was used to analyze the dietary composition. DNA extraction, 16S rRNA gene sequencing, and analysis We extracted microbial DNA from feces and biopsy (mucosal) samples using the MOBIO PowerSoil DNA extraction kit (MOBIO Laboratories, Carlsbad, CA, USA). We prepared sequencing libraries using Earth Microbiome Project protocols with V4 primers and sequenced them on an Illumina MiSeq instrument 62 . Reads paired with PANDAseq 63 were analyzed using the QIIME 1.9 suite 34 . The details of the analysis can be found in the Supplementary Document.
Briefly, OTUs were formed at 99% sequence similarity, and OTUs that contained less than 0.005% of the total number of sequences, as well as chimeric sequences, were omitted from the analysis as previously recommended 64 . We calculated the alpha and beta diversity metrics Phylogenetic Diversity Whole Tree 65 and Unifrac 33 . Gene abundances for bile acid biosynthesis were predicted with Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) software 53 . Genus-level phylotypes that significantly differed after RYGB were clustered based on Euclidean distances using ClustVis 66 . 1 H-NMR analysis of water-soluble fecal metabolites For each fecal specimen, approximately 1 g of wet weight was diluted with 20 mL of milliQ water and homogenized by vortexing for 3 min. The homogenates were centrifuged at 16,110 × g for 15 min, and the supernatants were filtered through 0.2-μm PVDF membranes (PALL Corporation). The fecal extracts were diluted with a 10% (v/v) spike of a National Institute of Standards and Technology calibrated reference solution. The resulting mixture was loaded into 3-mm NMR tubes (Bruker Inc), and NMR spectra were collected using a Varian Direct Drive 600 MHz NMR spectrometer equipped with a 5 mm triple-resonance salt-tolerant cold probe. The 1D 1 H NMR spectra of all samples were processed, assigned, and analyzed using Chenomx NMR Suite 8.1, with quantification of metabolites based on spectral intensities relative to the internal standard, as previously described 6 . LC-MS analysis of fecal bile acids Fifty microliters of internal standard mixture (1.0 µg/mL) were spiked into 5 mg of lyophilized fecal samples and processed as described in the Supplementary Document. Homogenized samples were centrifuged at 13,600 × g for 20 min, and the supernatants were filtered using Acrodisc 45 µm syringe filters.
Samples were cleaned up using a 60 mg Oasis HLB 3cc cartridge (Waters Corporation, Milford, MA), dried in vacuo, and stored at −70 °C until analysis. The extracts were analyzed with a Waters nano-Acquity UPLC system (Waters Corporation, Milford, MA). MS analysis was performed using an Agilent model 6490 triple quadrupole mass spectrometer (Agilent Technologies, Santa Clara, CA) outfitted with an in-house nano-electrospray ionization interface. The sample preparation and bile acid quantification procedures were based on the method of Humbert et al. 67 , with modifications described in the Supplementary Document. GC-MS analysis of fecal metabolites Metabolites were extracted from 10 mg of lyophilized stool samples using methanol with sonication. Extracted metabolites were completely dried in vacuo, derivatized by methoxylamination and trimethylsilylation, and analyzed by GC-MS as reported previously 68 . GC-MS raw data files were processed using the Metabolite Detector software, version 2.5 beta 69 . All raw GC-MS data will be made available via the MetaboLights metabolomics data repository ( ). Statistical analyses of microbiome and metabolome data sets We used the Statistical Package for the Social Sciences (SPSS) and R packages 70 for all statistical analyses. The medians of the groups, along with median absolute deviation values, were calculated and reported. The Shapiro–Wilk test was used to test the normality of the data sets. For the longitudinal cohort, 16S rRNA gene relative abundance comparisons were tested with the Wilcoxon signed-rank test. For cross-sectional cohort comparisons, the Mann–Whitney U -test was used. The p values were corrected using the Benjamini–Hochberg method 71 and corrected p values less than 0.05 were accepted as significant. The same tests were used to analyze the NMR, GC-MS, and LC-MS data sets. The LC-MS data were log2 transformed before analysis. 
We performed the ADONIS test 72 on microbiome and metabolome distance matrices to quantify the variation explained by defined variables based on 999 permutations. To reveal associations between bile acid concentrations and the relative abundances of taxonomic groups, we calculated Spearman’s rank correlation coefficients and accepted correlations as significant when Bonferroni-corrected p values were less than 0.05. Ethics approval and consent to participate All study participants provided written informed consent and all procedures were approved by the Institutional Review Boards of Mayo Clinic and Arizona State University (IRB# 10-008725). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The 16S rRNA gene sequences were deposited in the Sequence Read Archive (SRA) database (BioSample IDs = SAMN08684029-SAMN08684111).
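As a minimal sketch of the per-taxon testing and multiple-testing correction described above (not the authors' actual pipeline; the abundance matrices below are random placeholders, and the Benjamini–Hochberg step is implemented by hand rather than taken from a statistics package):

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

def bh_adjust(p):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downwards.
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

rng = np.random.default_rng(0)

# Illustrative relative-abundance matrices: rows = participants, columns = taxa.
pre = rng.dirichlet(np.ones(20), size=15)    # baseline fecal samples
post = rng.dirichlet(np.ones(20), size=15)   # 12-month samples (paired)
cs = rng.dirichlet(np.ones(20), size=24)     # cross-sectional RYGB-CS group

# Longitudinal cohort: Wilcoxon signed-rank test per taxon (paired).
p_paired = np.array([wilcoxon(pre[:, j], post[:, j]).pvalue
                     for j in range(pre.shape[1])])

# Cross-sectional comparison: Mann-Whitney U test per taxon (unpaired).
p_cs = np.array([mannwhitneyu(post[:, j], cs[:, j]).pvalue
                 for j in range(post.shape[1])])

# Corrected p values below 0.05 are taken as significant.
q_paired = bh_adjust(p_paired)
significant = q_paired < 0.05
```

The same pattern applies unchanged to the NMR, GC-MS, and log2-transformed LC-MS feature tables, since only the input matrix differs.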
Already considered a global epidemic, human obesity continues to rise. According to the Centers for Disease Control, more than 40% of the U.S. population is considered obese. The gamut of adverse health effects associated with obesity is broad, including such devastating illnesses as type 2 diabetes, coronary artery disease, stroke, sleep apnea and certain forms of cancer. Patients often suffer depression, loss of mobility, social isolation and inability to work. With costs approaching $316 billion annually in the U.S., understanding how to quell obesity would not only produce a healthier population, but could also help rein in runaway medical costs. Despite the looming need to address obesity, its causes are not well understood. Researchers generally agree that genetics and gut microbiome composition and activity are important factors in determining who is obese and who isn't. As interest in and understanding of the human microbiome increase, researchers are increasingly looking to the gut for answers that can lead to new, more effective diagnostics and therapies. The trillions of microbes in the human gut perform a vast range of critical functions in the body and have even been implicated in mood and behavior. Among microbes' critical responsibilities is the micro-management of nutrients in the food we digest—one of the reasons for their central role in the regulation of body weight. In a new study, "Temporospatial shifts in the human gut microbiome and metabolome after gastric bypass surgery," recently published in npj Biofilms and Microbiomes, ASU researcher Zehra Esra Ilhan, ASU Biodesign professor Rosa Krajmalnik-Brown and researchers from Mayo Clinic and Pacific Northwest National Laboratory have taken another step in understanding how the gut changes after gastric bypass surgery (also known as Roux-en-Y gastric bypass surgery). 
"Our findings highlight the importance of changes in mucosal and fecal microbiomes that are reflected on gut metabolism after surgery," said Ilhan. "The microbial changes after surgery corresponded to persistent changes in fecal fermentation and bile acid metabolism, both of which are associated with improved metabolic outcomes." In addition to the expected weight reduction and improvement in obesity-related comorbidities after gastric bypass surgery, the researchers observed that the impact of surgery is not limited to fecal communities; mucosal communities are altered as well. Changes in the microbiome were linked to increased concentrations of branched-chain fatty acids (amino acid fermentation products) and an overall decrease in primary and secondary bile acid concentrations in fecal samples. Bile is an alkaline fluid that aids digestion. "Previous bariatric surgery-microbiome studies in humans relied largely on fecal samples because sampling through the intestinal mucosal membrane requires an invasive procedure," said Ilhan, lead researcher for the study. At the time of the study, Ilhan was a doctoral student in Krajmalnik-Brown's lab. She is currently conducting research at INRAE-French National Institute for Agriculture, Food and Environment. Bariatric surgery is an operation that causes people to lose weight by making changes to the digestive system. These changes are physiological and chemical and include gastric restriction, malabsorption, bile acid metabolism, and chemical signaling. "The mucus membrane is a critically important site for host-microbe interactions. We understood that with fecal sampling, we had an underrepresented picture of how the mucosal communities actively interact with host immune system and epithelial cells," said Ilhan. (Epithelial cells are cells that line the surfaces of your body.) 
Although gastric bypass surgery has been successful for many patients suffering from morbid obesity, it is a serious, invasive procedure that is not without risk and expense. In addition, some patients regain the weight they have lost, perhaps because they lack the favorable microbes necessary for permanent weight loss. "Understanding the microbial behavior in the gut could potentially lead to creating a probiotic that could replace surgery—or an improved indicator to identify the best candidates for surgery and sustained weight loss," said Krajmalnik-Brown. In the longitudinal study, subjects provided fecal samples and rectal mucosal samples. The rectal mucosal samples were collected via un-sedated flexible sigmoidoscopy at baseline, and again at 12 months post-gastric bypass surgery. Researchers analyzed microbial DNA that was extracted from the fecal and mucosal samples. Fecal metabolites were analyzed using high-throughput metabolomics approaches. A tell-tale indicator of pathology in obese patients has been found in the gut, where a markedly lower diversity of microbial communities is observed. A high diversity of gut microbes is essential to good health. In 2009, Krajmalnik-Brown's research team showed for the first time that gastric bypass surgery produced profound changes in the composition of microbial communities in the gut. The gut flora of post-surgical gastric bypass patients showed a marked difference from that of obese and normal-weight patients. In a 2017 study, the team took another step by comparing how microbes and the metabolome (the complete set of metabolites present within an organism, cell or tissue) change after gastric bypass surgery and lap band surgery. The 2017 study demonstrated that gastric bypass surgery caused a dramatic reorganization of the gut, which increases microbial diversity. 
Changes in the gut microbiota related to lap band surgery (also known as laparoscopic adjustable gastric banding) were mild, and the accompanying weight loss was less pronounced. Earlier studies have demonstrated that fat is reduced and weight loss triggered when germ-free mice receive a fecal transplant from mice that had undergone gastric bypass surgery. Krajmalnik-Brown's team is currently working on a project funded by the National Institutes of Health whose main goal is to quantify the contribution of the microbiome to the host energy balance. This project is intended to move the field from associations to causality and help identify how microbes and metabolites can fight obesity. Krajmalnik-Brown is a professor in the ASU School of Sustainable Engineering and Built Environment and faculty in the School of Life Sciences. At the Biodesign Institute, she practices as part of the Swette Center for Environmental Biotechnology. Krajmalnik-Brown is also known for her research into the role of the microbiome in those with autistic spectrum disorder. Research reported in this publication was supported by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under Award Number R01DK090379.
10.1038/s41522-020-0122-5
Biology
Indigenous tribe that worships tigers helps protect the species
Adrian A. Lopes et al. Worshiping the Tiger: Modeling Non-use Existence Values of Wildlife Spiritual Services, Environmental and Resource Economics (2020). DOI: 10.1007/s10640-020-00416-1
http://dx.doi.org/10.1007/s10640-020-00416-1
https://phys.org/news/2021-01-indigenous-tribe-worships-tigers-species.html
Abstract Several indigenous tribes around the world derive spiritual value from revering fauna and flora species. Species conservation is not a prime objective of such traditions but can be an unintended consequence. Conventional species conservation practices ignore this spiritual value and tribes are often evicted from protected areas. We use the existence value framework to develop a coupled ecological-economic model of the use and non-use, existence values of wildlife for a tribe that derives spiritual value from a wildlife population. We calibrate the model for the Biligiri Rangaswamy Temple (BRT) Tiger Reserve in India with the resident Soligas tribe who consider tigers as sacred and back out an existence value of tigers in this reserve from a tribe manager’s perspective. The model ascertains tiger population dynamics under several management scenarios. Steady-state convergence is observed under secure property rights for the Soligas. Scenarios in which they are evicted from the BRT reserve and lose their property rights yield localized tiger extinction. Finally, we generate a marginal existence value function and discuss the potential for using existence value estimates in guiding conservation policy. 1 Introduction A fourth of animal and plant species are threatened with extinction (IUCN 2011) by direct threats such as overharvesting and indirect threats such as habitat loss and fragmentation (Ando and Langpap 2018 ). A similar share of carnivores is critically endangered, endangered, or vulnerable (Hilton-Taylor et al. 2009 ). Carnivores generate a variety of ecosystem services (ES): some of these are regulation ES whereby carnivores help stabilize ecosystems while others are provisioning (e.g., pelt) and cultural (e.g., recreation) ES. Some ES are derived through consumptive (e.g., trophy hunting) or non-consumptive use values (e.g., wildlife recreation). 
Others are derived through non-use values, such as existence values, and these are the least studied in part because they are the hardest to estimate. If non-use existence values are significant relative to use values, then public resource management decisions based on benefit–cost analyses that omit these values are not socially optimal and might lead to extinction (Alexander 2000 ). In addition, many of the ES that provide non-use values are public goods, and consequently, the conservation of the species providing them will be underprovided by private parties engaging in extractive use of the species, destruction of its habitat, or failing to maintain habitat health that is necessary for the survival of the species. Conservation policies aimed at correcting the market failure in species conservation include habitat conservation programs, policies to encourage habitat ecosystem health maintenance, and management of hunting (Ando and Langpap 2018 ). One of the most common habitat conservation policies consists of establishing protected areas. When the species of conservation interest coexists with indigenous communities, the enforcement of the protected status of the area often involves the expulsion and exclusion of these communities. The practice is controversial on human rights and ethical grounds, and its effectiveness has been mixed (Dowie 2009 ; Lele et al. 2010 ). Here, we use an ecological-economic model to show that this practice will fail to maximize the net social value derived from the resource and can be counterproductive: When communities derive existence value from the species, such as the value gained from worshiping tigers by local tribes in southwestern India, their expulsion and exclusion lead to species extinction. In contrast, securing their property rights leads to species survival. We use a case study of an indigenous tribe that worships tigers and estimate the existence values the tribe derives from worshipping the tiger. 
Conservationists have made claims that “tribal peoples are the best conservationists and guardians of the natural world” (Rust 2016 ). In some cases, such claims are supported by evidence. For example, the Soligas tribe in the Western Ghats of India reveres the wild Bengal tiger ( Panthera tigris ssp. tigris ) according to their customary spiritual beliefs. The NGO Survival International released data showing that the population of tigers in that region doubled between the years 2010 and 2014 and that the population growth rate was higher than the national average in India (Rust 2016 ). However, such claims and descriptive data might include confounding factors leading to this higher-than-average growth rate that might not be related to the co-existence of the species with the tribe. Moreover, there is no formal theoretical or empirical analysis in the conservation economics literature on the effect that existence values—which local populations derive from a species—have on the conservation status of that species. More generally, the literature on non-consumptive values in conservation tends to be limited to tourism use values (Skonhoft 1998 ). Although it is widely believed that non-consumptive public good values of endangered species may be considerable (Bulte and van Kooten 1999b ) and may well be greater than consumptive values (Swanson and Barbier 1992 ), no estimates of such values exist. This is in part because non-market valuation may not be appropriate to estimate the resource values of indigenous peoples who have socio-political structures and decision-making processes that are inconsistent with the referendum format of valuation surveys used to generate welfare estimates (Adamowicz et al. 1998 ). It is not surprising then that very few studies estimate existence values for endangered species as would be appropriate to do in the case of the spiritual values of the Bengal tiger. One notable exception is Zabel et al. 
( 2011 ), a study that accounts for the existence values of tiger conservation in the case of co-existence with livestock. However, the existence values the authors consider represent values society at large might put on the knowledge that the species exists, which typically capture moral satisfaction, “warm glow” and ethical dimensions (Alpizar et al. 2003 ) but not spiritual values derived by the local populations from worshipping the tiger. Moreover, given the lack of functions that can be used to model such values, they focus on deriving a wide range of marginal existence values that guarantee interior solutions, as opposed to backing out existence value estimates as a function of population levels. In this paper, we provide a case study of the Soligas tribe living in the Biligiri Rangaswamy Temple (BRT) Tiger Reserve located in India’s Western Ghats. This tribe attaches significant spiritual value to the Bengal tiger. We estimate the existence values that lead to a higher population of tigers when the tribe’s property rights are secured and show that exclusionary policies can lead to local species extinction. We develop a bioeconomic model that includes both extraction value and non-use existence value of the Bengal tiger. The model we propose includes a specification of an existence value function, which allows us to estimate the tribe’s existence values derived from worshipping the tiger, as opposed to identifying a wide range of society’s values as in Zabel et al. ( 2011 ). We compare conservation policies that consist of either keeping or taking away property rights from the tribe, with and without poaching fines. We find that, if taking away property rights from the tribe leads to myopic harvesting (e.g., Skonhoft and Solstad 1996 ), poaching increases as a result of insecure property rights, even in the presence of fines, which leads to localized tiger extinction. 
When the tribe’s property rights are safeguarded, on the other hand, existence values are realized, and the species population increases. The policy implication regarding property rights is a salient and current one because local courts can and have secured the rights of indigenous communities to remain on or return to their ancestral lands within a tiger reserve through enforcement of the 2006 Indian Forest Rights Act (Bhullar 2008 ). There has, however, been considerable uncertainty about the future implementation of land claims for forest-dwelling indigenous communities in India. In the past, several forest-dwelling tribespeople have seen their land claims rejected in the Supreme Court of India under the Forest Rights Act in circumstances where they lacked proof of possessing the land for the last three generations or more (Sengar 2019 ). Eviction from ancestral lands is an unpredictable threat that renders land title uncertain for many forest-dwelling tribes in India. In the Soligas case, Fanari ( 2019 ) describes how land rights to the BRT were taken away from the Soligas temporarily before they reacquired them in accordance with the Forest Rights Act. The poaching of endangered species—such as tigers—attracts a penalty as per the Indian Wildlife Protection Act (MOEF 2013 ). As is the case with any other tribe, the Soligas are subject to this poaching penalty in accordance with the law. In our model, we numerically estimate the existence values the Soligas tribe derives from worshipping the tiger. The numerical analysis begins with a revealed preference estimation of an existence value weight using a model calibration that represents the current situation of the Soligas, wherein they have land rights in the BRT and are subject to a poaching penalty. We then examine alternative policy scenarios in which property rights are either secured or not, with and without poaching fines, to account for the possibility of a change in the enforcement of land claims in the future. 
We discuss the possible policy applications of using existence value estimates to guide species conservation policy and conservation investments. We note that the implications of our study are not limited to megafauna. The spiritual significance of flora—as in the case of sacred groves located within forests (Bhagwat et al. 2005 ; Reddy and Yosef 2016 ; Vipat and Bharucha 2014 )—would also increase their conservation value to society. The model we use, its results, and policy implications are consistent with the bioeconomic literature on existence value but depart from it because of the institutional point of view in the analysis and the type of non-consumptive values considered. Clark et al. ( 2010 ) consider the case of a social portfolio manager who optimizes the net benefits of harvesting a resource for private gain with the added public benefit of its existence value for society at large. Conrad ( 2010 ) uses a similar model to consider the case of managing a whale population to optimize the net benefits from whale harvest and whale watching that provide non-consumptive direct use values. While the model setup we use here is consistent with those of Clark et al. ( 2010 ) and Conrad ( 2010 ), its formulation from the point of view of a local population, as opposed to a social planner representing society at large, the explicit modeling of existence values, and its application to terrestrial megafauna conservation, are novel. While the message is not new—conservationists have made claims that “tribal peoples are the best conservationists and guardians of the natural world” (Rust 2016 )—the argument, within the context of a bioeconomic model, has not been widely applied to natural resources with spiritual value. The conservation economics literature investigates different policy approaches available to conserve species. According to Skonhoft ( 1998 ), providing locals with benefits from hunting and tourism can reduce incentives for illegal poaching. 
However, Fischer et al. ( 2011 ) find that whether benefit sharing provides conservation incentives depends on the design of the benefit shares, the size of the benefits relative to agricultural losses, and the management of hunting quotas. Bulte and Rondeau ( 2007 ) consider another policy tool, compensation payments, and find that these payments have ambiguous effects on wildlife stocks and local welfare. While they can reduce hunting effort, these payments can also provide incentives to convert wildlife habitat into agricultural land. Conservation performance payments are yet another policy tool, which consists of payments for environmental services—either monetary or in-kind payments—made by an agency to individuals or groups and are conditional on specific conservation outcomes (Albers and Ferraro 2006 ; Engel et al. 2008 ). Focusing on the case of co-existence of tiger and livestock, Zabel et al. ( 2011 ) find that conservation performance payments can generate enough incentives for livestock herders to refrain from hunting so that the carnivore population reaches its socially optimal level. We contribute to the conservation economics literature by estimating the non-use existence values that explain the patterns of non-extinction observed in the case of the Bengal tiger and show that the policy of expulsion and/or exclusion of local tribes in the name of conservation may well lead to extinction. Conversely, conservation policies which recognize that wildlife existence values lead to species survival and secure a tribe’s property rights outperform conservation policies that exclude tribes venerating a species or natural environment. The related literature lacks such existence value estimates of wildlife that provide spiritual services. More broadly, the conservation economics and non-market valuation literature lack marginal non-use benefit functions that can guide wildlife conservation policy (Eiswerth and Kooten 2009 ). 
Instead, the literature mostly includes analyses of traditional policies consisting of monetary or physical incentives and disincentives in the presence of wildlife-livestock or agriculture conflict. The paper is structured as follows: Sect. 2 introduces the theoretical model of non-use existence values and provides the economic rationale for its steady-state equilibrium conditions. In Sect. 3 , we apply the model to study the existence value of tigers for the Soligas tribe residing in and around the BRT Tiger Reserve. We generate an existence value weight of tigers under the current situation and proceed to estimate the existence values at the steady-state tiger population that results under various hypothetical policy scenarios. Furthermore, we illustrate how the calibrated model can be used to generate a marginal existence value function for the tiger. In Sect. 4 , we discuss the policy scenario simulations resulting from the application of the bioeconomic model and conclude with pertinent conservation policy observations. 2 The Existence Value Model Clark et al. ( 2010 ) consider the case of a social portfolio manager who optimizes the net benefits of harvesting a resource for private gain with the additive public benefit of its existence value. We set up the discrete-time version of the Clark et al. optimization framework and formulate it from the standpoint of the tribe or community living around the resource, as shown in Eq. ( 1 ). 
$$\mathop {\text{maximize}}\limits_{{\left\{ {Y_{t} } \right\}_{0}^{\infty } }} W = \mathop \sum \limits_{t = 0}^{\infty } \rho^{t} \left[ {\pi \left( {Y_{t} ,X_{t} } \right) + V\left( {X_{t} } \right)} \right]$$ (1) $$\begin{aligned} {\text{subject to}}\;X_{t + 1} = X_{t} + F\left( {X_{t} } \right) - Y_{t} ,\;X_{0} > 0,\;{\text{and}}\;\mathop {\lim }\limits_{t \to \infty } \rho^{t} \lambda_{t} X_{t} = 0 \hfill \\ \pi_{Y} > 0,\;\pi_{X} > 0,\;\pi_{YY} < 0,\;\pi_{XX} \le 0,\;{\text{and}}\;\mathop {\lim }\limits_{X \to 0} V^{\prime } \left( X \right) = \infty . \hfill \\ \end{aligned}$$ The function \(\pi \left( {Y_{t} ,X_{t} } \right)\) is the private benefit minus harvest cost, where \(Y_{t}\) is the extraction and \(X_{t}\) is the resource stock. This net benefit function indicates the possibility of extraction from the resource for private gain. Harvest cost increases with \(Y_{t}\) and decreases with \(X_{t}\), signifying that as the resource becomes scarce, it becomes more costly to source and harvest it (Clark et al. 2010 ; Conrad 2010 ). The function \(V\left( {X_{t} } \right)\) represents the additive existence value derived from its public good characteristic. As in Clark et al. ( 2010 ), we assume for the existence value function that its marginal value approaches infinity as the stock is depleted, i.e., \(\mathop {\lim }\limits_{X \to 0} V'\left( X \right) = \infty\). The discount factor, \(\rho\) , is equal to \(1/\left( {1 + \delta } \right)\) , where \(\delta\) is the capital interest rate in the market. 
The iterative map \(X_{t + 1} = X_{t} + F\left( {X_{t} } \right) - Y_{t}\) captures the evolution of the resource stock, where \(F\left( {X_{t} } \right)\) represents the stock’s growth during period \(t\) , \(X_{0} > 0\) is the initial level of the resource stock, and \(\mathop {\lim }\limits_{t \to \infty } \rho^{t} \lambda_{t} X_{t} = 0\) is the transversality condition that, as \(t \to \infty\) , the discounted value of the resource stock becomes zero so as to ensure a maximum (Conrad 2010 ). The Lagrange expression for this infinite horizon, discrete-time problem assumes the following form $$L = \sum\limits_{t = 0}^{\infty } {\rho^{t} \left\{ {\pi (Y_{t} ,X_{t} ) + V(X_{t} ) + \rho \lambda_{t + 1} [X_{t} + F(X_{t} ) - Y_{t} - X_{t + 1} ]} \right\}}$$ (2) The first-order necessary conditions are accordingly derived for the control variable \(\left( {Y_{t} } \right)\) , state variable \(\left( {X_{t} } \right)\) , and co-state variable \(\left( {\rho \lambda_{t + 1} } \right)\) . $$\frac{\partial L}{{\partial Y_{t} }} = \rho^{t} \left\{ {\frac{\partial \pi \left( . \right)}{{\partial Y_{t} }} - \rho \lambda_{t + 1} } \right\} = 0$$ (3) $$\frac{\partial L}{{\partial X_{t} }} = \rho^{t} \left\{ {\frac{\partial \pi (.)}{{\partial X_{t} }} + \frac{\partial V( \cdot )}{{\partial X_{t} }} + \rho \lambda_{t + 1} \left[ {1 + \frac{\partial F( \cdot )}{{\partial X_{t} }}} \right] - \lambda_{t} } \right\} = 0$$ (4) $$\frac{\partial L}{{\partial \rho \lambda_{t + 1} }} = \rho^{t} \left\{ {X_{t} + F\left( {X_{t} } \right) - Y_{t} - X_{t + 1} } \right\} = 0$$ (5) The three first-order conditions can be solved for the steady-state equilibrium in which the three variables are unchanging with time. We carry this out by dropping the time \(\left( t \right)\) subscripts from the variables, i.e. 
\(Y_{t + 1} = Y_{t} = Y, X_{t + 1} = X_{t} = X, \lambda_{t + 1} = \lambda_{t} = \lambda\) $$\pi_{Y} - \rho \lambda = 0$$ (6) $$\pi_{X} + V_{X} + \rho \lambda \left[ {1 + F_{X} } \right] - \lambda = 0$$ (7) $$Y - F\left( X \right) = 0$$ (8) In steady-state, Eq. ( 6 ) implies that a harvester would extract until the marginal benefit of harvest equals the discounted shadow price of the resource in the next time period; essentially, it is the user cost of the resource. \(\lambda\) is the value of an additional unit of the resource in period \(t\) . Equation ( 7 ) implies that if this resource is to be optimally managed, then the marginal value of the resource must equal its marginal net benefit \(\left( {\pi_{X} \left( \cdot \right) + V_{X} \left( \cdot \right)} \right)\) plus the discounted marginal benefit of an unharvested unit of the resource that accrues in the following period \(\left( {\rho \lambda \left[ {1 + F_{X} \left( \cdot \right)} \right]} \right)\) . This total marginal benefit must reflect the value of an additional unit of the resource, \(\lambda\) . Equation ( 8 ) implies that in steady-state, the resource is neither growing nor diminishing over time, or that harvest equals growth in each period \(t\) . Using the fact that \(\rho = 1/\left( {1 + \delta } \right)\) and rearranging Eqs. ( 6 ), ( 7 ), and ( 8 ), we derive Eq. ( 9 ). $$\frac{{\pi_{X} \left( \cdot \right) + V_{X} \left( \cdot \right)}}{{\pi_{Y} \left( \cdot \right)}} + F_{X} \left( \cdot \right) = \delta$$ (9) Equation ( 9 ) is the well-known fundamental equation of renewable resources with the addition of the marginal benefit of existence value (Clark et al. 2010 ). The first term on the left-hand side of (9) is the ratio of the marginal value of \(X\) relative to the marginal value of \(Y\) . This is also referred to as the marginal stock effect (Conrad 2010 ). The second term is the marginal net growth rate of \(X\) . 
Their sum must equal the rate of interest in the capital market for optimal resource management. On substituting \(Y = F\left( X \right)\) from ( 8 ) into ( 9 ), we can solve for the pair of steady-state values \(\left( {X^{*} ,Y^{*} } \right)\) . Equation ( 9 ) would yield a curve in the \(\left( {X - Y} \right)\) space in implicit form \(\phi \left( {X, Y;\delta } \right) = 0\) . Three representative curves are shown in Fig. 1 . \(\phi_{1} \left( {X, Y;\delta } \right)\) implies that extinction might be optimal (i.e. \(X_{1}^{*} = 0\) ) when either \(X\) grows too slowly, \(\delta\) is very high, or the market price is much higher than the harvest cost of the last resource unit (Conrad 2010 ). For the steady-state value \(X_{2}^{*}\) , the marginal stock effect is less than \(\delta\) since \(F^{\prime}\left( X \right) > 0\) . For the steady-state value \(X_{3}^{*}\) , the marginal stock effect is greater than \(\delta\) , and thereby it might be optimal to have a higher steady-state value of \(X\) ; this is shown below to be greater than the stock level at the maximum sustainable yield or the highest growth rate of the resource (i.e. \(X_{msy}\) ). Fig. 1 Representing the fundamental equation of renewable resources in \(\left( {X - Y} \right)\) space: existence values in \(\phi_{2} \left( {X, Y;\delta } \right)\) or \(\phi_{3} \left( {X, Y;\delta } \right)\) lead to steady states with species survival We can numerically solve for pairs of steady-state values \(\left( {X^{*} ,Y^{*} } \right)\) by specifying functional forms for \(\pi \left( \cdot \right)\) , \(V\left( \cdot \right)\) , and \(F\left( \cdot \right)\) , and parameterizing the model. 
Once we have our \(\left( {X^{*} ,Y^{*} } \right)\) values, we can then simulate the evolution of \(X\left( t \right)\) and \(Y\left( t \right)\) over time and examine if their approach paths converge to the analytical steady-states when the initial values of resource stock and harvest are different from them, i.e., if \(X_{0} \ne X^{*}\) and \(Y_{0} \ne Y^{*}\). In Sect. 3.1 , we will assume functional forms for \(\pi \left( \cdot \right)\), \(V\left( \cdot \right)\), and \(F\left( \cdot \right)\), and calibrate the model for the tiger population in the BRT tiger reserve in southwestern India with the resident Soligas tribe. 3 Applying the Model to a Tiger Population and a Resident Tribe In this section, we will apply the model to the Soligas tribe and the tiger population in the BRT. Our baseline scenario is taken to represent the current situation in the BRT wherein the Soligas have community rights in accordance with the Forest Rights Act and all poaching is subject to a penalty as per the Indian Wildlife Protection Act. Given the indefiniteness with which property rights were either granted or taken away in India’s past, we shall consider hypothetical policy scenarios in the presence and absence of such rights. Furthermore, we will analyze these scenarios in the presence and absence of a poaching penalty. These hypothetical scenarios will enable us to examine an array of pathways that could arise if the enforcement of land claims and the imposition of poaching penalties were to change in the future. The institutional perspective that we adopt for the baseline scenario is from the point of view of a tribe manager who maximizes the expression \({\mathbb{E}}\left[ {W^{B} } \right]\) as per Eq. ( 10 ).
$$\mathop {\text{maximize}}\limits_{{\left\{ {Y_{t} } \right\}_{0}^{\infty } }} {\mathbb{E}}\left[ {W^{B} } \right] = \mathop \sum \limits_{t = 0}^{\infty } \rho^{t} \left[ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} - \left( {Y_{t} /X_{t} } \right)e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} \cdot B + \beta \ln \left( {X_{t} } \right)} \right]$$ (10) $${\text{subject to}}\;X_{t + 1} = X_{t} + rX_{t} \left( {1 - X_{t} /K} \right) - Y_{t} ,\;X_{0} > 0\;{\text{given}},\;\theta \le 0,\;{\text{and}}\;\mathop {\lim }\limits_{t \to \infty } \rho^{t} \lambda_{t} X_{t} = 0.$$ We assume that \(\pi \left( {X_{t} ,Y_{t} } \right) = \left( {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} - \left( {Y_{t} /X_{t} } \right)e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} \cdot B} \right)\). We define \(p\) as the per unit price of the harvested resource and \(c\) as a harvest cost parameter. Harvesting the tiger resource attracts a poaching penalty, and with this penalty in place, there is a chance that the harvester would be caught poaching by conservation authorities. We assume that a greater amount of illegal activity increases the chance of being caught (Copeland and Taylor 2009 ). We consider an exponential probability density function (Pishro-Nik 2014 ) of the following form: \(\omega \left( {Y_{t} ,X_{t} } \right) = \left( {Y_{t} /X_{t} } \right)e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}}\). In this probability density function we have \(\theta \le 0\) and \(X_{t} \ge Y_{t} \ge 0\), and it indicates that as illegal harvest increases, so does the chance of being caught by conservation authorities. If the harvester is caught, then a penalty of \(\$ B\) is imposed on her, and this enters as an expected payment that is deducted from her private net benefit.
For the existence value, we assume that \(V\left( {X_{t} } \right) = \beta \ln \left( {X_{t} } \right)\) where \(\beta\) is an existence value weight, which can be used to determine the magnitude of the marginal existence value of \(X\). We show later in the numerical analyses that the marginal existence value increases rapidly with a decline in the tiger population. Eiswerth and Kooten ( 2009 ) note that nonlinear non-use benefit functions (such as logarithmic benefit functions) are more realistic, more consistent with empirical findings (e.g., Desvousges et al. 1992 ; Loomis and Larson 1994 ), and result in more accurate estimates of \(X^{*}\) in numerical estimations. These functions satisfy the necessary assumptions for an interior solution: \(\pi_{Y} > 0\), \(\pi_{X} > 0\), \(\pi_{YY} < 0\), \(\pi_{XX} \le 0\), and \(\mathop {\lim }\limits_{X \to 0} V'\left( X \right) = \infty\) (Clark et al. 2010 ). The tiger population grows as per a standard logistic growth function \(F\left( {X_{t} } \right) = rX_{t} \left( {1 - X_{t} /K} \right)\), where \(r\) is the intrinsic growth rate and \(K\) is the environment’s carrying capacity. For the discrete-time infinite horizon optimization problem in ( 10 ), one can set up the Lagrangean \(\left( {L^{B} } \right)\), derive the first-order necessary conditions in steady-state, and substitute \(Y = F\left( X \right) = rX\left( {1 - X/K} \right)\) to get the fundamental equation of renewable resources, \(\left( {\phi \left( {X, F\left( X \right);\delta ,B} \right) \equiv 0} \right)\), in Eq. ( 11 ); this fundamental equation accounts for the existence value within the tribe manager’s decision framework. In contrast, Eq. ( 11.1 ) represents the fundamental equation of renewable resources in the presence of a poaching penalty but without the addition of existence value.
$$L^{B} = \mathop \sum \limits_{t = 0}^{\infty } \rho^{t} \left\{ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} - \left( {Y_{t} /X_{t} } \right)e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} \cdot B + \beta \ln \left( {X_{t} } \right) + \rho \lambda_{t + 1} \left[ {X_{t} + rX_{t} \left( {1 - X_{t} /K} \right) - Y_{t} - X_{t + 1} } \right]} \right\}$$ $$\begin{aligned} \phi \left( \cdot \right) & \equiv \frac{c}{2}r^{2} \left( {1 - \frac{X}{K}} \right)^{2} + \frac{B}{X}r\left( {1 - \frac{X}{K}} \right)e^{{\left( {\theta \left( {1 - r\left( {1 - X/K} \right)} \right)} \right)}} \left[ {1 - \theta r\left( {1 - X/K} \right)} \right] + \frac{\beta }{X} \\ & \quad + \left[ {p - cr\left( {1 - \frac{X}{K}} \right) - \frac{B}{X}e^{{\left( {\theta \left( {1 - r\left( {1 - X/K} \right)} \right)} \right)}} \left[ {1 - \theta r\left( {1 - X/K} \right)} \right]} \right]\left[ {r\left( {1 - \frac{2X}{K}} \right) - \delta } \right] = 0 \\ \end{aligned}$$ (11) $$\begin{aligned} \phi \left( \cdot \right) & \equiv \frac{c}{2}r^{2} \left( {1 - \frac{X}{K}} \right)^{2} + \frac{B}{X}r\left( {1 - \frac{X}{K}} \right)e^{{\left( {\theta \left( {1 - r\left( {1 - X/K} \right)} \right)} \right)}} \left[ {1 - \theta r\left( {1 - X/K} \right)} \right] \\ & \quad + \left[ {p - cr\left( {1 - \frac{X}{K}} \right) - \frac{B}{X}e^{{\left( {\theta \left( {1 - r\left( {1 - X/K} \right)} \right)} \right)}} \left[ {1 - \theta r\left( {1 - X/K} \right)} \right]} \right]\left[ {r\left( {1 - \frac{2X}{K}} \right) - \delta } \right] = 0 \\ \end{aligned}$$ (11.1) In Eq. ( 12 ), we consider the modalities of an alternative policy scenario where the tribe possesses land rights but there is no poaching penalty imposition. We therefore assume that \(\pi \left( {X_{t} ,Y_{t} } \right) = \left( {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} } \right)\).
The optimization problem for the tribe manager is accordingly set up as follows: $$\mathop {\text{maximize}}\limits_{{\left\{ {Y_{t} } \right\}_{0}^{\infty } }} W = \mathop \sum \limits_{t = 0}^{\infty } \rho^{t} \left[ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} + \beta \ln \left( {X_{t} } \right)} \right]$$ (12) $${\text{subject to}}\;X_{t + 1} = X_{t} + rX_{t} \left( {1 - X_{t} /K} \right) - Y_{t} ,\;X_{0} > 0,\;{\text{and}}\;\mathop {\lim }\limits_{t \to \infty } \rho^{t} \lambda_{t} X_{t} = 0.$$ We set up the Lagrangean \(\left( L \right)\) for this infinite time horizon optimization problem, derive the first-order necessary conditions in steady-state, and substitute \(Y = F\left( X \right) = rX\left( {1 - X/K} \right)\) to get the fundamental equation of renewable resources, \(\phi \left( {X, F\left( X \right);\delta } \right) \equiv 0\), in Eq. ( 13 ). The tribe manager explicitly accounts for the existence value in ( 13 ), whereas ( 13.1 ) represents the fundamental equation of renewable resources without the addition of existence value and therefore solves for the outcome of a harvester.
$$L = \mathop \sum \limits_{t = 0}^{\infty } \rho^{t} \left\{ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} + \beta \ln \left( {X_{t} } \right) + \rho \lambda_{t + 1} \left[ {X_{t} + rX_{t} \left( {1 - X_{t} /K} \right) - Y_{t} - X_{t + 1} } \right]} \right\}$$ $$\phi \left( \cdot \right) \equiv \frac{c}{2}r^{2} \left( {1 - \frac{X}{K}} \right)^{2} + \frac{\beta }{X} + \left[ {p - cr\left( {1 - \frac{X}{K}} \right)} \right]\left[ {r\left( {1 - \frac{2X}{K}} \right) - \delta } \right] = 0$$ (13) $$\phi \left( \cdot \right) \equiv \frac{c}{2}r^{2} \left( {1 - \frac{X}{K}} \right)^{2} + \left[ {p - cr\left( {1 - \frac{X}{K}} \right)} \right]\left[ {r\left( {1 - \frac{2X}{K}} \right) - \delta } \right] = 0$$ (13.1) We can now calibrate our model for the BRT tiger reserve and numerically solve the fundamental equations of renewable resources to derive the steady-state values \(\left( {X^{*} ,Y^{*} } \right)\) under different policy scenarios. Ideally, we would have different data for each of the policy scenarios we consider so we could conduct scenario-specific model calibrations. In practice, however, poaching data are typically not available or are underreported in developing countries like India; WPSI ( 2019 ) compiles limited data on tiger poaching, but that too is only a fraction of the actual number of poaching offences. Instead, we design hypothetical policy scenarios (i.e., PS2–PS4 described in Sect. 3.1 ) to represent cases that differ in whether property rights are secured and whether a policy instrument is imposed. Table 1 lists the model’s parameters as applied for the BRT Tiger Reserve and its resident Soligas tribe. Table 1 Empirical parameters for model simulation Full size table Smirnov and Dale ( 1999 ) estimate that tigers grow at an annual rate of r = 0.06. However, this value of r might overestimate the growth rate of tigers in the wild.
According to tiger census reports, the global tiger population increased from 3200 in the year 2010 to 3890 by the year 2016 (WWF 2016 ); these figures yield an annual growth rate of approximately 0.04. We consider an average rate of r = 0.05 to account for overestimation or underestimation. Damodaran ( 2007 ) reports a wild habitat requirement of 6.25 square kilometers per tiger. The BRT Tiger Reserve is 540 km 2 (KFD 2017 ), which implies a carrying capacity of \(K = 540/6.25 \approx 86\) tigers. The BRT Tiger Reserve had approximately 35 tigers around 2010 (Varma 2015 ); accordingly, we assume \(X_{0} = 35\). A considerable amount of planning and time goes into killing a tiger. According to investigations and reported poachers’ confessions, poaching takes between 3 and 4 weeks for a kill from planning to execution (WPSI 2010 ). Poachers sometimes use rudimentary traps to snare tigers and kill them. These traps are low-cost (approximately $3.50 per trap). One could use the daily wage rate in India to estimate the opportunity cost of poaching. The daily labor wage rate is INR272 (US$4.18) per day in India (GOI 2018 ). We use this information to arrive at a value for our harvest cost parameter, \(c = \left( {272 \times 25/65 + 3.5} \right) \times 2 \approx \$ 216\), by assuming that twenty-five days are devoted to a tiger poaching expedition and an exchange rate of INR65 per US$. Footnote 3 The poacher carries the tiger skin and body parts back to the village, where he/she eventually sells it to middlemen. Poachers might receive only as little as INR1000 per tiger (Damania et al. 2003 ). Accounting for inflation (WB 2017 ), and converting to US$, we derive a poaching price value of \(\left( {1000 \times 141/61} \right)/65 \approx \$ 36\) per tiger. Footnote 4 We assume a rate of time preference of \(\delta = 0.08\) for developing countries, as used by Zabel et al. ( 2009 ).
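The calibration arithmetic in this paragraph can be reproduced directly; the grouping below is our reading of the cost expression, with an assumed exchange rate of INR65 per US$ (implied by the INR272 = US$4.18 wage figure):

```python
# Reproducing the Table 1 calibration arithmetic (our reading of the groupings).
area_per_tiger = 6.25                      # km^2 of wild habitat per tiger
K = 540 / area_per_tiger                   # reserve area / area per tiger -> about 86 tigers
wage_inr, days, trap_usd, fx = 272, 25, 3.50, 65   # fx: assumed INR per US$
c = (wage_inr * days / fx + trap_usd) * 2  # harvest cost parameter -> about $216
p = (1000 * 141 / 61) / fx                 # inflation-adjusted poaching price -> about $36
print(round(K, 1), round(c), round(p))
```

Rounding these values gives the \(K = 86\), \(c = \$216\), and \(p = \$36\) entries used in Table 1.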
Footnote 5 Any amount of poaching is considered illegal in India, and the penalty is set at INR50000 (i.e., \(B = \$ 769\) ) as per the Indian Wildlife Protection Act (MOEF 2013 ). Firstly, with the given set of parameters in Table 1 , including the reported estimate of the current tiger population in the BRT Reserve (i.e. \(X_{0} = 35\) ) and the growth and poaching parameters, we use the fundamental equation of renewable resources to derive the numerical steady-state values of the tiger population \(\left( {X^{*} } \right)\), poaching harvest \(\left( {Y^{*} = F\left( {X^{*} } \right)} \right)\), and existence weight \(\left( \beta \right)\). The numerical derivation uses the programmable non-linear Solver function in Excel and involves assuming initial guesses for the variables and/or parameters being estimated. Our initial guess for \(X^{ *}\) is set at the observed population of \(X_{0} = 35\), and we consider a wide range of initial existence weights of \(\beta\) between 150 and 900 as listed in Table 1 . Based on the policy scenarios we will discuss in Sect. 3.1 , Solver yields estimates of \(X^{*}\), \(Y^{*} = F\left( {X^{*} } \right)\), and \(\beta\). Under this revealed preference estimation technique, we use the baseline policy scenario representing the current BRT situation to numerically back out the unique existence value parameter \(\left( \beta \right)\). This existence value weight \(\beta\) is further applied to the alternative policy scenarios to simulate outcomes that might result under regime shifts. The numerical exercise will reveal whether the system is already in a steady-state by checking if the observed \(X_{0} = X^{*}\). Secondly, we will use the iterative map \(\left[ {X_{t + 1} = X_{t} + F\left( {X_{t} } \right) - Y_{t} } \right]\) to simulate the evolution of \(X\left( t \right)\) and examine if its approach path converges to \(X^{ *}\) if the initial stock is not yet at the steady-state, i.e. if \(X_{0} \ne X^{*}\).
We will examine the approach paths that represent tiger population dynamics under the various policy scenarios. 3.1 Tigers Under Different Policy Scenarios 3.1.1 Secured Property Rights and Poaching Penalty The first policy scenario that we consider is one where the resident Soligas tribe has secure property rights to the tiger resource, faces penalty imposition for poaching, collectively manages the resource in a way that is not myopic, and includes the existence value in the optimization problem. This scenario is designed to represent the current situation in the BRT. Equation ( 11 ) can be numerically solved to derive steady-state values \(\left( {X^{*} ,Y^{*} } \right)\) for the parameters in Table 1 . \(B = \$ 769\) and \(\theta = - 5\) are the penalty function parameters, where \(B\) is the penalty amount and \(\theta < 0\) yields a positive probability, i.e. \(e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} \ge 0\). We accordingly derive \(X^{*} = 56.557\) and \(Y^{*} = F\left( {X^{*} } \right) = 0.968\). We use Solver at the same time to back out an existence value weight of \(\beta = 167.50\), which yields a steady-state existence value of \(\beta \ln \left( {X^{*} } \right) = 675.92\). Footnote 6 This numerical estimate of \(\beta\), which is backed out using a calibration that represents the current situation in the BRT, is essentially a revealed preference parameter that can be used to estimate the tiger’s spiritual ecosystem service value from the perspective of the Soligas tribe manager. Using this perspective, the \(\beta\) value is unique to the Soligas, given the current situation in the BRT. Thus, \(\beta = 167.50\) would be applicable to any policy scenario, and as such would be considered a given value that represents the Soligas preferences, regardless of policy regime shifts.
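As an independent check on the Solver estimate, one can root-find the fundamental equation (9) directly at \(Y = F\left( X \right)\), taking the derivatives of \(\pi \left( \cdot \right)\) and \(V\left( \cdot \right)\) numerically. The sketch below is our implementation, using the Table 1 parameters and the backed-out \(\beta = 167.50\):

```python
# Checking the Scenario 3.1.1 steady state by solving Eq. (9) at Y = F(X).
# Our independent sketch; parameters from Table 1, beta backed out in the text.
import math
from scipy.optimize import brentq

p, c, r, K, delta = 36.0, 216.0, 0.05, 86.0, 0.08
B, theta, beta = 769.0, -5.0, 167.5

def F(X):  # logistic growth
    return r * X * (1 - X / K)

def profit(X, Y):  # private net benefit with expected poaching penalty, Eq. (10)
    prob = (Y / X) * math.exp(theta * (1 - Y / X))
    return p * Y - (c / 2) * Y**2 / X - prob * B

def V(X):  # existence value
    return beta * math.log(X)

def phi(X, h=1e-6):
    """(pi_X + V_X)/pi_Y + F_X - delta, Eq. (9), via central differences."""
    Y = F(X)
    pi_X = (profit(X + h, Y) - profit(X - h, Y)) / (2 * h)
    pi_Y = (profit(X, Y + h) - profit(X, Y - h)) / (2 * h)
    V_X = (V(X + h) - V(X - h)) / (2 * h)
    F_X = (F(X + h) - F(X - h)) / (2 * h)
    return (pi_X + V_X) / pi_Y + F_X - delta

X_star = brentq(phi, 5.0, 85.0)
Y_star = F(X_star)
print(X_star, Y_star)
```

The root lands in the mid-50s with \(Y^{*} = F\left( {X^{*} } \right)\) close to one tiger per year, consistent with the reported \(X^{*} = 56.557\) and \(Y^{*} = 0.968\); small differences reflect rounding of \(\beta\) and the finite-difference approximation.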
Having backed out the \(\beta\) value, we will now examine if the approach path of \(X\left( t \right)\) converges to the equilibrium value, given that the initial resource stock is not at the steady-state. In order to simulate our infinite-horizon problem, we will use the concept of a final function (Conrad 2010 ). A correctly specified final function allows one to approximate the approach to a steady-state in an infinite-horizon problem by converting it to a finite-horizon problem. Let us revisit the tribe manager’s optimization problem listed in Eq. ( 10 ) and rewrite it as Eq. ( 14 ). $$\begin{aligned} \mathop {\text{maximize}}\limits_{{\left\{ {Y_{t} } \right\}_{0}^{T - 1} }} {\mathbb{E}}\left[ {W^{B} } \right] & = \mathop \sum \limits_{t = 0}^{T - 1} \rho^{t} \left[ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} - \left( {Y_{t} /X_{t} } \right)e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} \cdot B + \beta \ln \left( {X_{t} } \right)} \right] \\ & \quad + \frac{{\rho^{T - 1} }}{\delta }\left\{ {prX_{T} \left( {1 - X_{T} /K} \right) - \frac{c}{{2X_{T} }}\left[ {rX_{T} \left( {1 - X_{T} /K} \right)} \right]^{2} - B \cdot r\left( {1 - \frac{{X_{T} }}{K}} \right)e^{{\left( {\theta \left( {1 - r\left( {1 - X_{T} /K} \right)} \right)} \right)}} + \beta \ln \left( {X_{T} } \right)} \right\} \\ \end{aligned}$$ (14) $${\text{subject to}}\;X_{t + 1} = X_{t} + rX_{t} \left( {1 - X_{t} /K} \right) - Y_{t} \;{\text{and}}\;X_{0} > 0\;{\text{given}}$$ The objective function \(W^{B}\) is the sum of the present value of net benefits over time \(t = 0, 1, 2, \ldots ,T - 1\) and some final function, \(\psi \left( {X_{T} } \right)\), in Eq. ( 15 ): $$\psi \left( {X_{T} } \right) = \frac{{\rho^{T - 1} }}{\delta }\left\{ {prX_{T} \left( {1 - X_{T} /K} \right) - \frac{c}{{2X_{T} }}\left[ {rX_{T} \left( {1 - X_{T} /K} \right)} \right]^{2} - B \cdot r\left( {1 - \frac{{X_{T} }}{K}} \right)e^{{\left( {\theta \left( {1 - r\left( {1 - X_{T} /K} \right)} \right)} \right)}} + \beta \ln \left( {X_{T} } \right)} \right\}$$ (15) The final function can be thought of as the value of maintaining \(X_{T}\) for \(t = T, T + 1, \ldots ,\infty\) by harvesting \(Y_{t} = rX_{T} \left( {1 - X_{T} /K} \right)\) for the rest of time in steady-state. With \(Y_{t} = rX_{T} \left( {1 - X_{T} /K} \right)\) being a constant, it can be factored out of the infinite series with the present value converging to \(\psi \left( {X_{T} } \right)\). Using the parameters in Table 1 , assuming an initial value of \(X_{0} = 35\), \(T = 60\) years, and assigning initial value guesses for \(\left\{ {Y_{t} } \right\}_{t = 0}^{t = 59} = 0.10\), we numerically estimate the approach paths of \(X\left( t \right)\) and \(Y\left( t \right)\) over the horizon of \(T = 60\) years. Using Solver, we find convergence to a maximum value for the objective function, \({\mathbb{E}}\left[ {W^{B} } \right] = 8691.1\). Moreover, we note that \(X_{t} = 56.5\) as \(t \to 60\), which implies that the approach path of the resource stock indeed approaches the steady-state value of \(X^{*}\) as derived from Eq. ( 11 ). This approach path of \(X_{t}\) is shown in Fig. 2 .
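The capitalization logic behind the final function is easy to verify numerically: for a constant per-period benefit \(a\), \(\mathop \sum \nolimits_{t = T}^{\infty } \rho^{t} a = \rho^{T - 1} a/\delta\), since \(1 - \rho = \delta \rho\). A quick check of Eq. ( 15 ) against a long truncated sum (our sketch, Table 1 parameters):

```python
# Verifying Eq. (15): psi(X_T) equals the discounted infinite tail of the
# constant steady-state net benefit from t = T onward. Our sketch, Table 1 values.
import math

p, c, r, K, delta = 36.0, 216.0, 0.05, 86.0, 0.08
B, theta, beta = 769.0, -5.0, 167.5
rho = 1 / (1 + delta)
T = 60

def steady_net_benefit(X):
    Y = r * X * (1 - X / K)  # harvest held at Y = F(X_T) forever
    return (p * Y - (c / 2) * Y**2 / X
            - (Y / X) * math.exp(theta * (1 - Y / X)) * B + beta * math.log(X))

def psi(X_T):  # final function, Eq. (15)
    return rho ** (T - 1) / delta * steady_net_benefit(X_T)

X_T = 56.5
tail = sum(rho**t * steady_net_benefit(X_T) for t in range(T, T + 3000))
print(psi(X_T), tail)  # agree up to a negligible truncation error
```

Because the tail terms decay geometrically, truncating at 3000 periods leaves an error far below floating-point noise.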
This policy result would correspond to the implicit equation \(\phi_{2} \left( {X, Y;\delta } \right)\) or \(\phi_{3} \left( {X, Y;\delta } \right)\) in Fig. 1 that yielded steady-state equilibria over time. The dotted line in Fig. 2 depicts the approach path of \(X_{t}\) without the existence value; in this scenario the existence value, \(\beta \ln \left( {X_{t} } \right)\), drops out of the optimization framework according to Eq. ( 11.1 ). The dotted-line approach path allows one to examine the effect of excluding existence value from the tribe manager’s decision framework. Fig. 2 Approach paths of \(X\left( t \right)\) with and without existence value in the presence of secure property rights for Soligas and poaching penalties Full size image 3.1.2 Exclusion Conservation Policy with a Poaching Penalty In the second policy scenario, we consider the case where the tribe’s property rights to the sacred tiger forest are taken away by authorities in the name of conservation, and poaching continues to attract a penalty. In this case, we make the assumption that insecure property rights lead to a myopic harvesting of the tiger (Skonhoft and Solstad 1996 ). The myopic harvester would accordingly treat the tiger stock \(X_{t}\) as a parameter in each \(t\). Thereby, the harvester would maximize net benefits myopically in each \(t\) without concern for how the current period’s harvest affects the resource in the following period, i.e. \(X_{t + 1}\). This would imply that \(\rho \lambda_{t + 1} = 0\) in the Lagrangean used for Eq. ( 11 ). The optimization problem is written as \({\mathbb{E}}\left[ {W^{B} } \right]\) in Eq. ( 16 ) and the corresponding first-order necessary condition, Eq. ( 17 ), can be numerically solved to yield \(Y_{t}\) in each \(t\) by using the parameters from Table 1 .
$$\mathop {\text{maximize}}\limits_{{Y_{t} }} \;{\mathbb{E}}\left[ {W^{B} } \right] = \left\{ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} - \left( {Y_{t} /X_{t} } \right)e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} \cdot B + \beta \ln \left( {X_{t} } \right)} \right\}$$ (16) $$\frac{{\partial {\mathbb{E}}\left[ {W^{B} } \right]}}{{\partial Y_{t} }} = p - \frac{{cY_{t} }}{{X_{t} }} - B \cdot \frac{{e^{{\left( {\theta \left( {1 - Y_{t} /X_{t} } \right)} \right)}} }}{{X_{t} }}\left( {1 - \theta Y_{t} /X_{t} } \right) \equiv 0$$ (17) This first-order necessary condition can be solved numerically using Solver to equate it to zero by finding the harvest \(Y_{t}\), with \(X_{t}\) treated as a parameter. One would note that the existence value \(\beta \ln \left( {X_{t} } \right)\) does not feature in this myopic harvester’s decision as per Eq. ( 17 ). Once again, the initial resource stock is \(X_{0} = 35\) tigers. Once Solver yields the initial harvest, \(Y_{0}\), the iterative map \(\left[ {X_{t + 1} = X_{t} + F\left( {X_{t} } \right) - Y_{t} } \right]\) is called upon to derive \(X_{1}\), and the exercise is repeated for \(t = 1, 2, \ldots , T\). In the approach path shown in Fig. 3 , we observe that the resource stock \(X_{t} \to 0\) and tiger extinction in the BRT occurs in 49 years. Extinction occurs in consonance with the implicit equation, \(\phi_{1} \left( {X, Y;\delta } \right) = 0\), that yielded localized tiger extinction in the phase diagram of Fig. 1 . This implies that extinction is optimal when poaching is driven up by the harvester’s myopic behavior with a high discount rate \(\delta\) that results in \(X\) growing slower than the offtake. Fig.
3 Approach path of \(X\left( t \right)\) without secure property rights to sacred tigers and penalty imposition Full size image 3.1.3 Secured Property Rights with no Poaching Penalty The third policy scenario that we consider is one where the resident Soligas tribe has secure property rights to the tiger resource, the tribe collectively manages the resource in a non-myopic way as under Scenario 3.1.1, and the existence value is included in the optimization problem. The departure from Scenario 3.1.1 is the absence of a poaching penalty. The parameter values in Table 1 are substituted into Eq. ( 13 ) to numerically solve for the steady-state values of resource stock \(\left( {X^{*} } \right)\) and harvest \(\left( {Y^{*} } \right)\) by using Solver. Under Scenario 3.1.1, we backed out the value of \(\beta\) as \(167.50\); this weight is the preference parameter of the Soligas tribe as to the tiger’s existence. Using Solver, the steady-state equilibrium values we derive are \(X^{*} = 56.003\) and \(Y^{*} = F\left( {X^{*} } \right) = 0.976\), with \(\beta\) set at \(167.50\). This situation would correspond to the implicit equation \(\phi_{2} \left( {X, Y;\delta } \right)\) or \(\phi_{3} \left( {X, Y;\delta } \right)\) in Fig. 1 that yielded steady-state equilibria over time. The existence weight of \(\beta = 167.50\) yields a steady-state existence value of \(\beta \ln \left( {X^{*} } \right) = 674.27\); this value is not meaningfully different from the 675.92 derived under Scenario 3.1.1. Next, we examine the resource’s approach path when \(X_{0} \ne X^{*}\) by using the final function method described earlier. In order to do this, we revisit the tribe manager’s optimization problem listed in Eq. ( 12 ) and rewrite it as Eq. ( 18 ).
$$\begin{aligned} \mathop {\text{maximize}}\limits_{{\left\{ {Y_{t} } \right\}_{0}^{T - 1} }} W^{f} & = \mathop \sum \limits_{t = 0}^{T - 1} \rho^{t} \left[ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} + \beta \ln \left( {X_{t} } \right)} \right] \\ & \quad + \rho^{T - 1} \left\{ {prX_{T} \left( {1 - X_{T} /K} \right) - \frac{c}{{2X_{T} }}\left[ {rX_{T} \left( {1 - X_{T} /K} \right)} \right]^{2} + \beta \ln \left( {X_{T} } \right)} \right\}/\delta \\ \end{aligned}$$ (18) $${\text{subject to}}\;X_{t + 1} = X_{t} + rX_{t} \left( {1 - X_{t} /K} \right) - Y_{t} \;{\text{and}}\;X_{0} > 0$$ The objective function \(W^{f}\) is the sum of the present value of net benefits over time \(t = 0, 1, 2, \ldots ,T - 1\) and a final function, Eq. ( 19 ); i.e., the value of maintaining \(X_{T}\) for \(t = T, T + 1, \ldots ,\infty\) by harvesting \(Y_{t} = rX_{T} \left( {1 - X_{T} /K} \right)\) for the rest of time in steady-state. $$\psi \left( {X_{T} } \right) = \frac{{\rho^{T - 1} }}{\delta }\left\{ {prX_{T} \left( {1 - X_{T} /K} \right) - \frac{c}{{2X_{T} }}\left[ {rX_{T} \left( {1 - X_{T} /K} \right)} \right]^{2} + \beta \ln \left( {X_{T} } \right)} \right\} .$$ (19) Using the parameters in Table 1 , assuming an initial value of \(X_{0} = 35\), \(T = 70\) years, and assigning initial value guesses for \(\left\{ {Y_{t} } \right\}_{t = 0}^{t = 69} = 0.10\), we numerically estimate the approach path of \(X\left( t \right)\) over the horizon of \(T = 70\) years. Using Solver and setting \(\beta = 167.50\), we find convergence to a maximum value for the objective function, \(W^{f} = 8691.44\). Moreover, we note that \(X_{t} = 56.5\) as \(t \to 70\), which implies that the approach path of the resource stock indeed approaches the steady-state value of \(X^{*}\) that was derived using Eq. ( 13 ). The approach path of \(X_{t}\) is shown in Fig. 4 . The dotted line in Fig. 4 depicts the approach path of \(X_{t}\) without the existence value as derived in Eq. ( 13.1 ). Fig.
4 Approach paths of \(X\left( t \right)\) with and without existence value in the presence of secure property rights for Soligas to sacred tigers Full size image In the approach paths of Figs. 2 and 4 —under policy scenarios 3.1.1 and 3.1.3—we note that the tiger populations converge to the steady-state populations of \(X^{ *}\) with the backed-out value of \(\beta\). More importantly, we note that the initial value of \(X_{0} = 35\) is entirely exogenous to our simulations; this implies that even with an exogenous initial value in our model, we do observe convergence to the numerically-derived steady-state with the backed-out \(\beta\) weight. This convergence to steady-state using the final function approach provides a way to check the reliability of the numerically backed-out existence weights and the steady-state populations. 3.1.4 Exclusion Conservation Policy with No Poaching Penalty Lastly, we will consider an exclusionary policy under which the Soligas do not have property rights in the BRT, and the harvesting of tigers is not associated with a poaching penalty. In this scenario, the tribe’s property rights to the sacred tiger forest are taken away by conservation authorities. Similar to policy scenario 3.1.2, the harvester would maximize net benefits myopically in each \(t\) without concern for how the current period’s harvest affects the resource in the following period, except that there is no poaching penalty. We would accordingly have \(\rho \lambda_{t + 1} = 0\) in the Lagrangean used for Eq. ( 13 ) and the harvester chooses \(Y_{t}\) to maximize net benefits, \(W^{np}\), in each \(t\) as per Eq. ( 20 ). $$\mathop {\text{maximize}}\limits_{{Y_{t} }} W^{np} = \left\{ {pY_{t} - \left( {c/2} \right)Y_{t}^{2} /X_{t} + \beta \ln \left( {X_{t} } \right)} \right\} .$$ (20) The first-order necessary condition yields the optimal harvest as \(Y_{t} = \left( {p/c} \right)X_{t}\) in each \(t\).
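Because this harvest rule is closed-form, the open-access collapse path can be simulated directly; the sketch below is ours, and the exact extinction year depends on the threshold used to declare the stock locally extinct:

```python
# Simulating Scenario 3.1.4: open-access myopic harvest Y_t = (p/c) X_t.
# Our sketch with Table 1 parameters and X_0 = 35 tigers.
p, c, r, K = 36.0, 216.0, 0.05, 86.0
X = 35.0
path = [X]
for t in range(60):
    Y = (p / c) * X                        # myopic rule: one-sixth of the stock
    X = max(X + r * X * (1 - X / K) - Y, 0.0)
    path.append(X)
print(path[38], path[60])  # stock well below one tiger by the late 30s
```

With \(p/c = 1/6\) far exceeding the maximum growth rate \(r = 0.05\), the stock declines geometrically from the first period onward.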
Note the similarity with scenario 3.1.2, under which the existence value \(\beta \ln \left( {X_{t} } \right)\) does not feature in the myopic harvester’s decision. We use the parameters from Table 1 to derive the approach path of \(X\left( t \right)\) in this policy scenario, i.e., without secure property rights to the sacred tiger resource. This approach path is shown in Fig. 5 with a starting value of \(X_{0} = 35\) tigers. We observe that the resource stock \(X_{t} \to 0\) and localized tiger extinction in the BRT occurs as \(t \to 38\) years. The absence of a poaching penalty in this policy scenario hastens extinction by 11 years compared to what was observed in the approach path in Fig. 3 . Likewise, extinction occurs in consonance with the implicit equation, \(\phi_{1} \left( {X, Y;\delta } \right) = 0\), in Fig. 1 . Fig. 5 Approach path of \(X\left( t \right)\) without secure property rights to sacred tiger Full size image 3.2 Total and Marginal Existence Values of the Bengal Tiger The existence values we estimate using numerical simulations can be interpreted and used for policy guidance in similar ways to those estimated using the contingent valuation (CV) method or discrete choice experiments. Although these two non-market valuation methods can estimate non-use values, they cannot be used to estimate existence values that might include spiritual values derived by indigenous populations (Adamowicz et al. 1998 ). Because there are no other estimates of wildlife existence values related to spirituality that we can compare our estimates to, we discuss in this section how the total and marginal existence values that can be estimated using our model have similar interpretations to total and marginal willingness-to-pay (WTP) estimates in the wildlife valuation literature. We also generate a marginal existence value function for the Bengal tiger in the BRT reserve.
Marginal non-use value functions are necessary for guiding conservation policy but remain unknown for any charismatic wildlife species (Eiswerth and Kooten 2009 ). Most wildlife contingent valuation studies generate total WTP estimates, as opposed to measuring the marginal benefits of increasing population numbers (Bulte and van Kooten 1999b ). The existence value we estimate in Policy Scenario 1, \(\beta \ln \left( {{\text{X}}^{ *} } \right) = {\text{US}}\$ 675.92\), is comparable to a total willingness to pay estimate—from a CV survey—for a hypothetical conservation program that helps a tiger population grow to a steady state of \(X^{*} \cong 57\) tigers, relative to a baseline where the tiger is expected to go locally extinct. This is comparable, for instance, to the total WTP estimated by the CV method in Bandara and Tisdell ( 2005 ) for a hypothetical elephant conservation program in Sri Lanka that changes the population abundance relative to its current status. The main difference in what each study measures is that they estimate the TEV (use and non-use values) while we estimate an existence value (a non-use value) that includes the values of spiritual ecosystem services derived by the Soligas tribe from the tiger. Fewer studies measure the marginal benefits of increasing wildlife populations. Loomis and Larson ( 1994 ) used a CV survey and found that whale watching visitors are willing to pay $25.0 and $29.7 for a 50% and a 100% increase in the gray whale population, respectively. In contrast, households were willing to pay $16.2 and $18.1 for a 50% and a 100% increase in the gray whale population, respectively. The authors found evidence of a diminishing marginal WTP for these two levels of population increase, which is consistent with consumer theory. In our case, marginal values of increases in the tiger population can be calculated using the backed-out existence parameter value to compute an existence value for two population levels.
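These incremental calculations reduce to differences of \(V\left( X \right) = \beta \ln \left( X \right)\). The sketch below is ours and uses the rounded \(\beta = 167.50\), so the increments come out near $46 and $81; small differences from the in-text dollar figures would reflect rounding of \(\beta\):

```python
# Incremental and marginal existence values from V(X) = beta * ln(X).
# Our sketch using the rounded backed-out weight beta = 167.50.
import math

beta = 167.50

def V(X):
    return beta * math.log(X)

inc_10 = V(42) - V(32)      # existence-value gain from 32 -> 42 tigers
inc_20 = V(52) - V(32)      # existence-value gain from 32 -> 52 tigers
marginal_10 = inc_10 / 10   # per-tiger gain over the smaller increment
marginal_20 = inc_20 / 20   # per-tiger gain over the larger increment
print(inc_10, inc_20, marginal_10, marginal_20)
```

The per-tiger increment falls as the population increase grows, the diminishing pattern discussed above; the marginal existence value curve plotted in Fig. 6 is simply \(V^{\prime}\left( X \right) = \beta /X\).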
For example, in Policy Scenario 1, we find that the tribe’s existence value increases by $47.8 (from $579.9 to $627.7) if the population increases by 31% from its baseline value (from 32 to 42 tigers). Similarly, the tribe’s existence value increases by $82.3 (from $579.9 to $662.2) if the population increases by 62% (from 32 to 52 tigers). Consistent with the test of diminishing marginal WTP in Loomis and Larson ( 1994 ), one can compute the marginal increases in existence values. We find that these are equal to $4.5/tiger for a 31% increase (from 32 to 42 tigers) and diminish to $4.1/tiger for a 62% increase (from 32 to 52 tigers). As with the earlier example, the comparison with the whale watching study is meant to illustrate how our existence value estimates for the tiger in the BRT reserve resonate with the valuation literature in terms of the interpretation of the estimates, with the limitation that the extant literature estimates TEV (use and non-use values) or the relative importance of non-use to use values (Bandara and Tisdell 2003 ), whereas we obtain estimates for a specific type of non-use value—those derived from the existence of the tiger in the BRT reserve for the tribe that venerates it. More generally, we can generate a marginal existence value curve for the tiger in the BRT reserve (Fig. 6 ). This marginal value function is the equivalent of a marginal WTP function from a CV study. Eiswerth and Kooten ( 2009 ) argue that such a function is more useful for policy analysis than the prevailing average and total existence value estimates, especially for species that are endangered, with low and decreasing populations. Fig. 6 Marginal existence value of the Bengal tiger in BRT (based on PS1) Full size image 4 Discussion and Conclusion The simulation of the various conservation management scenarios in our model yields several key policy results. 
The policy scenario of secure property rights in the presence of a poaching penalty represents the actual BRT situation in our model simulation. The Soligas tribe have secured their property rights to the reserve in accordance with the Forest Rights Act and poaching within a protected area is penalized as per the Wildlife Protection Act. A penalty of \(\$ B\) is imposed upon the harvester if she is caught by conservation authorities—with some positive probability—harvesting the resource illegally. In Fig. 2 , with secure property rights for the Soligas and \(B = \$ 769\) , our model yields convergence of the stock to its steady-state equilibrium value of \(X^{*} = 56.55\) from the initial value of \(X_{0} = 35\) . This policy scenario simulation backed out an existence value weight of \(\beta = 167.50\) , which yielded an existence value of \(\beta \ln \left( {X^{*} } \right) = {\text{US}}\;\$ 675.92\) in steady-state equilibrium. The \(\beta\) value derived in this scenario captures the tribe’s perception of the spiritual ecosystem service of tigers in the BRT using a revealed preference approach. Treating the actual situation in the BRT reserve as a baseline, we examined the potential outcomes under alternative policy scenarios. In the policy scenario depicted in Fig. 4 , where the Soligas have secure property rights in the BRT tiger reserve but no poaching penalty exists, we observed that the stock, \(X\left( t \right)\) , converges to its steady-state equilibrium value of \(X^{*} = 56\) from the initial value of \(X_{0} = 35\) . The corresponding existence value that we derived numerically was equal to \(US\$ 674.27\) in steady-state equilibrium. For this policy scenario, the existence value weight of \(\beta = 167.50\) was carried over from the baseline, since the tribe’s perception of the spiritual ecosystem service of tigers would presumably remain unchanged, regardless of the policy scenario. 
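As a quick consistency check, the reported steady-state existence values follow directly from evaluating β ln(X*) at the two equilibrium stocks (a sketch; minor rounding differences from the reported simulation output are expected):

```python
import math

BETA = 167.50  # existence value weight backed out in the baseline scenario

# Baseline: secure property rights plus a poaching penalty (X* = 56.55).
v_baseline = BETA * math.log(56.55)   # close to the reported US$675.92

# Secure property rights but no poaching penalty (X* = 56).
v_no_penalty = BETA * math.log(56.0)  # close to the reported US$674.27

assert abs(v_baseline - 675.92) < 0.5
assert abs(v_no_penalty - 674.27) < 0.5
```

The check confirms that both reported equilibrium existence values are internally consistent with the single backed-out weight β = 167.50.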
The equilibrium value of \(X^{*} = 56\) and the existence value of \(US\$ 674.27\) are slightly lower than those under the baseline scenario; this result is contingent on the \(\beta\) value that was imposed from the baseline. The expected payment of a penalty raises the stakes of harvesting the protected resource illegally, and this is demonstrated in the harvester reducing offtake in steady-state perpetuity—as listed in Table 2 . Table 2 Summary of results Full size table The public benefits of tiger spirituality for the Soligas are reinforced via a penalty policy for illegal harvest. However, one must keep in mind that the monitoring and enforcement involved in a penalty policy generate costs for conservation authorities. If achieving a sustainable tiger population is the principal aim of conservation policy, securing a resident tribe’s property rights and consequently protecting the existence value it derives from the tiger appears to work very effectively without the need for imposing a penalty policy. Steady-state stock values in these cases correspond to the phase diagrams associated with \(\phi_{2} \left( {X, Y;\delta } \right)\) or \(\phi_{3} \left( {X, Y;\delta } \right)\) in Fig. 1 . Of course, granting and securing property rights is costly as well and depends on unpredictable court decisions and their enforcement. Decisions that lead to taking away tribal property rights or lack of enforcement of decisions that safeguard their property rights would likely result in localized tiger extinction. In Fig. 5 , we observed that when property rights are taken away from the Soligas, extinction is rendered as the outcome of our model within 38 years: a situation of insecure property rights leads the Soligas tribe to be myopic and neglect existence value that might be derived from the tiger. In Fig. 3 , extinction occurs in 49 years when there is a poaching penalty present. 
In this case, one advantage of having a penalty policy is that localized tiger extinction takes longer to occur than in the no penalty scenario (49 vs. 38 years; Figs. 3 and 5 and Table 2 ). Inclusionary policies without penalty provide policymakers the advantage of securing a higher stock and safeguarding the existence values it generates while avoiding the burden of monitoring and enforcement associated with harvesting penalties. The incentive to harvest the tiger resource for private gain would potentially increase with a higher poaching price, \(p\) . If the ratio of price to harvest costs were to increase such that \(\left( {p/c} \right) \gg 1\) , then resource extinction would likely be rendered optimal since \(p\) would be much higher than \(c\) for the last resource unit (Conrad 2010 ). This situation would correspond to the implicit equation \(\phi_{1} \left( {X, Y;\delta } \right)\) that yielded localized tiger extinction in the phase diagram of Fig. 1 . One can infer that if poaching prices were to become higher over time with reductions in tiger population, then it would be especially prudent to secure the Soligas’ property rights so that myopic harvesting is avoided, existence values are realized, and the tiger population is conserved. In addition to providing a framework to test the statement that “tribal peoples are the best conservationists and guardians of the natural world”, the novelty of this paper is in the non-market valuation of an important, but yet-to-be valued aspect of species of conservation concern: the non-use, existence value of wildlife to the tribes that co-exist with them and venerate them. So far, non-use values can only be estimated using contingent valuation and discrete choice experiment surveys. Survey-based non-market valuation approaches to estimate existence values derived by indigenous populations might not be appropriate (Adamowicz et al. 1998 ). 
Moreover, such surveys typically generate average and total WTP estimates of species preservation, without the policy-relevant information on marginal WTP and how marginal WTP changes as a function of population abundance (Eiswerth and Kooten 2009 ). In the absence of functions that can be used to model tiger existence values, Zabel et al. ( 2011 ) identify ranges of such values for society. The bioeconomic model developed in this paper attempts to fill this gap by using a nonlinear specification of an existence value function and estimating existence values for a tribe that derives spiritual values from the tiger, as opposed to identifying a range of such values for society at large that typically capture moral satisfaction, “warm glow,” and ethical dimensions (Alpizar et al. 2003 ). The simulations presented here have incorporated and backed out existence values and ascertained likely approach paths for tigers in the BRT based on the Soligas’ spiritual traditions. Our model is general enough to be applied to examine the resource dynamics of several other endangered species that provide spiritual ecosystem services—both fauna and flora. Existence values in general, and those derived from spiritual services in particular, stretch to many other species revered by native tribes and forest dwellers around the world. Securing tribal property rights as a conservation policy tool is costly, and government bodies face limited financial and human resources. Having estimates of the existence values placed by tribes in different parks through a framework like the one presented here can offer policymakers information to obtain more complete estimates of non-use values needed for ex-ante and ex-post natural resource damage assessments and project appraisals (Carson et al. 1992 ). Such value estimates can also help policymakers efficiently allocate scarce conservation resources to parks with the highest potential for desired conservation outcomes. 
Of particular relevance to policy analysis is knowing how marginal existence values might change as a function of wildlife population levels. For instance, existence values and marginal existence value functions like the ones estimated here can be used for conservation reserve site planning using tools such as the InVEST model (e.g., Polasky et al. 2011 ) or in conservation portfolio design models (e.g., Mallory and Ando 2014 ). Existence value estimates can help prioritize where policy efforts should be focused to maximize conservation return on investment, in ways that do not ignore tribal traditions and recognize the public good values they generate. Change history 29 January 2021 A Correction to this paper has been published. Notes We use the term ‘existence value’ instead of ‘spiritual value’ because of the impossibility of separating the two. According to the Total Economic Value typology of values (Davidson 2013 ), existence value includes values derived from self-transcendence, cultural identity, heritage values and spiritual services. Because these values might not be separable, we use the term existence value more generally, even though the tribe worships the tiger and spiritual values are likely to be a major component of the existence values we estimate. A tribe’s standpoint can be represented by a tribe manager who, in practice, can be a tribe’s elder or other authority, depending on a tribe’s socio-political structure. The expression is multiplied by 2 because in the cost function in \(\pi \left( {X_{t} ,Y_{t} } \right)\) we have \(c/2\) . This value of \(c =\) $216 is slightly higher than the cost estimate of \(c =\) $180 for poaching expeditions reported in Bulte and van Kooten ( 1999a ) and Milner-Gulland and Leader-Williams ( 1992 ). The price received by poachers is not to be confused with buyer black market prices, which can reach $15,000–$20,000 (Damania et al. 2003 ). 
Individuals or a society with higher rates of mortality, and thus shorter life expectancy, are likely to exhibit higher rates of time preference. The value of \(\delta =\) 0.08 is in the range estimated for developing countries. This exercise circumvents the arbitrariness of an assumed value of \(\beta\) and the possibility of multiple steady states based on this assumption.
Spirituality isn't usually considered a factor in conservation efforts. But indigenous peoples who worship wildlife may be helping protect endangered species from extinction. The Soligas tribe in the Western Ghats of India reveres the Bengal tiger. Their coexistence in India's Biligiri Rangaswamy Temple Tiger Reserve has helped the tiger population flourish, says Shadi Atallah, a natural resource economist in the Department of Agricultural and Consumer Economics at the University of Illinois. Atallah first learned about the Soligas from a BBC article that discussed how the tiger population doubled from 2010 to 2014, after the tribe obtained property rights to their ancestral land. "The BBC article stated that the local tribe venerates the tiger and that worshiping relationship makes them the best conservationists," Atallah says. "We could not find anything in the conservation economics literature that backs that claim. There was nothing that accounted for spirituality ecosystem service values." He and co-author Adrian Lopes wanted to investigate how the tribe's spiritual beliefs might make them effective conservation stewards. The researchers conducted a case study to assess the spiritual value of the Bengal tiger for the Soligas tribe and show how such values can be harnessed as an economic tool for promoting sustainable wildlife conservation. Atallah and Lopes used bioeconomic modeling to evaluate four different management scenarios: whether or not the Soligas tribe had property rights to the land, and whether or not poaching fines were implemented for illegal harvesting of the tigers. Their results were clear: Tribal property rights were by far the best policy to protect the tigers. "We observed that if you remove the property rights and poaching fines, the species goes to extinction in 49 years. Implementing poaching fines alone delays the extinction by nine years but does not prevent it," Atallah says. 
He suggests the tribe's veneration of the tiger makes them less likely to look for the quick reward of illegal poaching. There is little precedent for including spiritual values in economic models, Atallah notes. "Putting a dollar value on spirituality is controversial," he says. "But by leaving it out of economic calculations, we assume it has a value of zero." Bioeconomic models include biological information such as status and growth rate of a species and economic policies such as property rights and fines. They can also account for the values generated from wildlife ecotourism. But so far, they have not included wildlife spiritual values, Atallah states. "If we can place a value on spiritual ecosystem services the way we do for ecotourism, we would not be under-accounting for those services when governments make policy decisions," he notes. Conservation efforts often consist of establishing protected areas by separating humans and wildlife. Such policies may involve expelling indigenous communities and are controversial on ethical and humanitarian grounds. But Atallah and Lopes' research also provides an economic argument by showing that local tribes are, indeed, the best conservationists. The Indian Forest Rights Act grants indigenous tribes property rights to their ancestral lands; however, the tribes need to provide documentation for their claim to the land, and lack of proof has in some cases led to expulsion. "Our research shows if a government has to decide which policy instrument to use, spending money in courts to secure the property rights of the local tribes is much more effective than spending money on catching and fining poachers," Atallah says. "If you care about the survival of the species, securing the property rights of the tribes that venerate them is the best tool you can have," he concludes.
10.1007/s10640-020-00416-1
Biology
Indigenous mortality following Spanish colonization did not always lead to forest regrowth
Non-uniform tropical forest responses to the 'Columbian Exchange' in the Neotropics and Asia-Pacific, Nature Ecology and Evolution (2021). DOI: 10.1038/s41559-021-01474-4 , www.nature.com/articles/s41559-021-01474-4 Journal information: Nature Ecology & Evolution
http://dx.doi.org/10.1038/s41559-021-01474-4
https://phys.org/news/2021-06-indigenous-mortality-spanish-colonization-forest.html
Abstract It has been suggested that Iberian arrival in the Americas in 1492 and subsequent dramatic depopulation led to forest regrowth that had global impacts on atmospheric CO 2 concentrations and surface temperatures. Despite tropical forests representing the most important terrestrial carbon stock globally, systematic examination of historical afforestation in these habitats in the Neotropics is lacking. Additionally, there has been no assessment of similar depopulation–afforestation dynamics in other parts of the global tropics that were incorporated into the Spanish Empire. Here, we compile and semi-quantitatively analyse pollen records from the regions claimed by the Spanish in the Atlantic and Pacific to provide pan-tropical insights into European colonial impacts on forest dynamics. Our results suggest that periods of afforestation over the past millennium varied across space and time and depended on social, economic and biogeographic contexts. We argue that this reveals the unequal and divergent origins of the Anthropocene as a socio-political and biophysical process, highlighting the need for higher-resolution, targeted analyses to fully elucidate pre-colonial and colonial era human–tropical landscape interactions. Main The term Anthropocene—used to describe a new epoch in which human activity has become the dominant influence on Earth systems—has been vigorously debated in the natural and social sciences 1 , 2 , 3 since its popularization two decades ago 4 . Tropical forests, which cover only 14% of the Earth’s surface 5 but contain 68% of the global living carbon stock 6 and half of the Earth’s biodiversity 7 , are a central feature of this discussion 8 . Indeed, human-driven, habitat-scale reorganization of these systems (a conceivable scenario given contemporary climatic, fire and land use trajectories 9 ) is thought to pose an “existential threat to civilization” 10 . 
The search for the beginnings of the Anthropocene has, in geological circles, centred on the identification of a single golden spike 11 . Attempts to track this critical transition initially focused on the onset of industrial fossil fuel burning in the eighteenth and nineteenth centuries 4 , but today concentrate on the Great Acceleration in the 1960s 11 , 12 . However, there have been growing calls in the social sciences to search for the origins of the Anthropocene as a long-term process that extends back into the pre-industrial era 2 , 13 , 14 , based on the premise that early agricultural processes substantially impacted atmospheric greenhouse gas concentrations, including CO 2 levels 3 . This is particularly important within carbon-rich tropical forests where archaeological and palaeoecological research has revealed evidence for substantial human impacts on ecosystems, species distributions and soils over the past 45,000 years 15 . However, while there is increasing consensus that pre-industrial societies had large impacts on global ecosystems and biodiversity 16 , 17 , the exact scale and nature of anthropogenic alteration, particularly with respect to forest cover and CO 2 concentrations, remains to be elucidated. It has been proposed that contact between the so-called Old World and New World after 1492 ce as part of the expansion of the Spanish and Portuguese empires (termed the Columbian Exchange 18 ) resulted in the radical reorganization of life on Earth without geological precedent 18 , 19 . Not only did Iberian colonizers bring new crops, animals and ways of using the land to the tropics 18 , they also introduced lethal diseases from Eurasia. The ensuing pandemics, alongside starvation and murder, wiped out up to 90% of Indigenous populations in the Americas, with the impact of their lack of immunity compounded by colonial policies focused on urban relocation and enslavement 20 , 21 . 
Earth system scientists have argued that this Great Dying, and the abandonment of traditional land use now known to have been extensive across the Neotropics, was so widespread that it led to dramatic forest regrowth 22 , 23 . The latest estimates suggest that subsequent afforestation captured 7.4 PgC (3.5 ppm CO 2 equivalent) from the atmosphere, resulting in a CO 2 level drop recognizable in ice cores by 1610 ce and driving global cooling seen in the form of the Little Ice Age (LIA) 21 . Although the global impacts of this regional signal have been promoted as a potential golden spike for the Anthropocene and the scale of the Great Dying is historically well documented (noting that specific estimates remain debated), direct evaluation of consequential vegetation change and overhaul of land management in carbon-rich tropical forests following Iberian colonization has been limited. Research of this nature in the Americas has often been locally constrained (for example, ref. 24 ), while larger-scale analyses have focused on the synthesis of charcoal records to reveal population and land use change, including a sustained period of reduced biomass burning after ~500 calibrated years before the present (cal yr bp ) 21 , 25 , 26 . Assessment of ecosystem responses to these demographic and land use drivers has, to date, been qualitative, non-systematic and focused on a small number of datasets 21 . Homogenous, broad-scale ecological transitions in response to depopulation in the Neotropics thus remain more assumed than proven. Although Alfred Crosby, who coined the term Columbian Exchange 18 , discussed impacts at a global scale, recent framings of this phenomenon on tropical ecosystems and their Earth systems feedbacks have been almost entirely limited to the Atlantic sphere 21 . 
This is despite the fact that, following their arrival in the Philippine archipelago in the sixteenth century, the Spanish Empire (including Portuguese-claimed regions incorporated into the Spanish Empire between 1580 and 1640 ce during the short-lived Iberian Union) united Madrid, Mexico City and Manila into the first truly pan-tropical biological, cultural and economic system 27 (Fig. 1 ). Like their Neotropical counterparts, many societies living in Southeast Asia and parts of the Pacific had been part of extensive exchange systems that moved people, crops and ideas across vast areas 28 . Historical records and archaeology also show that the Spanish East Indies (including parts of Taiwan, Indonesia, Palau and Micronesia)—particularly those that were geographically isolated from Eurasia—witnessed large-scale (albeit staggered) disease spread 29 and the introduction of additional novel domesticates 30 between the 1500s and 1700s (contact dates shown in Fig. 1 ). The resultant infection rates, coupled with new forms of settlement organization and land use imposed by colonial states, led to major demographic disruption, with a population decrease estimated at 30–90% depending on pre-Iberian geography and demography (discussed for each region in Supplementary Text 2 ). It is thus plausible that associated shifts in traditional farming and forestry may have resulted in similar afforestation processes and, potentially, Earth systems feedbacks to those hypothesized for the Americas 29 , 31 , 32 . Yet, there has been no regional assessment of how Iberian arrival in the Asia-Pacific influenced ecosystems and Indigenous land use and whether any parallels can be drawn between the Pacific and Atlantic hemispheres 33 . Addressing this gap is important for unravelling the landscape legacy of Iberian colonization on a more global scale and, more broadly, as a starting point for assessing cross-continental, European colonial legacies within the tropics. Fig. 
1: Extent of the Spanish Empire in the tropics. Map showing the maximum extent of the Spanish Empire in the tropics, including major colonial settlements and, for the Americas, regions impacted by major epidemics by 1600 ce . Dates in brackets indicate the approximate timing of colonization. Data on Spanish-controlled regions in the Philippines were from ref. 93 . Data on epidemics were from ref. 94 . Full size image Here, we systematically compile and semi-quantitatively analyse pollen records from tropical regions of the Americas, Southeast Asia and the Pacific that became part of the first truly pan-tropical Empire—the Spanish Empire—between the 1500s and 1700s (Fig. 1 ). This permits direct, broad-scale insights into vegetation changes over the past 2,000 years—a time frame that provides context for understanding the scale of forest response to shifting land use and climate dynamics in the period leading up to and following European colonization. In doing so, we take advantage of Neotoma 34 —a rich, open access palaeoecological database that permits consistent reclassification of previously published data—to yield broad-scale assessments of ecological change in the Neotropics through time. We also compile pollen, phytolith and charcoal data from available palaeoenvironmental records in the Spanish East Indies to determine how tropical forests in the Asia-Pacific region responded to land use change associated with a decrease in the Indigenous population and Iberian colonization. We test the degree to which a uniform, pan-tropical Anthropocene process is visible following European colonization and assess how interplays of physical and human geography may complicate, or even overprint, this signal in ecosystem dynamics. 
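As context for the afforestation estimate cited earlier (7.4 PgC, given as a ~3.5 ppm CO2 equivalent), the conversion can be checked against the commonly used factor of roughly 2.12 PgC per ppm of atmospheric CO2 (the factor is a standard approximation assumed here, not stated in the text):

```python
PGC_PER_PPM = 2.12  # approximate petagrams of carbon per ppm of atmospheric CO2

captured_pgc = 7.4  # estimated carbon uptake from post-1492 afforestation
drop_ppm = captured_pgc / PGC_PER_PPM

# ~3.49 ppm, consistent with the ~3.5 ppm CO2-equivalent drop cited in the text
assert round(drop_ppm, 1) == 3.5
```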
We seek to provide a major advance on existing work and provide a framework for exploring how the concept of the Anthropocene can be more successfully applied as a tool for discerning the longevity, imbalances and variability of human–Earth system impacts over the past 2,000 years, providing more pragmatic perspectives for ongoing policy and conservation 35 . Results Neotropics The distribution of the 28 Neotropical sites included in our analysis of the Spanish Americas shows a spatial bias to the Andes and coastal regions and lacks data for regions known to be populous in the fifteenth century, including territory occupied by the Triple Alliance (or Aztec Empire) (Fig. 2 ). Nevertheless, there is generally at least one record for each of the major cultural sub-regions that have previously been defined within the Neotropics 21 (Fig. 2a ). This doubles the number of records assessed in the most recent attempt to gauge forest response to the Great Dying in the Americas 21 , thereby representing a major advance on previous assessments of post-colonial Neotropical vegetation change. Half of the analysed sites are located in moist tropical forest, five occur within dry tropical forest, three in tropical coniferous forest and six in tropical/montane savanna settings 36 . Given the poorly resolved age–depth models and sampling resolution of many of the datasets used in this analysis (discussed for each record in Supplementary Text 3 ), many afforestation responses after Iberian contact were classified with a degree of uncertainty (see Methods and Fig. 2 ). This factor has not been accounted for in recent attempts to use pollen data to assert early colonial era afforestation in the Spanish Americas 21 . Fig. 2: Locations of Neotropical pollen records. a , b , Locations of the Neotropical pollen records (points) included in this analysis shown relative to major pre-Iberian and Iberian era colonial geopolitical units ( a ) and ecoregions ( b ) 36 . 
In a , the red shading corresponds to the tropical Spanish Americas, whereas the blue shading and font represent selected pre-Iberian cultural zones. In b , any shading not defined in the key represents temperate/xeric biomes. The colour of each point corresponds to the assessed afforestation response of each record before (1000–1500 ce ) and after (1500–1600 ce ) Iberian contact. Record names are shown in black. Ecoregion names are shown in grey. In both panels, the insets show a magnified view of the Andes area highlighted by a dashed box in the main map. Terrestrial ecoregion data in b partially reproduced with permission from WWF. Full size image Eleven of the 28 records show a degree of afforestation between 1500 and 1600 ce (Fig. 2 ), consistent with prevailing theories of forest regrowth following European arrival 21 . This signal is classified with a higher degree of certainty (that is, it is clearly reflected in the generalized additive model (GAM) curvature and in the plant functional grouping (Extended Data Fig. 1)) for two of the 11 sites, which are located within the Los Llanos tropical savanna (Laguna Mozambique 37 ) and Andean valley dry forests (Quilichao basin 38 ) (Figs. 2b and 3 , Extended Data Fig. 1 ). Fifteen sites indicate afforestation in the years preceding Spanish arrival (1000–1500 CE) (Figs. 2 and 3 ), probably linked to the onset of broadly wetter climate conditions over much of the Neotropics during this time period (Supplementary Text 1 ) 39 , 40 . Seven of these records occur within the Andes, including all five sites located in or immediately proximal to dry valley forest ecoregions (<2,000 m above sea level). This spatial response bias likely reflects the cooler, wetter conditions associated with the LIA in the Andes, coupled with the sensitivity of seasonally dry and Neotropical montane forests to changing climate drivers 41 . 
There is no clear spatial or cultural relationship among the remaining eight records that reflect pre-Iberian afforestation. However, five of these sites occur within biomes other than moist tropical forest (four within tropical savanna and scrubland and one in coniferous tropical forest 36 ). This implies heightened climatic sensitivity of non-rainforest biomes that lie closer to precipitation thresholds and/or that changes in the availability of resources under a changing climate regime within these habitat types encouraged social restructuring. Two of the 28 records indicate forest opening between 1500 and 1600 ce : Lake Caranã 42 (a long-cultivated tropical forest site in the Amazon) and Cenote San Jose Chulchaca 43 (a site occurring on the boundary of the Maya lowlands within dry tropical forest). Fig. 3: Neotropical non-arboreal to arboreal pollen ratios. Non-arboreal to arboreal (NAP:AP) pollen ratios (point data) overlain with GAMs (shading) over the past 2,000 years for 28 pollen records assessed from the Spanish Americas. The data are grouped according to broad geopolitical zones (Fig. 2a ) and the shading of the GAMs corresponds to the contemporary biome in which each record currently occurs (Fig. 2b ) 36 . The orange, blue and red horizontal bars represent the timings of the MWP 75 , LIA 75 and Great Dying 21 , respectively. A composite biomass burning curve for the Americas, reproduced from ref. 26 and expressed as 150-year, LOWESS-smoothed, z scores of the transformed charcoal influx, is included (greyscale curve; top left) to provide context for the region-wide shift in the fire regime over the past 2,000 years. The data used to create the remaining plots were from refs. 37 , 38 , 42 , 43 , 71 , 74 , 95 , 96 , 97 , 98 , 99 , 100 , 101 , 102 , 103 , 104 , 105 , 106 , 107 , 108 , 109 , 110 , 111 , 112 , 113 , 114 , 115 , 116 . ASL, above sea level. 
Full size image We also explored changes in vegetation in these records over the past 150 years to provide some comparison of pre-industrial and industrial era environmental changes, although these results should be approached cautiously given the lack of sampling and age–depth resolution in the younger portion of several of the records included in this analysis (and hence lack of signal detection in the GAM (Fig. 3 )). There appears to be a clear signal of deforestation in sites from the Caribbean (2/3), Atlantic Forests (2/2), Cerrado (1/1), Los Llanos (2/2), lowland Andes (2/5) and Amazonian rainforest (1/2) after ~1850 ce (Fig. 3 ). Spanish East Indies With the exception of the Marshall Islands, our dataset includes at least one island from each of the major tropical archipelagos directly impacted by Iberian imperialism in the Asia-Pacific region (Fig. 4 ). There is, however, a scarcity of data from the Philippines—the centre of (and largest archipelago within) what was known as the Spanish East Indies—which is only represented by a single charcoal record and a single pollen record for the colonial period (both from the island of Luzon). Importantly, historical records suggest that all of the assessed regions in the Spanish East Indies experienced a degree of Indigenous population mortality following European contact between the 1500s and 1700s (the timing and extent of which is reviewed in Supplementary Text 2 ). Together, the 21 palaeoenvironmental records from 13 sites encompass the major tropical biogeographical zones in the region (Taiwan, the Philippines, Wallacea and Micronesia) (Fig. 4 ). Nine of the analysed sites occur in moist tropical forest, three occur within seasonally dry tropical forest and one occurs in tropical coniferous forest 36 (Fig. 4b ). Fig. 4: Locations of the palaeoecological sites in the Asia Pacific region. 
a , b , Locations of the palaeoecological sites in the Asia Pacific region included in the present analysis, shown relative to geopolitical units ( a ) and ecoregion ( b ) 36 . In a , the coloured shading corresponds to the four biogeographic zones in the former Spanish East Indies. In a and b , circular points represent pollen records, squares represent charcoal records and the asterisk represents the single phytolith record. The colour of each circle (pollen/phytolith records) or square (charcoal records) corresponds to the assessed afforestation or fire response before and after Iberian-introduced epidemics, the timings of which were variable across the region and are discussed in Supplementary Text 2 . Record names are shown in black, whereas select Micronesian island names are shown in grey. Terrestrial ecoregion data in b partially reproduced with permission from WWF. Full size image There is evidence for afforestation and decreased fire activity in four pollen/phytolith records and three charcoal records from Micronesia after archival references to a decrease in the Indigenous population (between ~50 and 90%; Supplementary Text 2 ). Most of these shifts have been classified with a degree of uncertainty (Fig. 4 ) due to the low sampling and chronological resolution of the datasets, meaning that this change is not captured in the GAMs produced for each record (Fig. 5 and Supplementary Text 3 ). All of the records showing afforestation in the years following European contact come from small islands (Yap (Fool and Thool swamps) 44 and Guam (Laguas) 45 ) and 75% of the records are located within seasonally dry tropical forest (Fig. 4b ). Available charcoal data from these islands indicate that afforestation coincides with reduced or unchanged fire activity in the landscape (Fig. 5 ). 
Decreased fire activity after a known population decrease (50% between 1840 and 1900 ce ) is also reflected in a charcoal record from Pohnpei 46 , although in this case there is no evidence of corresponding afforestation, possibly due to the simultaneous introduction of pigs to the island by the Spanish (see Supplementary Text 3 ). Fig. 5: Afforestation proxies and charcoal curves from the Asia Pacific region. Afforestation proxies (pollen and phytolith records) and charcoal curves (point data) overlain with GAMs (coloured shading for pollen/phytolith data and black shading for charcoal data) over the past 2,000 years for the 21 records assessed from the Spanish East Indies. The data are grouped according to broad geopolitical zones (Fig. 4a ) and the shading of the GAM for the pollen/phytolith data corresponds to the contemporary biome in which each record currently occurs (Fig. 4b ) 36 . The orange and blue horizontal bars represent the occurrences of the MWP and LIA, respectively, in the Asia-Pacific 76 . The timing of Iberian contact and known population decrease is individually annotated for each region. The data used to create the plots were from refs. 44 , 45 , 46 , 47 , 48 , 117 , 118 , 119 , 120 , 121 , as well as the present study. Full size image None of the records from regions that regularly traded with mainland Eurasia before colonization (Minahasa, North Maluku, Luzon and North Taiwan)—which also appear to have relatively low post-colonization mortality rates (15–50%; Supplementary Text 2 )—show an afforestation signal following Iberian contact. In fact, the records assessed from the Philippines and Taiwan suggest that colonization resulted in deforestation and increased fire disturbance, although this is not captured in the GAM (and thus not classified) for the Duck Pond site 47 (Taiwan), while the Lake Paoay 48 (Philippines) charcoal data do not include the Spanish colonial period (Figs. 4 and 5 ; see Supplementary Text 3 for details).
Higher sampling resolution and better selection of sites relative to key European occupation zones, particularly for the Philippines, are required to investigate the consistency of this process over space. Approximately half of the pollen/phytolith (6/11) and charcoal (6/10) records show afforestation and decreased fire in the landscape in the centuries before Iberian imperial influence. There is no consistent geographic or ecoregional pattern associated with this signal. This suggests that changes in forest cover after colonization were, in many cases, muted relative to those caused by land use or climatic factors before Iberian contact. The lack of sampling resolution in the Asia-Pacific records, as well as the relatively late influence of Europeans on some of the islands (Palau (1800s), Yap and Pohnpei (1700s); Supplementary Text 2 ) means that it is not possible to tease out an industrial era signal of ecosystem change for this region from available palaeoecological data. Discussion Over one-third of the palaeobotanical records from the Atlantic and Pacific realms indicate a degree of afforestation (including minor or uncertain forest recovery) following Iberian contact. This, in part, appears to support claims made in previous work that the documented decrease in Indigenous populations in the Americas following the introduction of foreign diseases and colonial policies led to a collapse of existing farming and food production systems and concomitant forest regrowth 21 . Furthermore, it provides the first evidence that this process was not exclusive to the Americas, but also occurred in the Spanish East Indies (although it should be noted that the timing of Iberian contact was staggered in the Asia-Pacific; see Supplementary Text 2 ). 
Nevertheless, the lack of consistency in this response across the entire spectrum of records studied indicates that variable land use strategies, as well as other cultural, social and biophysical factors, played a key role in the observed changes to vegetation and burning. For instance, documented Indigenous resistance to Iberian occupation (for example, in North Sulawesi 49 and the West Caroline Islands 50 ) appears to have resulted in a geographically isolated settlement and/or a protracted Iberian settlement process, thereby drawing out the spread of Eurasian pathogens and land use (discussed in Supplementary Text 2 ). A similar result may have also ensued from the geographic inaccessibility of, lack of Iberian interest in or socio-ecological resilience of certain regions, including the interior Amazon 51 and Pacific coast rainforests 52 , the Llanos de Moxos 53 , Palau 54 , the Brazilian Cerrado 55 , the West Carolines 56 and the highlands of Hispaniola 57 and the Philippines 58 (discussed in Supplementary Text 2 ). As a consequence, the demography of some of these less accessible sites after Iberian colonization may have been characterized by population replacement or migration rather than just an abrupt decrease in the population. Finally, it is important to point out that certain land use strategies adopted in the tropics (for example, the polyagricultural systems deployed in the eastern and southwestern Amazon Basin) may have actually sustained forest cover, thus challenging the assumption that afforestation in palaeoenvironmental records should be the only expected ecological signal of a decrease in the Indigenous population following colonization 42 , 59 . Ecological and biogeographic factors may also have mediated forest resilience to human disturbance in both the pre-colonial and early colonial era. 
For instance, seasonal ecosystems within both the Atlantic and the Pacific appear more prone to Iberian-era afforestation, potentially reflecting their structural reliance on Indigenous land use practices, particularly swiddening 60 . Similarly, islands that were seemingly pushed towards their natural resource limits by pre-Iberian populations (including the small Micronesian islands and Hispaniola) are, from a biogeographic perspective, more sensitive to disturbance 61 and appear to show a higher prevalence of forest regrowth after European contact. Interestingly, the more open habitat types that appear to show the greatest forest dynamism in response to Iberian conquest have, in general, lower carbon sequestration capacity than the apparently less sensitive, dense, perpetually humid rainforests of South America or Southeast Asia 6 —a factor that is overlooked in the calculation of the impact of early colonial era afforestation on the global carbon budget 21 . It is also worth noting that our dataset lacks coverage within key pre-Iberian and Iberian era urban hubs—regions that would be expected to show higher levels of ecological restructuring following colonization. Key under-researched sites include the Valley of Mexico (controlled by the populous Empire of the Triple Alliance at the time of Spanish contact) and important Spanish settlements in the Philippines (for example, Manila, which was central to European colonialism-driven biological exchange because it hosted the Philippines–Acapulco Galleon Trade). A lack of research within important Iberian hubs relates, at least in part, to the fact that most pollen-based studies focus on reconstructing past ecological–climate or human–environment relationships over much longer time scales, biasing site selection away from landscapes that have been heavily modified over the past ~1,000 years 40 . 
Targeting site selection at European settlement and trade centres, as well as improving chronological and sampling control within recent centuries, is thus an important element of future palaeoecological work. Indeed, these limitations have been raised in the context of recent work attempting to use palaeoecology to gauge pre-European landscape burning patterns in North America 62 . Our dataset also documents, in many instances, afforestation in the centuries before Iberian conquest across the study area (that is, after 1000 ce ). In several cases, this process actually appears to have exceeded early Iberian era forest regrowth in terms of scale. Notably, it suggests that coupled atmospheric–human drivers (Supplementary Text 1 ), social disruption and, potentially, ecosystem engineering by pre-colonial populations (Supplementary Text 3 ) may have been more important drivers of regional forest cover than Iberian contact. Particularly important climate drivers probably included increasing climatic variability between ~1050 and 1400 ce , linked to the El Niño–Southern Oscillation (ENSO) 63 , 64 , as well as the heterogeneous expression of the Medieval Warm Period (MWP) and LIA across the study region (see Figs. 3 and 5 for the approximate timing in the Spanish Americas and Spanish East Indies, respectively). For example, the MWP is linked to warmer, wetter conditions in the Northern Hemisphere in the centuries preceding Spanish arrival 65 (Supplementary Text 1 ). As with the early Iberian era, increased forest regrowth before European contact also appears to have been partially controlled by biogeography. For instance, climatically driven increases in forest cover between 1200 and 1450 ce were more common in the seasonal ecosystems of western South America (including Los Llanos, Andean valley dry forests and the Brazilian Cerrado), as well as in the resource-limited Pacific islands, than in perpetually humid forests.
However, it is also worth pointing out that climate-driven landscape changes were, in all likelihood, accompanied and potentially reinforced by human behavioural adaptations. For instance, there is evidence that Indigenous groups, including the Classic and post-Classic Maya and Inka, engaged in adaptive agroforestry and developed new agricultural practices to cope with climatic extremes 66 . Similarly, the ENSO-driven 1300 ce climate event in the Pacific, which resulted in a substantial depletion of available food resources (see Supplementary Text 3 ), is speculated to have led to the development of inland rather than coastal systems for procuring food, reef flat infill and construction of defensive infrastructure 67 . More detailed work is thus required to determine the changing intensity of pre-colonial and colonial human–environment–climate interactions in many of these tropical regions, such as has been conducted for the Brazilian highlands 68 . Our data also demonstrate patterns of deforestation after Iberian arrival, both as a more immediate response to settlement and as a later response to the broader consequences of European colonialism, including the rise of capitalist European hegemony 69 , 70 and the Great Acceleration 11 . Although currently limited in number, sites in Taiwan and the Philippines that are proximal to early Spanish centres hint at intensified land clearance following settlement (Fig. 4 ). A potential side effect of these settlements, which were usually established in agriculturally primed, governable lowlands, may have been the active decision by Indigenous populations to migrate to less accessible uplands (for example, the Luzon highlands). It is plausible that this could have led to locally intensified land use and forest clearance within previously uncultivated areas as a corollary of social and political resistance to colonial rule 58 .
Sites in the Americas proximal to early Iberian settlements also indicate localized forest opening after an intensification of European land use. For instance, records from the Hispaniola lowlands show dramatic landscape opening in the sixteenth and seventeenth centuries following the establishment of intensive monocultural cropping systems 71 . Similarly, sites proximal to mining centres in the Andes and Veracruz, Mexico indicate intensified landscape disturbance following Iberian arrival 72 . Neotropical records from converted landscapes (for instance, those from the Atlantic Forests) highlight that industrial era deforestation far exceeds in magnitude any other shifts in forest cover over the past 2,000 years. The low temporal (subsampling) resolution of core tops and the spatial sampling bias towards sites surrounded by more intact ecosystems means that this change is likely to be under-represented in the assessed pollen records. Overall, our analysis indicates that while forest regrowth did often occur following the decimation of large Indigenous populations after Iberian contact in the tropical Americas 21 , as well as the Asia-Pacific 29 , 73 , the timing and extent of observed afforestation in the early Iberian era appears contingent on spatially variable cultural and climatic factors coupled with ecoregion-specific resilience. Tropical forests reflect long-term land use legacies on an interhemispheric basis 15 . The murder, relocation and infection of Indigenous populations in many regions, as well as the floral and faunal exchanges that took place following Iberian colonialism, are essential considerations for re-evaluating the Anthropocene as a temporally variable and biogeographically/culturally contingent unequal process 13 . 
However, the variations in forest dynamics we observed before and after the initial period of Iberian contact and the establishment of colonies highlight the need to develop more detailed records of vegetation and land management change in different parts of the tropics, combining archaeology, palaeoecology and Indigenous traditional knowledge. This will permit a comprehensive exploration of the ways in which Indigenous resistance, invasive species, economic imbalances and the extension of colonial power and profit-driven land use left their varied marks on contemporary landscapes around the tropical world. Addressing these fine-scale, interdisciplinary questions will require well-dated, high-resolution palaeoenvironmental reconstructions spanning the past 2,000 years and covering the range of pre-colonial and colonial land use strategies that were present across the tropics. Only when such records become available can more realistic estimates of land use change and corresponding carbon fluxes be produced and fed into Earth systems models, with current projections 21 likely to be simplifications. It is also clear that more refined understandings and records will enable conservation practitioners to grapple with the diverse socio-political, cultural and economic factors that have shaped, and continue to shape, the composition, diversity and resilience of tropical landscapes into the twenty-first century. Methods Due to the inaccessibility of raw palynological data from sites in the Spanish East Indies relative to the Spanish Americas (shaded regions in Figs. 2a and 4a ), palaeoecological data from each of these regions were extracted and prepared differently, as described below. For the Neotropics, we did not attempt to quantitatively reanalyse charcoal data from the region, as previous work has already demonstrated a sustained period of reduced biomass burning after ~500 cal yr bp 21 , 25 , 26 . 
In some instances, this change has been linked to a decrease in anthropogenic fire use following Iberian arrival and has been used to support the hypothesis that reduced land cultivation following a decrease in the population led to region-wide afforestation. However, site-specific discussions of the role of changing fire regimens relative to vegetation response are provided in Supplementary Text 3 and included in our analyses where relevant. A composite curve of transformed charcoal influx (biomass burning) for the Americas (including North and South America) 26 , which shows a decrease in charcoal influx between 1500 and 1650 ce to a minimum at 1650–1700 ce , is included in Fig. 3 .

Neotropics data preparation

We extracted Neotropical pollen datasets from the Neotoma Paleoecology Database 34 (Neotoma) that were relevant for reconstructing tropical floristic change in the former Spanish Americas before, during and after Spanish colonization, using the following criteria:

1. The record was located within geopolitical units that were part of the former Spanish Empire (including Brazil).
2. The record was directly dated.
3. The record encompassed the time period spanning at least 600–1900 ce , permitting reasonable assessment of the scale of Iberian-induced change relative to the past 2,000 years.

These datasets ( n = 98; Supplementary Text 3 ; Supplementary Data 1 ) were individually assessed and selected for further analysis if:

4. The record derived from a terrestrial site that currently occurs within a tropical or subtropical biome 36 . If in a montane grassland, savanna and shrubland biome, the site was proximal (<5 km) to a tropical or subtropical biome.
5. The record included one sample that was estimated to come from the time frame 1500–1600 ce , thereby permitting assessment of floristic response to any Iberian-induced land use change.
6. The temporal resolution of the upper 2,000 years of the record (or total core length where the base of the record was <2,000 cal yr bp ) was <200 years per sample.

The cut-off in criterion (6) was set in an attempt to capture forest turnover while maintaining a reasonable distribution of records across the study area. The Cobweb Swamp (Sawgrass Core) 74 record, which has a resolution of 212 years per sample (Supplementary Data 1 ), was retained for analysis as it is situated within the heart of urban development across the Mesoamerican lowlands during the Classic Maya period. We set 2,000 years as an appropriate time frame for assessing ecological dynamics as it is short enough to identify late Holocene-scale floristic change while being long enough to assess the magnitude of Iberian-influenced change against the backdrop of pre-European land use dynamics and key late Holocene climate forcing (notably, the MWP and LIA and intensification of the ENSO 75 , 76 ). An overview of how these events impacted the various geographic regions assessed in this study is outlined in Supplementary Text 1 . The application of the above criteria resulted in a final selection of 28 pollen records (from the same number of sites) for analysis. The setting, location and publication(s) associated with the selected records are detailed in Supplementary Data 1 and discussed in Supplementary Text 3 . The chronology and sampling resolution of the pollen records can influence whether rapid response dynamics are captured and appropriately constrained 77 . While this is problematic for many Neotropical records 78 , our selection of the pollen records based on sampling density and chronological control over the time period of interest attempts to selectively remove data that do not adequately capture ecosystem dynamics before, during and after the Iberian colonial period.
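As an illustration, the record-selection criteria (1)–(6) can be encoded as a simple filter. This is our own Python sketch (the study's analyses were run in R); the `Record` fields and function name are hypothetical and not taken from the authors' code or data schema, and criterion (6) is approximated as the mean resolution over the sampled span.

```python
# Sketch of the record-selection filter (criteria 1-6). Hypothetical field
# names; not taken from the study's own code or data schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    in_spanish_empire: bool    # criterion 1
    directly_dated: bool       # criterion 2
    oldest_sample_ce: int      # criterion 3 (record base, in CE years)
    youngest_sample_ce: int    # criterion 3 (record top)
    tropical_biome: bool       # criterion 4 (or <5 km from a tropical biome)
    sample_ages_ce: List[int]  # criteria 5 and 6

def passes_selection(r: Record) -> bool:
    if not (r.in_spanish_empire and r.directly_dated and r.tropical_biome):
        return False                                    # criteria 1, 2, 4
    if not (r.oldest_sample_ce <= 600 and r.youngest_sample_ce >= 1900):
        return False                                    # criterion 3
    if not any(1500 <= a <= 1600 for a in r.sample_ages_ce):
        return False                                    # criterion 5
    # criterion 6: mean temporal resolution finer than 200 years per sample
    span = max(r.sample_ages_ce) - min(r.sample_ages_ce)
    return span / len(r.sample_ages_ce) < 200
```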
We used the most up-to-date chronological models developed for the records in Neotoma and include a discussion of the interpreted ecological change, in the context of each chronological model produced for the records, as part of Supplementary Text 3 . A major obstacle to comparing palynological records across space (including different cultural zones and ecoregions) is the variability in the taxonomic resolution and the range of methods used by the original authors to classify vegetation change through time. To manage this, we adopted a two-pronged approach to assessing ecological change. First, we used the raw pollen counts from each record and consistently reclassified all individual pollen taxa into nine plant functional groups that can provide information about major state shifts in site vegetation through time. These functional groupings are listed in Extended Data Fig. 1 . Classification was based on previously published work assigning different pollen types to biomes using surface pollen data from Latin America 79 . Once regrouped, we converted raw values to relative abundances and plotted each dataset stratigraphically against the age–depth models produced for each record (Supplementary Text 3 ). Major changes in the records were identified by clustering the data (method = coniss; distance = Euclidean; stratigraphically constrained) 80 , 81 . Second, the reclassified data for each site were used to calculate the ratio of non-arboreal to arboreal taxa as a proxy for landscape openness in the tropics 82 , 83 . This methodology attempts to eliminate interpretations of change based on fluctuations in aquatic, wetland and fern taxa, which are commonly driven by site-specific, local-scale hydrological shifts. This method assumes that grass pollen derives from dryland rather than wetland sources. Changes in this ratio were cross-checked against shifts in the plant functional groupings, as well as the nature of the site type (Supplementary Text 3 ).
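The non-arboreal to arboreal (NAP:AP) openness proxy described above amounts to a ratio of summed pollen counts per sample. A minimal Python sketch, with illustrative (not exhaustive) taxon groupings of our own choosing rather than the paper's nine functional groups:

```python
# Minimal sketch of the non-arboreal to arboreal (NAP:AP) openness proxy.
# The taxon groupings here are illustrative examples only; the paper
# reclassified all taxa into nine functional groups before computing ratios.
ARBOREAL = {"Moraceae", "Alchornea", "Cecropia"}           # example tree taxa
NON_ARBOREAL = {"Poaceae", "Asteraceae", "Amaranthaceae"}  # example herb taxa

def nap_ap_ratio(counts: dict) -> float:
    """Ratio of non-arboreal to arboreal pollen counts for one sample.
    Aquatic, wetland and fern taxa are simply ignored, as in the paper."""
    ap = sum(v for k, v in counts.items() if k in ARBOREAL)
    nap = sum(v for k, v in counts.items() if k in NON_ARBOREAL)
    return nap / ap if ap else float("inf")

# Isoetes (an aquatic) is excluded from both sums
sample = {"Poaceae": 120, "Moraceae": 200, "Cecropia": 40, "Isoetes": 30}
print(nap_ap_ratio(sample))  # prints 0.5 (= 120 / 240)
```

A rising ratio through time is read as landscape opening; a falling ratio as afforestation.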
Spanish East Indies data preparation

Neotoma and the Global Paleofire Database 84 were searched for pollen and charcoal data within the time frame 0–2000 ce from countries within the former Spanish East Indies. This search returned four charcoal records (Supplementary Data 2 ) and no pollen records. We obtained an additional, unpublished raw charcoal dataset prepared by J.S. from a site (Lake Bulalacao) in the Philippines, together with the raw pollen data from Lake Paoay 48 . The five raw charcoal datasets were analysed by converting raw concentration values into influx rates (fragments per cm 2 per year). The Lake Paoay pollen data were prepared by calculating and plotting the ratio of grass to arboreal pollen as a proxy for forest openness. This proxy was chosen to maintain consistency with other available pollen curves from the region. To increase data capture within the Spanish East Indies, a review of published regional pollen studies was conducted. Data from any plots of grass or dry-herb abundance against depth (in most cases used as a proxy for landscape openness), as well as any complementary charcoal data, were extracted using WebPlotDigitizer 85 . While all efforts were made to ensure precise data extraction, minor sample or variable offsets may have been introduced into the dataset depending on the quality of the initial graph production. Because some of these plots were made against depth (rather than age), and the interpretation of age was based on outdated chronologies, an updated age–depth model for several of the cores was constructed using the program Bacon 86 (R version 3.6.2) 81 (details included in Supplementary Text 3 ). Because of the paucity of datasets from this part of the world, more liberal inclusion criteria were set for these datasets.
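The concentration-to-influx conversion mentioned above multiplies each sample's charcoal concentration (fragments per cm³) by the local sedimentation rate (cm per year) taken from the age–depth model. A hedged Python sketch (the paper's processing was done in R; the function and variable names are ours):

```python
# Sketch of the concentration-to-influx conversion for the charcoal data
# (fragments per cm^2 per year). Assumes an age-depth model supplies an age
# for each sampled depth; names are illustrative, not the authors' code.
def charcoal_influx(concentrations, depths_cm, ages_yr):
    """influx = concentration (fragments/cm^3) * sedimentation rate (cm/yr),
    with the rate taken from the depth/age interval bounding each sample."""
    influx = []
    for i, conc in enumerate(concentrations):
        j = max(i, 1)  # reuse the first interval for the topmost sample
        rate = (depths_cm[j] - depths_cm[j - 1]) / (ages_yr[j] - ages_yr[j - 1])
        influx.append(conc * rate)
    return influx

# 1 cm of sediment deposited every 2 years -> sedimentation rate 0.5 cm/yr
print(charcoal_influx([50, 80], [100.0, 101.0], [1000, 1002]))  # [25.0, 40.0]
```

Expressing the data as influx rather than concentration corrects for variable sedimentation speed, so peaks reflect burning rather than slow deposition.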
Records were included if they captured environmental change within the 200-year period before Iberian contact (or a known disease-influenced population decrease) and at least one post-European sample. In the absence of pollen data, a single phytolith record was used to obtain a forest response signal from the island of Pohnpei. This was the only phytolith record used in the analysis given that phytoliths appear to be less sensitive to changes in tree cover than pollen in evergreen forests 87 , including the majority of the sites considered in this paper, and tend to represent more local rather than regional proxies for vegetation change, making them less useful than pollen for gauging broader afforestation signals. The extracted data totalled ten pollen records, eight charcoal records and one phytolith record from 13 sites. Site details are outlined in Supplementary Data 2 and discussed in Supplementary Text 3 .

Generalized additive modelling of palaeoecological data

The non-arboreal to arboreal ratios calculated for each Neotropical record, and the charcoal data and various pollen proxies for forest openness in the Spanish East Indies, were summarized using GAMs. This permitted an assessment of nonlinear trends in palaeoecological data, particularly those that are irregularly spaced in time, as is the case both within and between some of the Neotropical pollen datasets 88 . Data were first transformed (logit transformation for percentage data; log transformation for count and frequency data) and then standardized as z scores. Each GAM was fitted using thin-plate regression splines as the basis function 89 : the rank of the basis function was set to one-tenth the sample size or 5, whichever value was larger (ranks ranged from 5 to 26). GAM plots included 95% uncertainty intervals around the GAM fit line.
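The pre-processing steps (logit or log transform, then z-score standardization) and the basis-rank rule can be sketched as follows. The GAMs themselves were fitted in R with mgcv; this Python fragment only reproduces the data preparation and the rank rule, with function names of our own choosing.

```python
# Sketch of the GAM data preparation: transform (logit for percentage data,
# log for counts/frequencies), standardize to z scores, and the basis-rank
# rule. The GAMs themselves were fitted with mgcv in R; names here are ours.
import math
from statistics import mean, stdev

def logit(p):
    """Log-odds of a proportion p in (0, 1)."""
    return math.log(p / (1 - p))

def standardize(values, kind="percentage"):
    """Transform, then express as z scores (as done before GAM fitting)."""
    t = [logit(v / 100) if kind == "percentage" else math.log(v) for v in values]
    m, s = mean(t), stdev(t)
    return [(x - m) / s for x in t]

def basis_rank(n_samples):
    """Thin-plate spline basis rank: one-tenth the sample size or 5,
    whichever is larger (the paper reports ranks of 5-26)."""
    return max(5, n_samples // 10)

print(basis_rank(40), basis_rank(260))  # prints: 5 26
```

Standardizing every proxy onto a common z-score scale is what allows records with different units (ratios, percentages, influx counts) to be plotted and compared on the same axes in Figs. 3 and 5.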
Implementation of the GAM fits, calculation of uncertainty intervals and creation of record-specific GAM plots were undertaken in R version 3.6.2 (ref. 81 ) using the mgcv package (version 1.8-31) 90 . For data importation and manipulation, we used the packages data.table (version 1.12.8) 91 and readxl (version 1.3.1) 92 . Resultant data are plotted stratigraphically in Fig. 3 (Spanish Americas) and Fig. 5 (Spanish East Indies). The analytical script used to create core-specific plots is available at the Open Science Framework (OSF) project page, which also includes code to apply the same methods used in this paper to other datasets.

Analysis of afforestation signal over 2,000 years

The palaeoecological proxies for forest cover, their associated GAMs, the known timings of Iberian contact and disease-induced population decrease (outlined for each region in Supplementary Text 2 ) and, for the Neotropical sites, cluster analysis of the plant functional group (PFG) data, were used to semi-quantitatively assess whether the records showed afforestation or deforestation following Iberian contact, pre-Iberian (1000–1500 ce ) afforestation or limited forest change over the past 1,000 years. We set 1,000 years as the time frame for classifying change as it is sufficiently long to provide a context for the pre-Iberian forest conditions while reducing the need to consider the influence of protracted mid- to late Holocene climate change on forest cover 40 . However, attention was given to the timing and asynchronous influence of shorter-term climate events on forest cover over the past 1,000 years (that is, the MWP and LIA), the regional influence of which is discussed in Supplementary Texts 1 and 3 .
The following criteria were used to classify the afforestation signal for each record:

Post-Iberian afforestation (that is, afforestation after Iberian contact): the forest cover proxy data and GAM curvature show an increase in forest pollen in the 100-year time frame following a known population decrease associated with Iberian contact. For the Neotropics only, the clustered PFG data show that this shift is associated with a clear change in forest type and forest cover.

Minor (or unclear) post-Iberian afforestation: either (1) the forest cover proxy data and GAM curvature show an increase in forest pollen in the 100-year time frame following a known population decrease associated with Iberian contact but the PFG data indicate forest stability over the same time period or (2) the forest cover proxy and PFG data indicate an increase in forest pollen in the 100-year time frame following a known population decrease associated with Iberian contact but this is not captured in the curvature of the GAM.

Post-Iberian deforestation (that is, deforestation after Iberian contact): as for post-Iberian afforestation, but the data indicate forest opening rather than closing.

Pre-Iberian afforestation: there is an afforestation or minor afforestation signal (as above) in the period between 1000 and 1400 ce for the Neotropical sites or between 1000 ce and the timing of Iberian contact for the Asia-Pacific sites.

A limited forest response was determined if the records did not meet any of the above-listed criteria. Changes in the Asia-Pacific charcoal records were assessed over the same time frames as those used for the above-discussed vegetation data. Decreases or increases in fire activity (as interpreted from the charcoal proxy data) after Iberian contact that were not captured in the GAM curvature were classified as minor/uncertain.
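For illustration only, the classification logic above can be expressed as a decision function. The inputs are per-record boolean judgements (whether the GAM curvature, the raw proxy data and, for Neotropical sites, the clustered PFG data each show the change); the encoding, parameter names and return labels are ours, not the authors'.

```python
# Illustrative encoding of the semi-quantitative classification rules.
# gam / proxy / pfg: whether each line of evidence shows the change;
# direction: "closing" (afforestation) or "opening" (deforestation).
def classify(gam, proxy, pfg, direction="closing", pre_iberian_signal=False):
    if gam and proxy and pfg:          # all three lines of evidence agree
        return ("post-Iberian deforestation" if direction == "opening"
                else "post-Iberian afforestation")
    if (gam and proxy) or (proxy and pfg):  # one line of evidence missing
        return "minor/unclear post-Iberian afforestation"
    if pre_iberian_signal:             # signal between 1000 CE and contact
        return "pre-Iberian afforestation"
    return "limited forest response"

print(classify(gam=True, proxy=True, pfg=True))
# prints: post-Iberian afforestation
```

In practice the published classification also weighed chronological confidence and site-specific evidence (Supplementary Text 3), so this sketch captures only the headline decision rules.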
Site-by-site analysis of the palaeoecological and chronological trends and confidence for each record, together with other external supporting data, is presented in Supplementary Text 3 . The results of pre- and post-Iberian land use and forest change analysis were mapped for each region in ArcGIS Pro 2.5 (Figs. 2 and 4 ) and interpreted within both broad, pre-Iberian cultural groupings (Figs. 2a and 4a ) and biomes (Figs. 2b and 4b ). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The synthesized datasets used to undertake the analyses are available at the following OSF project page: . Code availability The analytical script used to create core-specific plots is available at the following OSF project page: . This includes code to apply the same methods used in this paper to other datasets.
A new study, now published in Nature Ecology and Evolution, draws on pollen records from tropical regions formerly claimed by the Spanish Empire in both the Atlantic and the Pacific to test the significance and extent of forest regrowth following widespread mortality among Indigenous populations after European contact in the 15th and 16th centuries. By analyzing microscopic pollen grains preserved in lake sediments, scientists are able to build up a picture of how environments have changed over time. It is well documented that the arrival of Europeans in the Americas resulted in the spread of diseases including smallpox, measles, typhus and cholera to Indigenous populations, many of whom were practicing sedentary agriculture. Archaeological and historical records indicate that this may have wiped out up to 90% of the Indigenous population, making it perhaps the most significant epidemiological disaster ever known. What is less known, however, is the impact of this so-called "Great Dying" on tropical landscapes that had, by this point, been managed by food producers and even urban dwellers for millennia. Researchers have recently argued in a widely popularized paper that the drastic reduction in Indigenous populations, and the cessation of their land use in many parts of the Neotropics, led to a dramatic regrowth of forest. So significant was this ecological change, the paper argues, that these new trees captured enough carbon to cause a recognizable dip in global atmospheric CO2 levels. This global atmospheric change is not only implicated in the Little Ice Age that caused lakes to freeze over in Europe, but has also been suggested as a potential start date for the Anthropocene. Nevertheless, existing assessments of forest regrowth are based on a limited number of environmental records and have been exclusively focused on the Americas.
In the new study, the research team, comprising palaeoecologists, archaeologists and historians, set out to empirically test the proposed link between colonization and forest regrowth by synthesizing and assessing long-term records of tropical vegetation change from across the Americas, as well as the often-overlooked Asian-Pacific domain of the Spanish Empire. Their analysis paints a much more complex picture of colonial human-environment interactions. "Though we were expecting a signal of forest regrowth following known Indigenous population decline, clear examples of this process were only seen in about one third of cases in both the Americas and in Pacific Asia. Changes in forest cover were, in fact, diverse," says Dr. Rebecca Hamilton, lead author of the study. The team attributes this complexity to the variable influence of climate, humans and geography across space and through time. "Our results suggest that dense, wet forests and highland forests were less likely to show an afforestation signal following Spanish contact," continues Hamilton. The authors offer two possibilities for the apparent lack of forest growth. One is that these habitats were maintained by agro-forestry prior to colonization, meaning they had never been cleared of trees to begin with. Another possibility is that these landscapes were more difficult for Europeans to access, leading to the persistence of Indigenous populations well into the Spanish period, as is documented in historical records. By contrast, isolated, water-limited ecosystems, particularly in the Pacific, showed clearer signals of forest regrowth. In some cases, the imposition of European land-use policies, including consolidated settlement and population relocation, plantations and ranching, led to a lack of forest regrowth, or even deforestation. "Future studies exploring the impact of European colonialism on tropical landscapes need to develop detailed archaeological, historical and palaeoecological insights into how different parts of the tropics and their populations resisted, shaped and were impacted by processes of colonialism from the 15th century onwards," concludes Hamilton. The authors' work has ramifications for the future conservation of tropical ecosystems, which requires a careful consideration of historic land-use, population dynamics, geography, ecology and climate. The study also cautions that perspectives on the Anthropocene that hinge on a single time point may be overly simplistic. As the project co-lead Dr. Patrick Roberts puts it, "treating the Anthropocene solely as a recent, single 'spike' can have the outcome of suggesting that it is the logical product of all humanity." In fact, he argues, the work of the team and others in the tropics makes it clear "that the Anthropocene is a long-term, varied and unequal process in the tropics—something that needs to be highlighted to develop more just, sustainable approaches to these crucial landscapes moving forward."
10.1038/s41559-021-01474-4
Medicine
A surprise advance in the treatment of adult cancers
Impaired H3K36 methylation defines a subset of head and neck squamous cell carcinomas, Nature Genetics, nature.com/articles/doi:10.1038/ng.3757 Journal information: Nature Genetics
http://nature.com/articles/doi:10.1038/ng.3757
https://medicalxpress.com/news/2017-01-advance-treatment-adult-cancers.html
Abstract Human papillomavirus (HPV)-negative head and neck squamous cell carcinomas (HNSCCs) are deadly and common cancers. Recent genomic studies implicate multiple genetic pathways, including cell signaling, cell cycle and immune evasion, in their development. Here we analyze public data sets and uncover a previously unappreciated role of epigenome deregulation in the genesis of 13% of HPV-negative HNSCCs. Specifically, we identify novel recurrent mutations encoding p.Lys36Met (K36M) alterations in multiple H3 histone genes. We further validate the presence of these alterations in multiple independent HNSCC data sets and show that, along with previously described NSD1 mutations, they correspond to a specific DNA methylation cluster. The K36M substitution and NSD1 defects converge on altering methylation of histone H3 at K36 (H3K36), subsequently blocking cellular differentiation and promoting oncogenesis. Our data further indicate limited redundancy for NSD family members in HPV-negative HNSCCs and suggest a potential role for impaired H3K36 methylation in their development. Further investigation of drugs targeting chromatin regulators is warranted in HPV-negative HNSCCs driven by aberrant H3K36 methylation. Main HNSCCs are a heterogeneous group of tumors that develop through chemically (as a result of tobacco and/or alcohol abuse) or virally induced carcinogenesis after infection with high-risk HPV 1 . They represent the seventh most frequent cancer worldwide, with ∼ 600,000 new cases per year 2 , and occur throughout the oral cavity, hypopharynx, oropharynx, nasopharynx or larynx. Tumors are often locally advanced, invading proximal structures at diagnosis, or show spread to cervical lymph nodes or distant metastases. Despite several advances and innovations in multimodality treatment, survival rates of locally advanced HNSCCs have not substantially improved, and the prognosis for relapsed or metastatic tumors remains dismal 3 . 
Several studies, including a recent comprehensive genomic profiling of a vast number of tumors by The Cancer Genome Atlas Project (TCGA), have shown that HPV-negative and HPV-positive HNSCCs represent distinct disease entities 4 , 5 , 6 . The latter, which arises mainly in the oropharynx (tonsils and base of tongue), is associated with a substantially improved outcome and harbors distinct molecular alterations 3 , 6 , 7 . The vast majority of HPV-negative tumors have loss-of-function TP53 mutations and CDKN2A inactivation. Integrated genomic and DNA methylation analyses indicate a high level of heterogeneity in this entity, with the presence of several molecular clusters seemingly enriched for specific genetic alterations 6 . These studies also identified frequent alterations in pathways involving growth factor receptors, RAS and PI3K signaling, the cell cycle, cell death and differentiation and/or oxidative stress, which have thus become the focus of current research efforts and targeted therapies in HNSCCs 3 , 4 , 5 , 6 , 8 , 9 , 10 , 11 , 12 , 13 . Interestingly, alterations in the gene encoding epigenetic modulator nuclear receptor-binding SET domain protein 1 (NSD1) were recently identified in HPV-negative HNSCCs, but insights into whether or how these mutations could drive oncogenesis are lacking 6 , 12 . NSD1 is a member of the NSD family of mammalian histone methyltransferases, which are essential in development. NSD enzymes act as mono- and dimethyltransferases for H3K36 (ref. 14 ), and SETD2 mediates trimethylation of H3K36, a chromatin mark associated with active transcription, gene splicing and DNA damage repair. Notably, aberrant regulation of H3K36 methylation has been identified in several neoplasms and could potentially promote oncogenesis 15 , 16 , 17 , 18 , 19 , 20 . Germline hypomorphic NSD1 mutations lead to Sotos overgrowth syndrome, which is associated with a higher likelihood of cancer 21 . 
Somatic translocation and gain-of-function mutations involving NSD2 have been identified in leukemia and multiple myeloma, and NSD1 loss-of-function mutations have been described in lung cancers 22 . In acute myeloid leukemia (AML), the recurring t(5;11)(q35;p15.5) translocation fuses NSD1 to NUP98 (encoding nucleoporin-98) and leads to leukemogenesis through aberrant establishment of H3K36 methylation at specific genomic loci 16 . SETD2 mutations have also been described in a wide range of cancers 15 , 19 , 23 . Therefore, we reasoned that an abnormal epigenetic landscape, through alterations in the H3K36 methylation pathway, might constitute a mechanism in the development of HPV-negative HNSCCs. To investigate this, we performed an integrative analysis of existing genomic and DNA methylation data sets of HNSCCs to assess the functional impact of NSD1 alterations and investigated the presence of other genetic alterations affecting the post-translational modifications of histones including H3K36. We found an epigenetically distinct subgroup in HPV-negative HNSCCs, where the main genetic alterations converge to specifically disrupt H3K36 methylation, potentially driving oncogenesis. Results Reanalysis of the TCGA HNSCC data set To assess the prevalence and nature of the genetic alterations affecting the epigenome in HNSCCs, we analyzed available HNSCC data with reported results from next-generation sequencing and DNA methylation profiling 4 , 5 , 6 , 10 , 11 . The TCGA data set, the largest generated on HNSCCs, had DNA methylation data (Infinium HumanMethylation450 BeadChip arrays) available for 528 samples, whole-exome sequencing (WES) for 526 samples and RNA sequencing for 520 samples. A total of 518 samples were profiled on all three platforms ( Supplementary Fig. 1 ). We performed unsupervised hierarchical clustering of DNA methylation data ( Fig. 1 ) and identified a subgroup previously found to be enriched in NSD1 mutations. 
This subgroup comprised 61 samples, 44 of which carried damaging single nucleotide variants in NSD1 annotated in the initial TCGA analysis 6 ( Supplementary Fig. 2 and Supplementary Table 1 ). Upon closer inspection of the primary data, we found that other tumors within this cluster had more complex genomic events, including large chromosomal deletions encompassing this gene ( n = 2), focal deletions within NSD1 ( n = 2) and splicing defects ( n = 1). Analysis of RNA-seq data showed absent NSD1 transcripts in two samples with noninterpretable WES data. In one sample, no conclusion could be drawn on NSD1 mutational status, as only methylation data were available ( Supplementary Table 2 and Supplementary Fig. 3 ). Overall, the reanalysis indicated that 51/61 (84%) of samples within this methylation cluster carried NSD1 alterations. Notably, the majority of missense mutations in NSD1 in this cluster were located close to or within the functional SET domain and were frequently associated with additional truncating mutations in the same samples ( Supplementary Fig. 2c ). In silico prediction of the functional impact of these mutations 24 suggests that they compromise the methyltransferase activity of NSD1 ( Supplementary Table 1 ). Two of these missense mutations were previously reported in Sotos syndrome, a genetic disorder associated with germline NSD1 mutations 21 including one encoding p.Arg2005Gln, which has been shown to specifically ablate H3K36 methyltransferase activity 25 . The methylation-based clustering also clearly identified an HPV-positive TP53 wild-type tumor subgroup, which is known to constitute a distinct entity within HNSCCs, along with three other clusters (which we labeled A, B and C) that had no clear relationship to known alterations in HNSCC. NSD1 mutations were infrequent (4%), and were mostly missense and predicted to be less damaging in these clusters ( Fig. 1 and Supplementary Table 3 ). 
Figure 1: H3K36M and NSD1 alterations define a specific HNSCC subgroup. Unsupervised hierarchical clustering of DNA methylation data identifies five HNSCC subgroups, including a subgroup defined by NSD1 or H3K36M alterations (labeled the H3K36 cluster). The H3K36 cluster is enriched in HPV-negative, TP53 mutant and heavy-smoking patients. Those tumors are found in the larynx and oral cavity. An HPV-positive, TP53 wild-type subgroup (HPV+) was apparent as well as three other subgroups (A, B and C). Gray and white bars for clinical variables (HPV status, anatomical location) indicate no hit or not available, respectively. Full size image Identification of H3K36M alterations in HNSCC Given the high prevalence of NSD1 mutations in this DNA methylation cluster, we investigated the mutational status of other genes involved in H3K36 methylation. We identified 11 samples harboring K36M-encoding mutations in various histone H3 genes. The mutations occurred in genes encoding the canonical histones H3.1 and H3.2 or the variant histone H3.3 ( Supplementary Fig. 2b and Supplementary Table 4 ). Notably, 10 of the K36M alterations occurred within the DNA methylation cluster predominantly defined by NSD1 defects ( Fig. 1 and Supplementary Fig. 2a,b ). Taken together, H3K36M and NSD1 alterations accounted for all samples with available genomic data ( Supplementary Fig. 2a ) included in the DNA methylation–derived cluster (thus labeled the H3K36 cluster) and jointly represent 13% of HPV-negative HNSCC. Only one sample with an H3K36M alteration was observed outside the H3K36 cluster ( Fig. 1 ). Other genes encoding potential H3K36 methyltransferases, including SETD2 and NSD2 , were infrequently mutated in HNSCC, and samples with these mutations were spread across all HNSCC methylation subgroups ( Supplementary Fig. 4 ). 
Clinical and molecular features of the H3K36 HNSCC subgroup A number of altered genes have been identified as enriched in HPV-negative HNSCCs 4 , 5 , 6 , 8 , 9 , 10 , 12 , 26 , 27 , 28 , including TP53 , EGFR , NOTCH1 , PIK3CA and HRAS . We found that only the prevalence of TP53 and CASP8 varied significantly ( P = 0.000621 and P = 0.00647, respectively, ANOVA) among HNSCC methylation subgroups ( Fig. 1 and Supplementary Fig. 4 ). Similarly, there were no copy number alterations specific to the H3K36 cluster (data not shown). Consistent with the original TCGA report, we found tumors within this cluster to be strongly DNA hypomethylated ( P < 1.99 × 10 −14 , Student's t -test) with no difference between H3K36M and NSD1 mutants ( P = 0.36, Student's t -test) ( Supplementary Fig. 5 ), suggesting that the unique DNA methylation signature of the H3K36 cluster is driven by defects in the H3K36 methylation pathway. NSD1 -mutated samples within the H3K36 cluster showed lower NSD1 expression (Student's t -test, P = 0.0007), global DNA hypomethylation (Student's t -test, P = 1.41 × 10 −6 ) and more truncating mutations than did NSD1 -mutated samples outside the H3K36 cluster ( Supplementary Fig. 6 ). The H3K36 cluster included tumors from heavy smokers or former heavy smokers who had recently stopped ( Fig. 1 ). When assessing the mutational burden in a given cluster, NSD1 mutant tumors were hypermutated ( P < 0.0004, Student's t -test), whereas H3K36M samples had similar numbers of mutations per sample to that in other methylation clusters ( Supplementary Fig. 7 ). Notably, when we analyzed the somatic mutational pattern in relation to smoking, the signatures we obtained for each molecular entity showed that NSD1 but not H3K36M mutants had a mutational signature strongly associated with smoking ( Supplementary Fig. 8 ). 
Except for HPV-positive cancers, which are associated with a favorable outcome, there were no differences in tumor grade or outcome among HPV-negative HNSCCs ( Supplementary Fig. 9 ). Moreover, we observed distinct anatomical locations for samples within the H3K36 cluster: all H3K36M tumors (100%, 10/10) occurred in the oral cavity, whereas NSD1 mutant tumors localized more often in the larynx (29/51, 57%, P < 0.007, χ 2 test) ( Fig. 2 and Supplementary Fig. 10 ). No other clinical feature reached statistically significant enrichment ( Supplementary Table 5 ). We also reanalyzed four other HNSCC WES data sets. In combination (196 samples in total), these studies enabled us to identify 14 NSD1 mutations, most of which were located in the larynx, and 1 HIST3H3 K36M-encoding mutation in the oral cavity 4 , 5 , 29 , 30 . The 15 patients were HPV-negative and were heavy smokers, consistent with our findings using the TCGA data set ( Fig. 1 and Supplementary Table 6 ). Figure 2: Schematic representation of the anatomical locations of NSD1 and H3K36M alterations in HNSCCs. Tumors with H3K36M were mainly located in the oral cavity, whereas NSD1 -mutant tumors occurred mainly in the larynx and less frequently in other known locations of HNSCCs. The size of the dotted lines is proportional to the frequency of the alteration in a given anatomical location. Full size image Effects of mutations in modifiers of the epigenome We assembled an independent cohort of 158 oropharyngeal (78 HPV-positive and 80 HPV-negative) HNSCCs with available tissue microarrays (TMAs) to determine the prevalence and impact of the K36M substitution on H3K36 methylation. Immunohistochemistry analysis identified three samples with typical nuclear staining of H3K36M, confirming expression of the mutant histone in HNSCC tumor tissues in this independent cohort ( Fig. 3 ). 
We have previously shown that the K36M substitution, although affecting only 1 of the 32 alleles encoding histone H3, acts as a dominant variant and inhibits the respective SET domain–containing methyltransferases, leading to decreased H3K36 dimethylation (H3K36me2) and trimethylation (H3K36me3) 31 , 32 . As expected, immunohistochemical staining of the TMAs showed that samples carrying H3K36M have a drastic decrease in H3K36me2 and H3K36me3 levels ( Fig. 3 ). In tumor cores lacking NSD1 staining, global levels of H3K36me3 were mostly unchanged, and H3K36me2 levels were markedly reduced, as expected and in keeping with the role of NSD1 as a di- but not trimethyltransferase for H3K36 ( Fig. 3 ). Figure 3: Representative immunohistochemical staining of H3K36M, NSD1, H3K36me2 and H3K36me3 in H3K36M or NSD1 -mutant or wild-type HNSCCs. Samples were counterstained with hematoxylin. H3K36M was expressed in 3/85 HPV-negative HNSCCs. Pictures were taken under 40× magnification and collated using Photoshop. Each staining was repeated twice on slides from a tissue microarray from HNSCCs. Four pictures per core were taken on average for each staining. Staining was independently scored by 3 pathologists (Online Methods ). Full size image We also obtained two HNSCC cell lines carrying NSD1 truncating mutations (SCC-4 and SKN-3) and two cell lines bearing wild-type NSD1 (Fadu and PCI-4B). Immunoblotting and mass spectrometry analyses showed a marked decrease in H3K36me2 levels and relatively preserved H3K36me3 levels in histone extracts from the NSD1 mutant cell lines ( Fig. 4 ). Ectopic expression of H3K36M in HEK293 cells, two HNSCC cell lines bearing wild-type H3 and NSD1 and an NSD1 -deficient HNSCC cell line led to significant decreases in H3K36me2 and H3K36me3 levels, similar to that observed in HNSCC patient TMA samples, and small interfering RNA (siRNA)-mediated knockdown of NSD1 specifically led to decreased H3K36me2 ( Fig. 4b–e ). 
These findings suggest that both alterations may contribute to the progression of HNSCC through reduction of H3K36me2. It is notable that H3K36me2 levels were significantly altered in NSD1 -mutant HNSCC samples and cell lines, despite showing relatively normal expression of NSD2 ( P > 0.22, Student's t -test) ( Fig. 4a and Supplementary Fig. 11 ). Furthermore, knockdown of NSD1 resulted in a more pronounced reduction in H3K36me2 in HNSCC cells than in HEK293 cells ( Fig. 4c ). These results highlight a major and nonredundant role of NSD1 in establishing H3K36me2 in HNSCC. Figure 4: H3K36M and NSD1 alterations decrease levels of H3K36me2 in HNSCC. ( a ) Immunoblot analysis of NSD1, NSD2, NSD3, SETD2, H3K36me2 and H3K36me3 in NSD1 wild-type (Fadu and PCI-4B) and NSD1-deficient (SCC-4 and SKN-3) HNSCC cell lines. ( b ) Immunoblot analysis of H3K36me2, H3K36me3 and H3K36M in Fadu, PCI-4B and SCC-4 HNSCC cells and in SCC-4 cells stably expressing H3K36M. ( c ) Immunoblot analysis of NSD1, H3K36me2, H3K36me3 and H3K36M in 293T, Fadu and PCI-4B cells treated with NSD1-targeting siRNA (siNSD1) or stably expressing H3K36M (K36M). ( d , e ) Mass spectrometry–based quantification of levels of H3K36 methylation in H3.1 and H3.2 ( d ) or H3.3 ( e ) in Fadu, PCI-4B, SCC-4 and SKN-3 cells and in Fadu cells stably expressing H3K36M (Fadu_K36M). For a – c , images are cropped and representative of 2 independent experiments (full-length blots are included in Supplementary Fig. 14 ). For d and e , error bars represent s.d. from 3 independent cell cultures. n.s., not significant; * P < 0.05; ** P < 0.01, Student's t -test. Full size image Finally, we analyzed transcriptome data available with the TCGA data set and were able to identify a cluster that corresponds, to a large extent, to the DNA methylation–defined H3K36 subgroup ( Supplementary Fig. 12 ), suggesting that altered methylation at H3K36 affects both the epigenome and transcriptome in HNSCC samples. 
We next performed differential expression analysis focusing on genes that are differentially expressed between H3K36 and other HNSCC subgroups 6 , 9 , 33 , 34 . Our results indicate that NSD1 and H3K36M mutants specifically suppressed the expression of genes involved in epidermal differentiation and keratinization processes ( Supplementary Fig. 13 and Supplementary Table 7 ). These findings mirror recent data in mouse mesenchymal progenitor cells, where overexpression of H3K36M resulted in a blockade in differentiation to the respective downstream lineages and induced undifferentiated sarcomas 32 . Discussion In this study, we reanalyzed public genomic and epigenomic data sets from HNSCCs and identified a previously underappreciated role of epigenome deregulation in HNSCC tumorigenesis, specifically at H3K36. Our unsupervised clustering approach using DNA methylation data segregated 61 HNSCC samples into a cluster we labeled H3K36. This DNA methylation cluster exclusively contained samples carrying NSD1 mutations or K36M substitutions, reported here for the first time in HNSCC. Characterized by H3K36 alteration and DNA hypomethylation, this subgroup represents a significant subset of HPV-negative tumors (13%). Other clusters included a subgroup highly enriched in HPV-positive samples along with groups A, B and C, which did not harbor obviously defining clinico-pathological features or genetic alterations. Except for HPV-positive cancers, we did not observe any statistically significant differences in event-free survival among methylation clusters. This may be due to the relatively short median follow-up time (401 d) of the TCGA HNSCC cohort. When using a similar unsupervised clustering approach on RNA-seq data ( Supplementary Fig. 12 ), we were also able to recover a group of samples enriched in NSD1 or H3K36M alterations, suggesting that the H3K36 cluster corresponds to a specific molecular entity. 
In-depth analysis of the NSD1 locus in samples included in the H3K36 subgroup enabled us to identify alterations such as splicing defects and focal deletion of a small set of exons that were not detectable by standard automated approaches ( Supplementary Table 2 ). Moreover, a gene-based approach previously failed to identify the enrichment of H3K36M substitutions, as they are observed across a wide array of histone H3 genes ( Supplementary Fig. 2b and Supplementary Table 4 ). This is consistent with our previous work, which demonstrated that the effect of the H3K36M alteration is dominant and independent of H3 isoform 31 , 32 ( Figs. 3 and 4 ). A complementary meta-analysis of earlier studies confirmed the prevalence of H3K36M and NSD1 alterations in HNSCC and their association with smoking and specific anatomical locations. We replicated our genomic findings using an independent, previously unpublished cohort of HNSCC patient samples and showed that the mutant histone is expressed exclusively in tumor cells and leads to decreases in H3K36me2 and H3K36me3 ( Fig. 3 ). In contrast, NSD1 loss leads to decreased H3K36me2 ( Figs. 3 and 4 ), consistent with previous biochemical characterization of the enzyme as a specific H3K36 mono- and dimethyltransferase. Given the converging effects of NSD1 and H3K36M alterations on DNA methylation and H3K36me2, it appears that these alterations may help drive HNSCC pathogenesis through a common mechanism involving epigenome deregulation. H3K36M alterations have been observed in chondroblastoma 17 and more recently in pediatric soft-tissue tumors 32 . Our group also showed that, when introduced into mesenchymal progenitor cells, H3K36M was sufficient to induce undifferentiated sarcomas in mice by blocking cellular differentiation 32 . Our transcriptome analysis showed that genes specifically downregulated in the H3K36 subgroup are involved in cellular differentiation. 
These results support a model whereby an altered epigenetic landscape, mediated by impaired H3K36 methylation subsequent to the genetic alterations, arrests keratinocytes in a progenitor state refractory to differentiation stimuli. This differentiation blockade in turn synergizes with signaling and cell cycle deregulation to facilitate HNSCC development. Notably, no NSD2 mutations were found in the H3K36 subgroup, and NSD1 deficiency was sufficient to decrease H3K36me2 in HNSCC cell lines despite normal expression of NSD2 . Moreover, a recent report showed overexpression of NSD2 and increased H3K36me2 in a set of HNSCC samples 35 . Therefore, it appears that NSD1 and NSD2 may have nonredundant roles in certain cell types and that their effect can vary on the basis of the DNA methylation group. Further studies are warranted to investigate the distinct mechanisms by which gain or loss of H3K36me2 promotes HNSCC development. The H3K36 cluster showed marked DNA hypomethylation compared to the other methylation subgroups. Interestingly, a recent study observed a DNA hypomethylation signature in Sotos syndrome, a monogenic disorder defined by germline NSD1 mutations 36 . Therefore, the DNA hypomethylation signature in the H3K36 subgroup is likely to be caused by genetic or H3K36M-mediated biochemical inactivation of NSD1 . Several recent reports suggest a link between H3K36 methylation and the binding of de novo DNA methyltransferases (DNMT3A and DNMT3B) 37 , 38 . The PWWP domain of DNMT3A, which is essential for its chromatin recruitment, specifically binds to H3K36me2 and H3K36me3. In agreement with this, the gene-body localization of Dnmt3b in mouse embryonic stem cells depends on Setd2-mediated H3K36me3 (ref. 38 ). Therefore, the recruitment of DNMT3A and DNMT3B could be impaired in NSD1 mutant or H3K36M mutant tumors, leading to the global DNA hypomethylation phenotype observed in the H3K36 cluster. 
Large and comprehensive genomic and epigenomic surveys of HNSCCs have increased our understanding of the key driver mutations and heterogeneous nature of this disease. Most of these studies emphasize the role of mutations affecting the cell cycle (in TP53 and CDKN2A ) or signaling pathways, including growth factor receptors ( FGFR , EGFR ), RAS and PI3K pathways. Accordingly, a number of targeted therapies to attenuate signaling defects are under investigation in preclinical studies and clinical trials. However, recent studies in HNSCCs indicate that recurrent or metastatic tumors often fail to reproduce the mutational landscape of the primary tumor, specifically for the aforementioned targetable alterations 10 . In contrast, this evolutionary dynamic has not been observed in tumors carrying epigenetic drivers. Indeed, in the previously studied cancers that are promoted by mutations in genes encoding epigenetic modulators, such as DNMT3A, IDH 39 or histone H3 (ref. 40 ), the epigenetic drivers are invariably present throughout the course of the disease. Our study uncovers a previously unsuspected role of impaired H3K36 methylation in HNSCCs. This finding underscores the need for further investigation into the mechanisms of epigenome deregulation in HNSCC tumorigenesis, as targeting epigenetic modifiers may be, when further validated by additional in vitro and in vivo studies, of therapeutic benefit for HNSCC patients with altered H3K36 methylation. Methods DNA methylation clustering. We obtained level 1 (IDAT format) methylation data from the TCGA data portal for 528 HNSCC samples. We extracted raw beta values using the R package minfi 41 . We removed probes that contained or targeted SNPs as per Illumina's HumanMethylation450k Manifest. We performed hierarchical clustering using the 1,000 most variable sites. Distance ( d ) was assessed using d = 1 − r , where r is the Pearson product-moment coefficient. 
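The distance computation just described (Pearson product-moment correlation r between sample methylation profiles over the most variable sites, with d = 1 − r) can be sketched in plain Python. This is an illustrative sketch only: the actual analysis was run in R with minfi on 450k array data, and the function names and toy matrix below are assumptions of this example, not part of the paper's pipeline.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def most_variable_columns(rows, k):
    """Indices of the k columns (CpG sites) with highest variance across rows (samples)."""
    ncols = len(rows[0])
    def col_var(j):
        col = [r[j] for r in rows]
        m = sum(col) / len(col)
        return sum((v - m) ** 2 for v in col) / len(col)
    return sorted(range(ncols), key=col_var, reverse=True)[:k]

def correlation_distance_matrix(rows, k):
    """Pairwise d = 1 - r between samples, restricted to the k most variable sites."""
    keep = most_variable_columns(rows, k)
    profiles = [[r[j] for j in keep] for r in rows]
    n = len(profiles)
    return [[1.0 - pearson(profiles[i], profiles[j]) for j in range(n)]
            for i in range(n)]
```

Average-linkage (UPGMA) clustering, as described in the Methods, would then be run on this distance matrix; in practice that step is a one-liner in R (hclust with method = "average") or in scipy (linkage with method='average').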
Clustering was performed using average linkage (unweighted pair group method with arithmetic mean (UPGMA)). To establish robustness of our groups, we tested their predictive ability using tenfold cross-validation. We computed sample-by-sample correlations between test-set and training-set samples and averaged them within each training-set subgroup. Test-set labels were predicted according to the highest average correlation among the training-set subgroups. Somatic mutations. We used all the somatic mutation calls from level 2 (MAF format) data from the TCGA data portal for 517 samples. We merged calls from the different sequencing centers, removed duplicates and converted the data (MAF to VCF) using a custom script. We annotated the resulting comprehensive mutation list using ANNOVAR 42 to obtain the resulting amino acid changes. To address the prevalence of specific mutations in HPV-negative HNSCC methylation subgroups, we performed an ANOVA on the fraction of mutations observed in a target gene (number of mutations in target gene/total mutation count), excluding the HPV-positive subgroup. Every gene was treated independently, and P values were corrected for multiple testing using the Bonferroni method. To predict the impact of NSD1 missense mutations on the protein function, we used Mutation Assessor, which evaluates the functional impact of an amino acid substitution on the basis of the evolutionary conservation of amino acid residues in a protein family multiple sequence alignment, assuming that mutations that affect conserved residues are more likely to be functional. Alterations in the H3K36 cluster. Samples in the H3K36 cluster not carrying a somatic H3K36M or NSD1 alteration were manually assessed at the NSD1 locus. We identified large chromosomal deletions containing NSD1 ( n = 2), focal deletion of a set of exons ( n = 2) or splicing defects ( n = 1). 
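The cross-validation assignment rule described above (a test sample is given the label of the training subgroup with which it has the highest average sample-by-sample correlation) can be sketched as follows. This is a hedged illustration, not the paper's code; the function name and the toy profiles are assumptions of this example.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def predict_subgroup(test_profile, train_profiles, train_labels):
    """Assign the test sample to the training subgroup with the highest
    average correlation, mirroring the cross-validation step in the Methods."""
    by_label = {}
    for profile, label in zip(train_profiles, train_labels):
        by_label.setdefault(label, []).append(pearson(test_profile, profile))
    avg = {label: sum(rs) / len(rs) for label, rs in by_label.items()}
    return max(avg, key=avg.get)
```

In the full tenfold scheme, this prediction would be repeated for every held-out fold and the predicted labels compared with the cluster assignments from the complete data set.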
RNA-seq analysis showed absent NSD1 transcripts in two samples with noninterpretable WES data, and one sample in the cluster lacked genomic data ( Supplementary Table 2 ). Lollipop plots were obtained from MutationMapper. Clinical annotation. Smoking. We tested for association with smoking using the 4-factor definition given in the NCI CDE browser. To test for enrichment for smoking in H3K36 samples, we performed a Fisher's exact test on those factors. Anatomical location. Anatomical locations were as follows (some anatomical locations are grouped to simplify visual representation): alveolar ridge; oral cavity (buccal mucosa, floor of mouth, hard palate, oral cavity, oral tongue); pharynx (base of tongue, hypopharynx, oropharynx); larynx; tonsil. HPV status. In situ hybridization and p16 tests were combined together so that samples flagged as HPV-positive by either test would be considered as HPV-positive in our analysis, and vice versa for a negative diagnosis. Samples with no information were labeled as N/A. Copy number analysis. We obtained the level 3 CNV calls from the TCGA data portal for 520 samples. We used the nocnv_hg19.seg.txt as input for GISTIC 2.0 to extract significant CNVs. We clustered samples on the basis of CNV using a method previously described 6 . We considered NSD1 GISTIC value of −0.3 as sufficient evidence for loss of heterozygosity (LOH). Mutation signature. To infer the mutagenic processes in HNSCC subgroups, we used the somatic mutations obtained as described above. Samples from each methylation-defined subgroup were merged because of the overall low number of mutations from WES. We appended mutations from eight TCGA cancers using the R package SomaticCancerAlterations. We extracted the mutational context (flanking nucleotides), extracted the somatic spectrum of each cancer and computed a 5-signature model using non-negative matrix decomposition using the R package SomaticSignatures. 
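The HPV status merging rule in the clinical annotation section above (a sample flagged HPV-positive by either the in situ hybridization or the p16 test is called positive, analogously for negative, and samples with no information are labeled N/A) can be sketched as a small helper. This is one reading of the rule described in the Methods; how the original analysis resolved a positive call from one test and a negative call from the other is not stated, and here a positive call is assumed to take precedence. The function name and call encoding are assumptions of this example.

```python
def combine_hpv_status(ish, p16):
    """Merge in situ hybridization and p16 calls into one HPV label.

    Each input is "pos", "neg", or None (test not done / no information).
    Assumption of this sketch: a positive call by either test wins; otherwise
    a negative call by either test; with no information, the label is "N/A".
    """
    calls = {c for c in (ish, p16) if c is not None}
    if "pos" in calls:
        return "HPV-positive"
    if "neg" in calls:
        return "HPV-negative"
    return "N/A"
```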
Hierarchical clustering was done on the somatic spectrum matrix using cosine distance.

RNA-seq. We obtained raw BAM files using gtdownload from the Cancer Genomics Hub. Gene expression was quantified with featureCounts 43 using the UCSC hg19 annotation with isoforms collapsed into a single entry per gene. We used the raw counts and normalized them for total library size using DESeq2 (ref. 44). We performed hierarchical clustering using log2-transformed read counts for the 1,000 most variable genes. Distance was assessed as d = 1 − r, where r is the Pearson product-moment coefficient. Clustering was performed using average linkage (UPGMA). To obtain differentially expressed genes specific to the H3K36 cluster, we performed two independent differential expression analyses. We used all HNSCC tumor samples and looked for genes differentially expressed specifically in the H3K36 cluster as compared with all other subgroups, using anatomical location as a covariate. A gene was considered differentially expressed when it had a Bonferroni-adjusted P < 0.01 and an absolute log2 fold change > 2.

Cell culture, siRNA transfection and lentiviral infection. FaDu (ATCC), SKN-3 (JCRB cell bank) and PCI-4B (University of Pittsburgh) cells were cultured in DMEM (Invitrogen) with 10% FBS (CellGro). SCC-4 (ATCC) cells were cultured in DMEM:F12 medium (Invitrogen) with 10% FBS and 400 ng/ml hydrocortisone (Sigma-Aldrich). These HNSCC cell lines have been demonstrated to be tumorigenic 45, 46, 47, 48; NSD1 mutational status was obtained through COSMIC or the Cancer Cell Line Encyclopedia and verified by immunoblotting. For siRNA transfection, cells were reverse transfected with Lipofectamine RNAiMAX (Life Technologies) and ON-TARGETplus SMARTpool siRNA against human NSD1 (GE Healthcare). After 72 h, cells were harvested for immunoblotting analysis.
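The differential-expression cutoff described above (Bonferroni-adjusted P < 0.01 and |log2 fold change| > 2) amounts to a simple filter over per-gene statistics. A sketch with illustrative names, not the DESeq2 interface:

```python
def h3k36_specific_genes(results, n_tests, p_cut=0.01, lfc_cut=2.0):
    # results: {gene: (raw_p, log2_fold_change)} from a differential test.
    # Bonferroni adjustment: p_adj = min(1, raw_p * n_tests); a gene is a
    # hit when p_adj < p_cut and |log2 fold change| > lfc_cut.
    return [g for g, (p, lfc) in results.items()
            if min(1.0, p * n_tests) < p_cut and abs(lfc) > lfc_cut]
```

Bonferroni is the most conservative of the standard multiple-testing corrections, which suits a claim that a gene is specific to one subgroup.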
To generate cells stably expressing H3K36M mutant histone, cells were transduced with concentrated lentivirus (2 × 10^7 infectious units per ml) generated as previously described 32. Transduced cells were placed under puromycin selection (2 μg/ml) 48 h after transduction and selected for 1 week before being harvested for immunoblotting or mass spectrometry analysis.

Immunoblotting. For immunoblotting, acid-extracted histones or whole cell lysates were separated by SDS-PAGE, transferred to a nitrocellulose membrane, blocked in 5% nonfat milk in PBS plus 0.5% Tween-20, probed with primary antibodies, and detected with horseradish peroxidase–conjugated anti-rabbit or anti-mouse secondary antibodies (GE Healthcare). Primary antibodies used were: anti-NSD1 (NeuroMab clone N312/10); anti-NSD2 (Millipore, MABE191); anti-NSD3 (Abcam, ab180500); anti-SETD2 (Abcam, ab31358); anti-H3K36me3 (Active Motif, 61021); anti-H3K36me2 (Cell Signaling Tech, 2901); anti-H3K36M (gift of Millipore); anti-H3 (Abcam, ab1791); anti-tubulin (Cell Signaling Tech, 2144) 32.

Immunohistochemistry (IHC). All samples were obtained with informed consent after approval of the Institutional Review Board of the respective hospitals where patients were treated, and were independently reviewed by pathologists with expertise in head and neck cancer (W.S. and C.J.H.). Tissue microarrays (TMAs) from a collection of 158 patients with HNSCC were obtained following standard procedures. Consecutive sections from the TMAs were cut at 4 μm, placed on SuperFrost/Plus slides (Fisher) and dried overnight at 37 °C before IHC processing. The slides were then loaded onto the Discovery XT Autostainer (Ventana Medical System). All solutions used for automated IHC were from Ventana Medical System unless otherwise specified. Slides underwent de-paraffinization and heat-induced epitope retrieval (prediluted CC1 solution, catalog number 950-124, standard protocol).
Immunostaining was performed using a heat protocol described previously 32. Primary antibody was omitted as a negative control. Mouse monoclonal anti-H3K36me3 (Diagenode, CS-058-100) and anti-H3K36me2 (Cell Signaling Tech, 2901) antibodies were diluted at 1:500 in antibody diluent (catalog number 251-018). Rabbit polyclonal anti-H3K36M and anti-NSD1 antibodies (custom made by Millipore) were diluted at 1:22 and 1:20, respectively, in the antibody diluent for 32 min at 37 °C and subsequently processed as previously described 32. Slides were counterstained with hematoxylin for 4 min, treated with Bluing Reagent for 4 min, removed from the autostainer, washed in warm soapy water, dehydrated through graded alcohols, cleared in xylene and mounted with Eukitt Mounting Medium (EMS, 15320). Sections were scanned using an Aperio system. Scoring was done on 100 tumor cells from four separate fields for all four antibodies. Staining was considered conclusive for H3K36M when strong specific nuclear staining was seen in tumor cells with no immunoreactivity in adjacent stroma. In the absence of genotype data, a tumor was considered as possibly carrying an NSD1 mutation if more than 70% of tumor cells lacked nuclear staining while normal NSD1 nuclear staining could be seen in infiltrating inflammatory cells or adjacent stroma. H3K36me3 and H3K36me2 stains were scored similarly to NSD1.

Histone acid extraction, histone derivatization and PTM analysis by nano-liquid chromatography–mass spectrometry (nano-LC-MS). Cells were lysed in hypotonic lysis buffer (10 mM HEPES, 10 mM KCl, 1.5 mM MgCl2, 0.5 mM DTT, protease inhibitors) for 1 h at 4 °C. H2SO4 was added to 0.2 N, followed by overnight rotation at 4 °C. After centrifugation (14,000 r.p.m. for 10 min at 4 °C), supernatants were collected, and proteins were precipitated in 33% TCA, washed with acetone, and resuspended in deionized water.
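The NSD1 scoring rule in the IHC protocol (loss called when more than 70% of scored tumor cells lack nuclear staining) reduces to a fraction over the four scored fields. A hypothetical sketch, not the pathologists' actual workflow:

```python
def nsd1_loss_call(negative_counts, cells_per_field=100, threshold=0.70):
    # negative_counts: stain-negative tumor cells per field; 100 cells are
    # scored in each of four separate fields in the protocol.
    frac = sum(negative_counts) / (cells_per_field * len(negative_counts))
    return frac > threshold
```

The call is only "possible NSD1 mutation" in the absence of genotype data, and requires retained staining in adjacent stroma as an internal positive control.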
Acid-extracted histones (15–25 μg) were resuspended in 100 mM ammonium bicarbonate (pH 8), derivatized using propionic anhydride and digested with trypsin as previously described 49. The resulting histone peptides were desalted using C18 Stage Tips, dried using a centrifugal evaporator and reconstituted in 0.1% formic acid in preparation for nano-LC-MS analysis. Nano-LC was performed using a Thermo Scientific Easy nLC 1000 system equipped with a 75 μm × 20 cm in-house-packed column using Reprosil-Pur C18-AQ (3 μm; Dr. Maisch GmbH). Buffer A was 0.1% formic acid and buffer B was 0.1% formic acid in 100% acetonitrile. Peptides were resolved using a two-step gradient from 0% to 26% B over 45 min, then from 26% B to 98% B over 10 min at a flow rate of 300 nL/min. The HPLC was coupled online to an Orbitrap Elite mass spectrometer operating in positive mode using a Nanospray Flex Ion Source (Thermo Scientific) at 2.40 kV. MS was performed using data-independent acquisition (DIA) as previously described 50 with slight modifications. Briefly, two full MS scans (m/z 300–1,100) were acquired in the orbitrap with a resolution of 120,000 (at 200 m/z) every 8 DIA MS/MS events, using isolation windows of 50 m/z each (for example, 300–350, 350–400 and so on). The full MS scan was performed twice within the same duty cycle to allow a more resolved definition of the precursor peak profile. MS/MS events were acquired in the ion trap operating in normal mode. Fragmentation was performed using collision-induced dissociation (CID) in the ion trap mass analyzer with a normalized collision energy of 35. AGC target and maximum injection time were 10^6 and 50 ms for the full MS scan, and 10^4 and 150 ms for the MS/MS scan, respectively. Mass-to-charge ratios were calculated for each modified form of H3 peptide.
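The DIA isolation scheme above, contiguous 50 m/z windows tiling the 300–1,100 m/z survey range, can be generated programmatically. A minimal sketch (the instrument method editor, not code, defines the real windows):

```python
def dia_windows(lo=300, hi=1100, width=50):
    # Contiguous isolation windows spanning the survey scan range,
    # e.g. (300, 350), (350, 400), ... as described for the DIA method.
    return [(m, m + width) for m in range(lo, hi, width)]
```

With these defaults the scheme yields 16 windows, so every precursor in the survey range is fragmented once per cycle regardless of whether it was detected in the full MS scan.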
The area under the curve was calculated from extracted ion chromatograms using Xcalibur Qual Browser for each histone peptide mass-to-charge ratio, with a 10 p.p.m. mass accuracy cutoff. Data from all detectable charge states were summed. The area for each modification state of a peptide was normalized against the sum of all peptides sharing the same sequence to give the relative abundance of each modified state. Three biological replicates were analyzed per condition, and the relative abundance of each peptide modification was averaged across runs. Statistical significance was determined using Student's t-test.

Data availability. All sequencing and methylation data were obtained from the TCGA Genomic Data Commons ( ). The NCI CDE browser can be accessed at . Anatomical locations of cancers were grouped as described by the NIH National Cancer Institute at .

URLs. Mutation Assessor, ; MutationMapper ( ); Cancer Genomics Hub (CGHub), .
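The histone PTM normalization described above (each modified form divided by the summed area of all forms sharing the same peptide sequence) can be sketched as follows; the keys and areas here are illustrative, not real measurements:

```python
from collections import defaultdict

def relative_abundance(areas):
    # areas: dict (peptide_sequence, modification_state) -> summed XIC area.
    # Each modified form is normalized to the total area of all forms that
    # share the same backbone sequence.
    totals = defaultdict(float)
    for (seq, _mod), area in areas.items():
        totals[seq] += area
    return {key: area / totals[key[0]] for key, area in areas.items()}
```

By construction the relative abundances of all forms of one peptide sum to 1, which makes modification states comparable across runs with different loading.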
A team of researchers at the Research Institute of the McGill University Health Centre (RI-MUHC) has found an epigenetic modification that might be the cause of 15% of adult throat cancers linked to alcohol and tobacco use. This is a first in the field of epigenetics, and the researchers hope the discovery will pave the way for new, more effective targeted treatments over the next few years. "This discovery was absolutely unexpected, since it seemed highly improbable that the kind of alterations of the epigenome we had previously found in other types of tumours in children and young adults could also target an epithelial tumour like throat cancer, which occurs only in adults," explains Dr. Nada Jabado, a researcher at the RI-MUHC and one of the principal authors of the study published in Nature Genetics. Head and neck cancers, also called oropharyngeal or throat cancers, often have devastating consequences. Standard treatments involve surgery, radiotherapy or chemotherapy. Unfortunately, the side effects of these treatments are significant and relapses are common, which is why oncologists are seeking treatments that are more effective and less harmful. The discovery of this epigenetic modification opens new treatment possibilities. In fact, some promising drug molecules already on the market for other illnesses could be tested for head and neck cancers, as well as other cancers such as multiple myeloma and lung cancer. Dr. Jabado, who is also a pediatric hemato-oncologist, hopes the discovery will have positive repercussions for pediatric cancers as well. "Now that we've identified this cohort of patients, we can move quite quickly since, in the case of adults, as opposed to children, there are more patients and lots of clinical trials. The medicines could then be tested on children afterward."

About the discovery

Dr.
Jabado's work focuses on epigenetics in pediatric cancers, and more specifically on mutations of histone H3. Histones are proteins that package our DNA and regulate the expression of our genes. For this group of researchers, collaboration between scientists and access to the vast genomic databases of patients around the world is essential. They were particularly intrigued by a 2015 publication on head and neck cancer by The Cancer Genome Atlas (TCGA) consortium that mentioned one of the genes that regulates H3. "We made use of the same data but took a completely different approach. Instead of concentrating on genetic mutations, we looked at the effect of these mutations on histone H3 proteins. That's when we discovered that the histone H3 protein was abnormal or incorrectly modified in about 15% of patients with head and neck cancer. The data were there, but this fact had gone unnoticed," explains Dr. Jacek Majewski, a researcher at McGill University and one of the principal authors of the study. "It's crucial to have access to public data, because it allows us to advance faster and go further in our analyses. In our case, this discovery revealed a sub-group of patients who might benefit from a therapy that targets the epigenome. This could improve the treatment of more than one in five patients suffering from devastating oropharyngeal cancer," points out Dr. Jabado. "We are currently collaborating with two big groups specializing in head and neck cancer with the goal of finding treatments."

Epigenetics

The body is composed of a large number of different types of cells (neurons, skin cells, fat cells, immune system cells, etc.). Even though they differ, all these cells share the same DNA, or genome. Scientists have only recently discovered that their differences can be explained by epigenetics, that is, by what triggers gene activity in each cell. To illustrate her research, Dr.
Jabado compares the genome to notes of music and our epigenetics to a musical score. "Using the same scale, you can create many different melodies. Like a sheet of music that dictates which notes to play and in which order, epigenetics organizes and provides meaning to our genes. If an error makes its way into the score, there's going to be a problem with the music." Epigenetics can provide explanations for why environmental factors, like tobacco or alcohol, can induce changes in the expression of our genes without actually modifying our DNA.
nature.com/articles/doi:10.1038/ng.3757
Biology
A new method to pinpoint genetic differences between species could benefit human health
Carly V. Weiss et al, Genetic dissection of interspecific differences in yeast thermotolerance, Nature Genetics (2018). DOI: 10.1038/s41588-018-0243-4 Journal information: Nature Genetics
http://dx.doi.org/10.1038/s41588-018-0243-4
https://phys.org/news/2018-10-method-genetic-differences-species-benefit.html
Abstract Some of the most unique and compelling survival strategies in the natural world are fixed in isolated species 1 . To date, molecular insight into these ancient adaptations has been limited, as classic experimental genetics has focused on interfertile individuals in populations 2 . Here we use a new mapping approach, which screens mutants in a sterile interspecific hybrid, to identify eight housekeeping genes that underlie the growth advantage of Saccharomyces cerevisiae over its distant relative Saccharomyces paradoxus at high temperature. Pro-thermotolerance alleles at these mapped loci were required for the adaptive trait in S. cerevisiae and sufficient for its partial reconstruction in S. paradoxus . The emerging picture is one in which S. cerevisiae improved the heat resistance of multiple components of the fundamental growth machinery in response to selective pressure. Our study lays the groundwork for the mapping of genotype to phenotype in clades of sister species across Eukarya. Main Geneticists since Mendel have sought to understand how and why wild individuals differ. Studies to this end routinely test for a relationship between genotype and phenotype via linkage or association 2 . These familiar approaches, though powerful in many contexts, have an important drawback—they can only be applied to interfertile members of the same species. This rules out any case in which an innovation in form or function evolved long ago and is now fixed in a reproductively isolated population. As organisms undergo selection over long timescales, their traits may be refined by processes quite different from those that happen early in adaptation 3 , 4 . We know little about these mechanisms in the wild, expressly because when the resulting lineages become reproductively incompatible, classic statistical-genetic methods cannot be used to analyze them 1 . 
To date, the field has advanced largely on the strength of candidate-based studies that implicate a single variant gene in an interspecific trait 5 , 6 , with the complete genetic architecture often remaining unknown. Against the backdrop of a few specialized introgression 7 , 8 , 9 , 10 and molecular-evolution 11 techniques available in the field, dissection of complex trait differences between species has remained a key challenge. Here we develop a new genetic mapping strategy, based on the reciprocal hemizygosity test 12 , 13 , and use it to identify the determinants of a difference in high-temperature growth between isolated Saccharomyces yeast species. We validate the contributions of the mapped loci to the thermotolerance trait, and we investigate their evolutionary history. At high temperature, the yeast Saccharomyces cerevisiae grows qualitatively better than other members of its clade 14 , 15 , 16 , including its closest relative, Saccharomyces paradoxus , from which it diverged ~5 million years ago 17 . In culture at 39 °C, S. cerevisiae doubled faster than S. paradoxus and accumulated more biomass over a time course, a compound trait that we call thermotolerance. The magnitude of differences in thermotolerance between species far exceeded that of strain variation within each species (Fig. 1 ), whereas no such effect was detectable at 28 °C (Supplementary Fig. 1 ). The failure by S. paradoxus to grow to high density at 39 °C was, at least in part, a product of reduced survival relative to that of S. cerevisiae , as cells of the former were largely unable to form colonies after heat treatment (Supplementary Fig. 2 ). In microscopy experiments, S. paradoxus cells were almost uniformly visible as large-budded dyads after 24 h at 39 °C (Supplementary Fig. 3 ), suggestive of defects late in the cell cycle as a proximal cause of death 18 . No such tendency was apparent in S. cerevisiae at high temperature, or in either species at 28 °C (Supplementary Fig. 3 ). 
Fig. 1: S. cerevisiae grows better at high temperature than S. paradoxus. a, Each point reports optical density (OD600) of the indicated wild isolate of S. cerevisiae (various shades of blue) or S. paradoxus (orange) over a time course of growth at 39 °C. Each solid line shows a logistic population growth model fit to the respective cell density measurements. b, Each bar reports mean efficiency (n = 4 cultures) of the indicated strain at 39 °C, defined as the difference between cell density at 24 h of growth and that at the time of inoculation. *P = 3.78 × 10^−11, two-sample, two-tailed t-test; individual measurements are reported as circles.

We set out to dissect the genetic basis of S. cerevisiae thermotolerance, using a genomic implementation of the reciprocal hemizygote test 12, 13 (Fig. 2a). For this purpose, we first mated S. cerevisiae strain DBVPG1373, a soil isolate from the Netherlands, with S. paradoxus strain Z1, an English oak tree isolate. The resulting sterile hybrid had a thermotolerance phenotype between those of its purebred parents (Supplementary Fig. 4). In this hybrid background we generated hemizygote mutants using a plasmid-borne, selectable piggyBac transposon system 19. We cultured the pool of mutants in bulk for approximately seven generations at 39 °C and, separately, at 28 °C. From cells in each culture we sequenced transposon insertion locations 20 as a readout of the genotypes and abundance of mutant hemizygote clones present in the selected sample. In these sequencing data, at each of 4,888 genes we detected transposon mutant clones in both species' alleles in the hybrid (Supplementary Fig. 5), with transposon insertions distributed in a largely unbiased manner across the genome (Supplementary Fig. 6). For a given gene, we tabulated the abundances of mutants whose transposon insertion fell in the S.
cerevisiae allele of the hybrid, after high-temperature selection relative to the 28 °C control, and we compared them to the abundance distribution of mutants in the S. paradoxus allele (Fig. 2a). Any difference in abundance between these reciprocal hemizygote cohorts can be ascribed to variants between the retained alleles at the respective locus; we refer to the comparison as reciprocal hemizygosity analysis via sequencing (RH-seq). Integrating this approach with a quality-control pipeline (Supplementary Fig. 5), in a survey of 3,416 high-coverage genes we identified 8 top-scoring hits (false discovery rate 0.01; Fig. 2b and Supplementary Table 1). At each such locus, disruption of the S. cerevisiae allele in the hybrid was associated with low clone abundance after selection at 39 °C relative to at 28 °C (Fig. 2b), reflecting a requirement for the S. cerevisiae allele for thermotolerance. All of the genes mapped by RH-seq were annotated as housekeeping factors: ESP1, DYN1, MYO1, CEP3, APC1, and SCC2 function in chromosome segregation and cytokinesis, and AFG2 and TAF2 in transcription/translation.

Fig. 2: Mapping thermotolerance by RH-seq. a, A transposon (Tn; rectangle) disrupts the allele from S. cerevisiae (blue) or S. paradoxus (orange) of a gene (YFG) in an interspecific hybrid (green). Clones lacking the pro-thermotolerance allele grow poorly at 39 °C (dashed outlines), as measured by sequencing and reported in smoothed histograms (traces, colored to indicate the species' allele that is not disrupted). b, Each panel reports results from one RH-seq hit locus. In the main graph of a given panel, the x axis reports the log2 of abundance, measured by RH-seq after selection at 39 °C, of a clone harboring a transposon insertion in the indicated species' allele, relative to the analogous quantity for that clone from selection at 28 °C.
The y axis reports the proportion of all clones bearing insertions in the indicated allele that exhibited the abundance ratio on the x axis, as a kernel density estimate. In insets, each bar reports the relative efficiency, calculated as the mean growth efficiency at 39 °C (n = 8–12 cultures) of the indicated targeted-deletion hemizygote measured in liquid culture assays, normalized to the analogous quantity for the wild-type hybrid. *P ≤ 0.002, two-sample, one-tailed t-test; individual measurements are reported as circles. See Supplementary Table 1 for exact P values and sample numbers.

To evaluate the role in thermotolerance of genes that emerged from RH-seq, we first sought to verify that growth differences between hemizygotes at a given locus were the consequence of allelic variation rather than an artifact of our genomic approach. To this end, at each RH-seq hit gene we engineered hemizygotes by targeted deletion of each species' allele in turn in the hybrid. In growth assays, the strain lacking the S. cerevisiae allele at each gene grew poorly at high temperature (Fig. 2b), with little impact at 28 °C (Supplementary Fig. 7), as inferred from RH-seq. Likewise, at each locus, the S. paradoxus allele made no contribution to the phenotype of the hybrid, since deleting it had no effect (Fig. 2b and Supplementary Fig. 7). Locus effect sizes from this single-gene validation paradigm largely paralleled the estimates from RH-seq (R^2 = 0.74). We conclude that RH-seq hits represent bona fide determinants of thermotolerance in the hybrid. We expected that variation at our RH-seq hits, though mapped by virtue of their impact in the hybrid, could also explain thermotolerance differences between purebred species. As a test of this notion, for each mapped gene in turn, we replaced the two copies of the endogenous allele in each purebred diploid with the allele from the other species. Growth assays of these transgenics established the S.
cerevisiae allele of each locus as necessary or sufficient for biomass accumulation at 39 °C, or both: thermotolerance in the S. cerevisiae background was compromised by S. paradoxus alleles at 7 of the 8 genes, and thermotolerance in the S. paradoxus background was improved by S. cerevisiae alleles at 6 of 8 loci (Fig. 3). Allele replacements had little impact on growth at 28 °C (Supplementary Fig. 8). These trends mirrored the direction of locus effects from hemizygotes in the hybrid, though the magnitudes were often different. Most salient were the small effect sizes in S. paradoxus relative to other backgrounds, indicative of strong epistasis in this poorly performing species (Supplementary Fig. 9). Thus, the loci mapped by RH-seq in an interspecies hybrid contribute causally to thermotolerance in purebreds, with effect sizes that depend on the context in which they are interrogated.

Fig. 3: S. cerevisiae thermotolerance alleles are necessary and sufficient for growth at high temperature. a, Each bar reports mean growth efficiency at 39 °C, measured in liquid culture assays (n = 8–12 cultures), of an S. cerevisiae strain harboring the S. paradoxus allele at the indicated RH-seq hit locus, relative to the analogous quantity for wild-type S. cerevisiae. b, Data are as in a, except that each bar reports results from an S. paradoxus strain harboring the S. cerevisiae allele at the indicated locus, normalized to wild-type S. paradoxus. In a given panel, the top and bottom dotted lines report the relative efficiency of wild-type S. cerevisiae and S. paradoxus, respectively. *P ≤ 0.036; **P ≤ 0.001, one-sample, one-tailed t-test; individual measurements are reported as circles. See Supplementary Table 1 for exact P values and sample numbers.

Avid growth at high temperature is a defining characteristic of S. cerevisiae as a species, relative to other members of Saccharomycetes 14, 15, 16 (Fig. 1).
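The core RH-seq comparison described earlier, per-clone log2 abundance ratios at 39 °C versus the 28 °C control contrasted between the two hemizygote cohorts, can be sketched as follows. This is illustrative only: the labels "cer"/"par" mark which species' allele carries the transposon insertion (so the other allele is the one retained), and the published analysis uses a statistical test with FDR control rather than a simple difference of means:

```python
from math import log2
from statistics import mean

def rh_seq_score(clones):
    # clones: (disrupted_allele, reads_39C, reads_28C) per insertion clone.
    # An insertion in one species' allele leaves only the other allele intact.
    cohorts = {"cer": [], "par": []}
    for allele, r39, r28 in clones:
        cohorts[allele].append(log2(r39 / r28))
    # A strongly negative score means clones retaining only the S. paradoxus
    # allele (insertion in "cer") fare worse at 39 C, implicating the locus.
    return mean(cohorts["cer"]) - mean(cohorts["par"])
```

Because deep mutagenesis yields many independent clones per gene, each cohort is a distribution, which is what buffers the method against secondary mutations in any single clone.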
In principle, the loci mapped by RH-seq could be unique to the genetic architecture of thermotolerance in our focal S. cerevisiae strain, DBVPG1373, or they could be part of a mechanism common to many S. cerevisiae isolates. In support of the latter model, transgenesis experiments showed that a diverse panel of S. cerevisiae isolates all harbored alleles conferring modest but significant growth benefits at high temperature, and alleles from multiple S. paradoxus isolates were deleterious (Supplementary Fig. 10a,b). We detected no such impact at 28 °C (Supplementary Fig. 10a,b). Similarly, we found elevated sequence divergence from S. paradoxus to be a shared feature of S. cerevisiae strains at the loci mapped by RH-seq (using the absolute divergence measure D xy; Supplementary Fig. 10c). These findings indicate that the S. cerevisiae population accumulated divergent, pro-thermotolerance alleles at appreciable density in the loci mapped by RH-seq, consistent with a role in the trait for these genes across the species. Additionally, in the yeast phylogeny, RH-seq hit genes were distinguished by accelerated evolution along the branch leading to S. cerevisiae, as expected if the ancestral program has been conserved among the other species in the clade (Supplementary Fig. 10c). In this work, we have developed the RH-seq method for genome-wide mapping of natural trait variation, and we have used it to elucidate the genetics of thermotolerance in reproductively isolated yeasts. Growth at high temperature is likely a derived character in S. cerevisiae 14, 15, 16, and the mechanism by which evolution built the trait, after the split from S. paradoxus, has remained unknown. In pursuing the genetics of this putative ancient adaptation, we complement studies of younger, intraspecific variants that erode thermotolerance in the few S. cerevisiae isolates that have lost the trait relatively recently 12, 21.
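The absolute divergence measure D xy used above is the average per-site difference across all between-population pairs of aligned sequences. A minimal sketch on toy alignments (real analyses operate on genome alignments and handle gaps and missing data):

```python
def d_xy(pop_x, pop_y):
    # Absolute divergence D_xy: mean proportion of differing sites over
    # all between-population pairs of aligned, equal-length sequences.
    pairs = [(x, y) for x in pop_x for y in pop_y]
    diffs = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs)
    return diffs / (len(pairs) * len(pop_x[0]))
```

Unlike relative measures such as F ST, D xy does not depend on within-population diversity, which makes it suited to asking whether RH-seq hit loci are unusually diverged between the two species.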
We have sought to shed light on more ancient evolutionary events by considering S. paradoxus as a representative of the ancestral state, to which thermotolerant S. cerevisiae can be compared. Using this approach, we have mapped eight loci at which S. cerevisiae alleles are necessary and sufficient for thermotolerance. As our RH-seq scan did not attain complete genomic coverage, the hits we did find likely represent a lower bound on the complexity of the architecture of the trait. Six of the RH-seq hit genes are essential for growth in standard conditions 22 , and all eight contribute to fundamental growth processes. ESP1 , DYN1 , CEP3 , APC1 , MYO1 , and SCC2 mediate mitotic spindle assembly, chromatid cohesion and separation, cytokinesis, and mitotic exit; AFG2 regulates the release of maturation factors from the ribosome; and TAF2 encodes a TFIID subunit. In each case, our growth experiments in the interspecific hybrid have shown that the S. paradoxus allele acts as a hypomorph at high temperature. Our work leaves unanswered exactly how heat-treated S. paradoxus dies in the absence of these functions, though the cells’ large-budded morphology strongly suggests regulated arrest or stochastic failure late in the cell cycle. That said, given that some but not all RH-seq hit loci have roles in mitosis, this is likely only one of the choke points at which S. paradoxus alleles are a liability at high temperature. Assuming that these heat-sensitive alleles also littered the genome of the common ancestor with S. cerevisiae , thermotolerance would have evolved along the S. cerevisiae lineage by resolving each of them, boosting the heat resistance of many housekeeping processes. Such a mechanism would dovetail with the recent finding that, across species, the limiting temperature for cell growth correlates with the melting temperatures of a subset of essential proteins 23 . These insights into the evolution of a complex yeast trait serve as a proof of concept for RH-seq. 
To date, the reciprocal hemizygosity test has led to landmark discoveries in a candidate-gene framework, confirming the effects of variation at a given locus identified by other means 12 , 13 . Schemes to scale up the test have generated a genome’s worth of hemizygotes from deletion-strain purebreds, which tend to harbor secondary mutations that come through screens as false positives 24 , 25 . As such, a key advantage of RH-seq is that we carry out mutagenesis in the hybrid, which ensures coverage of essential genes and obviates the use of mutation-prone null genotypes. Furthermore, any secondary mutations that do arise in a given hemizygote clone, for example during a long competition in the condition of interest, would not have a strong influence on RH-seq mapping, because deep mutagenesis generates many independent clones per gene that are analyzed together. One important caveat of RH-seq, as in single-gene reciprocal hemizygote tests, is the assumption that no epistasis unique to the hybrid will mask the effects of loci underlying a trait difference of interest between the parents. In our case study, the genetic architecture of thermotolerance in the hybrid did bear out as relevant for the purebreds, albeit with locus effect sizes that varied across the backgrounds. More dramatic discrepancies may be particularly likely when the hybrid has a heterotic (that is, extreme) phenotype and is a poor model for the genetics of the parents 26 . The choice of a non-heterotic hybrid in which to pursue RH-seq would be analogous to classical linkage mapping in a cross whose progeny have, on average, phenotypes that are intermediate between those of the parents. In fact, although we have focused here on ancient divergence, the RH-seq method would be just as applicable to individuals within a species, as a high-resolution alternative to linkage analysis. 
We thus anticipate that RH-seq will accelerate the mapping of genotype to phenotype in many systems, whether the parents of a cross are closely related or members of a species complex that have been locally adapting for millions of years.

URLs. SGRP2 Database, ftp://ftp.sanger.ac.uk/pub/users/dmc/yeast/SGRP2/input/strains ; Yeast Resource Center, ; Saccharomyces Genome Database, ; RH-seq data analysis scripts, .

Methods

Strains. Strains used in this study are listed in Supplementary Table 2. Homozygous diploid strains of S. cerevisiae and S. paradoxus used as parents of the interspecific hybrid, and as the backgrounds for allele-swap experiments, were homothallic DBVPG1373 and Z1, respectively. In the case of the hybrid parents, each strain was rendered homozygous null for URA3 via homologous recombination with a HYGMX cassette, then sporulated; a given mated spore from a dissected tetrad was grown into a diploid that was homozygous null at URA3 and tested for the presence of both genomes by PCR with species-specific primers.

PiggyBac transposon machinery. For untargeted, genome-scale construction of reciprocal hemizygotes in the S. cerevisiae × S. paradoxus hybrid, we adapted methods for piggyBac transposon mutagenesis 19 to develop a system in which the transposon machinery was borne on a selectable and counter-selectable plasmid lacking a centromere. We constructed this plasmid (final identifier pJR487) in three steps. In step 1 we cloned the piggyBac transposase gene, driven by the S. cerevisiae TDH3 promoter (from plasmid p3E1.2, a gift from M. Fraser, Notre Dame), into plasmid pJED104 (which contains URA3, an autonomously replicating sequence, and the CEN6 locus, and was a gift from J. Dueber, University of California, Berkeley).
For this cloning, the amplification used a forward and a reverse primer containing a BamHI and an XhoI site, respectively, that upon restriction digest yielded sticky ends for ligation to recipient BamHI and XhoI sites in digested pJED104. We used the resulting plasmid as input into step 2, removal of the CEN6 sequence: we first amplified the entire plasmid with primers that initiated outside of CEN6 , were directed away from it, and contained reciprocally complementary NheI sites; sticky ends of the linear PCR product were then ligated together for recircularization. We used the resulting plasmid as input into step 3, the cloning in of a construct comprising the KANMX cassette flanked by long terminal arms (328 bp and 361 bp) from the piggyBac transposon. We first amplified KANMX from pUG6 27 and each transposon arm from p3E1.2, using primers that contained overlapping sequence on the fragment ends that would ultimately be the interior of the construct, and XbaI sites on the fragment ends that would ultimately be the 5′- and 3′-most ends of the construct. We stitched the three fragments together by overlap extension PCR, digested the resulting construct and the plasmid from step 2 with XbaI, and annealed sticky ends of the two to yield the final pJR487 plasmid. Untargeted hemizygote construction via transposon mutagenesis For mutagenesis, pJR487 was gigaprepped using a column kit (Zymo Research) to generate ~11 mg of plasmid. To prepare for transformation, JR507 (the S. cerevisiae DBVPG1373 × S. paradoxus Z1 hybrid) was streaked from a −80 °C freezer stock onto a yeast peptone dextrose (YPD; 1% yeast extract (BD Biosciences), 2% yeast peptone (BD Biosciences), 2% d -glucose (Sigma)) agar plate and incubated for 2 days at 26 °C. A single colony was inoculated into 100 ml YPD and shaken at 28 °C, 200 r.p.m. for ~24 h. 
We then transferred cells from this pre-culture, and YPD, to each of four 1 l flasks at the volumes required to attain an optical density at 600 nm (OD 600 ) of 0.2 in 500 ml each. We cultured each for 6 h at 28 °C with shaking at 200 r.p.m. Two of these cultures were combined into 1 l of culture and two into a separate 1 l, and each such culture was subjected to transformation (for a total of two transformations) as follows. The 1 l was split into twenty 50 ml conical tubes. Each aliquot was centrifuged and washed with water and then with 0.1 M lithium acetate (Sigma) mixed with 1X Tris-EDTA buffer (10 mM Tris-HCl and 1.0 mM EDTA); after spin-down, to each tube was added a solution of 0.269 mg of pJR487 mixed 5:1 by volume with salmon sperm DNA (Invitrogen), and then to each was added 3 ml of 39.52% polyethylene glycol, 0.12 M lithium acetate and 1.2X Tris-EDTA buffer (12 mM Tris-HCl and 1.2 mM EDTA). Tubes were rested for 10 min at room temperature, then heat-shocked in a water bath at 39 °C for 26 min. Cells from all 20 tubes were then combined. We transferred cells from this post-transformation culture, and YPD, to each of three 1 l flasks at the volumes required to attain an OD 600 of ~0.35–4 in 500 ml. Each such culture was recovered by shaking at 28 °C and 200 r.p.m. for 2 h. G418 (Geneticin; Gibco) was added to each at a concentration of 300 µg ml −1 to select for those cells that had taken up the plasmid, and cultures were incubated with 200 r.p.m. shaking at 28 °C for 2 days until each reached an OD 600 of ~2.3. All six such selected cultures across the two transformations were combined. We transferred cells from this combined culture, and YPD + G418 (300 µg ml −1 ), to each of two 1 l flasks at the volumes required to attain an OD 600 of 0.2 in 500 ml each. We cultured each flask with 200 r.p.m. shaking at 28 °C overnight until each reached an OD 600 of 2.18, then combined the two cultures again to yield one culture. 
To cure transformants of the pJR487 URA + plasmid, we spun down a volume of this master culture and resuspended in water with the volume required to attain a cell density of 1.85 OD 600 units ml −1 . Twelve milliliters of this resuspension were plated (1 ml per 24.1 cm × 24.1 cm plate) onto plates containing complete synthetic media with 5-fluoroorotic acid (0.2% dropout amino acid mix without uracil or yeast nitrogen base (US Biological), 0.005% uracil (Sigma), 2% d -glucose (Sigma), 0.67% yeast nitrogen base without amino acids (Difco), 0.075% 5-fluoroorotic acid (Zymo Research)). After incubation at 28 °C to enable colony growth, colonies were scraped off all 12 plates and combined into water at the volume required to attain 40 OD 600 units per 900 µl, yielding the final transposon mutant hemizygote pool. This was aliquoted into 1 ml volumes with 10% dimethylsulfoxide and frozen at −80 °C. Thermotolerance phenotyping via selection of the hemizygote pool One aliquot of the pool of transposon mutant hemizygotes in the JR507 S. cerevisiae DBVPG1373 × S. paradoxus Z1 hybrid background was thawed and inoculated into 150 ml of YPD in a 250 ml flask and cultured for 7.25 h at 28 °C, with shaking at 200 r.p.m. We used this time point as time zero of our thermotolerance experiment and took four aliquots of 6.43 ml (7 OD 600 units) as technical replicates for sequencing of transposon insertion positions (see below). Of the remaining culture, 9.19 ml was back-diluted to an OD 600 of 0.02 in a total of 500 ml YPD in each of six 2 l glass flasks for cultures that we call selections; three were grown at 28 ° C and three at 39 ° C (shaking at 200 r.p.m.) until an OD 600 of 1.9–2.12 was reached, corresponding to about 6.5 doublings in each case. Four cell pellets of 7 OD 600 units each were harvested from each of these biological replicate flasks, for sequencing as technical replicates (see below). 
In total, 28 pellets were subjected to sequencing: 4 technical replicates from the time-zero culture; 3 biological replicates, 4 technical replicates each, from the 28 °C selection; and 3 biological replicates, 4 technical replicates each, from the 39 °C selection (Supplementary Table 3 ). Transposon sequencing library construction To determine the abundance of transposon mutant hemizygote clones after selection, we first sequenced transposon insertions as follows. Each cell pellet from a time-zero or selection sample (see above) was thawed on ice, and its genomic DNA (gDNA) was harvested with the ZR Fungal/Bacterial DNA MiniPrep Kit (Zymo Research). gDNA was resuspended in DNA elution buffer (Zymo Research) prewarmed to 65 °C, and its concentration was quantified using a Qubit 3.0 fluorometer. Illumina transposon sequencing (Tn-seq) library construction was as described previously 28 . Briefly, gDNA was sonicated and ligated with common adapters, and for each fragment deriving from a transposon insertion in the genome, a sequence containing a portion of the transposon and a portion of its genomic context (the transposon–genome junction) was amplified using one primer homologous to a region in the transposon and another primer homologous to a region in the adapter. See Supplementary Table 4 for the transposon-specific primer (‘forward primer’), where Ns represent random nucleotides, and the indexed adapter-specific primer (‘reverse primer’), where the six Ns are a unique index used for multiplexing multiple libraries onto the same HiSeq sequencing lane. Amplification used Jumpstart polymerase (Sigma) and the following cycling protocol: 94 °C for 2 min, (94 °C for 30 s, 65 °C for 20 s, 72 °C for 30 s) × 25, 72 °C for 10 min. Sequencing of single-end reads of 150 bp was done over eight lanes on a HiSeq 2500 at the Joint Genome Institute (Walnut Creek, CA). Reads sequenced per library are reported in Supplementary Table 3 . 
Tn-seq read-mapping and data analysis For analysis of data from the sequencing of transposon insertion sites in pools of hemizygotes, we first searched each read for a string corresponding to the last 20 base pairs of the left arm of the piggyBac transposon sequence, allowing up to two mismatches. For each transposon-containing read, we then identified the genomic location of the sequence immediately downstream of the transposon insertion site, which we call the genomic context of the insertion, by mapping with BLAT (minimum sequence identity, 95; tile size, 12) against a hybrid reference genome made by concatenating amended S. cerevisiae DBVPG1373 and S. paradoxus Z1 genomes (see below). These genomic-context sequence fragments were of variable length; any case in which the sequence was shorter than 50 bp was eliminated from further analysis, as was any case in which a genomic-context sequence mapped to more than one location in the hybrid reference. The resulting dataset thus comprised reads containing genomic-context sequences specifically mapping to a single location in either S. cerevisiae DBVPG1373 or S. paradoxus Z1, which we call usable reads. For a given library, given a cohort of usable reads whose genomic-context sequence mapped to the same genomic location, we inferred that these reads originated from clones of a single mutant with the transposon inserted at the respective site, which we call an insertion. In cases where the genomic-context sequences from reads in a given library mapped to positions within 3 bases of each other, we inferred that these all originated from the same mutant genotype and combined them, assigning to them the position corresponding to the single location to which the most reads mapped among those combined. For a given insertion thus defined, we considered the number of associated reads, n insert , as a measure proportional to the abundance of the insertion clone in the cell pellet whose gDNA was sequenced. 
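As a rough illustration of the read-processing step just described (locating the 20 bp tail of the transposon's left arm with up to two mismatches, then extracting the downstream genomic context), a minimal Python sketch follows. The function name and the simple linear scan are our own illustration; the authors' actual pipeline mapped the extracted context sequences with BLAT.

```python
def find_genomic_context(read, arm_tail, max_mismatches=2, min_context=50):
    """Scan a read for the last 20 bp of the transposon's left arm,
    allowing up to max_mismatches substitutions, and return the
    genomic-context sequence downstream of the junction.

    Returns None if no match is found or the context is shorter than
    min_context bp (such reads were dropped from the analysis).
    """
    k = len(arm_tail)
    for i in range(len(read) - k + 1):
        mismatches = sum(1 for a, b in zip(read[i:i + k], arm_tail) if a != b)
        if mismatches <= max_mismatches:
            context = read[i + k:]
            return context if len(context) >= min_context else None
    return None
```

In the real pipeline the returned context would then be aligned against the concatenated hybrid reference, and multi-mapping or short contexts discarded, as described above.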
To enable comparison of these abundances across samples, we tabulated the total number of usable reads, n pellet , from each cell pellet, took the average of this quantity across pellets, < n pellet >, and multiplied each n insert by < n pellet >/ n pellet to yield a insert , the final normalized estimate of the abundance of the insertion clone in the respective pellet. For any insertions that were not detected in a given pellet’s library ( n insert = 0) but were detectable in another library of the dataset, we assigned n insert = 1. We evaluated, from the mapped genomic-context sequence of each insertion, whether it fell into a gene according to the S. cerevisiae and S. paradoxus genome annotations 17 , 29 , and we retained for further analysis only those insertions that fell into genes that were syntenic in the two species. For each such insertion, for each biological replicate corresponding to a selection culture (at 28 °C or 39 °C), we averaged the normalized abundances a insert across technical replicates, yielding a single abundance estimate < a insert > technical for the biological replicate. We then calculated the mean of the latter quantities across all biological replicates of the selection, to yield a final abundance estimate for the insertion in this selection, < a insert > total . Likewise, for each insertion and selection experiment we calculated CV insert,total , the coefficient of variation of < a insert > technical values across biological replicates. To use Tn-seq data in reciprocal hemizygosity tests, we considered for analysis only genes annotated with the same (orthologous) gene name in the S. cerevisiae and S. paradoxus reference genomes. For each insertion, we divided the < a insert > total value from the 39 °C selection by the analogous quantity from the 28 °C selection and took the log 2 of this ratio, which we consider to reflect thermotolerance as measured by RH-seq. 
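The abundance normalization and the per-insertion thermotolerance score (the log2 ratio of 39 °C to 28 °C abundance) can be sketched as below. Data structures and names are illustrative; for simplicity this sketch floors all zero counts to 1, whereas the paper applies the floor only to insertions detectable in another library.

```python
import numpy as np

def normalize_abundances(read_counts):
    """read_counts: dict of pellet -> {insertion: raw usable-read count}.

    Scales each pellet's counts by <n_pellet>/n_pellet, where n_pellet is
    the pellet's total usable reads and <n_pellet> the mean across pellets,
    yielding comparable abundances a_insert; zero counts are floored to 1.
    """
    totals = {p: sum(c.values()) for p, c in read_counts.items()}
    mean_total = np.mean(list(totals.values()))
    return {p: {ins: max(n, 1) * mean_total / totals[p]
                for ins, n in counts.items()}
            for p, counts in read_counts.items()}

def thermotolerance_score(abund_39, abund_28):
    """log2 ratio of an insertion's mean normalized abundance across the
    39 C selection replicates to that across the 28 C replicates."""
    return float(np.log2(np.mean(abund_39) / np.mean(abund_28)))
```

A positive score means the hemizygote clone fared relatively better at high temperature; a negative score, worse.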
For each gene in turn, we used a two-tailed Mann-Whitney U test to compare thermotolerance measured by RH-seq from the set of insertions falling into the S. cerevisiae alleles of the gene against the analogous quantity from the set of insertions falling into the S. paradoxus allele of the gene, and we corrected for multiple testing using the Benjamini-Hochberg method. We tabulated the number of inserts and genes used as input into the reciprocal hemizygote test, and the number of top-scoring genes emerging from these tests, under each of a range of possible thresholds for coverage and measurement noise parameter values (Supplementary Fig. 5 ). We used in the final analysis the parameter-value set yielding the most extensive coverage and the most high-significance hits: this corresponded to insertions whose abundances had, in the data from at least one of the two selections (at 28 °C or 39 °C), CV insert,total ≤ 1.5 and < a insert > total ≥ 1.1, and genes for which this high-confidence insertion dataset contained at least five insertions in each species’ allele. This final dataset comprised 110,678 high-quality insertions (Supplementary Table 5 ) in 3,416 genes (Supplementary Table 6 ). We used this complement of data in all display items of this paper, with the following exception. To evaluate ex post facto the reproducibility across replicates of RH-seq measurements on genes called as hits, we first randomly paired each biological replicate at 39 °C with one at 28 °C; then, from the sequencing data from each pair in turn, we identified insertions whose abundances had < a insert > total ≥ 1.1 in at least one of the two temperatures for the respective replicate, and genes for which we had at least five such insertions in each species’ allele. From these data, for a given RH-seq hit gene we tabulated single-replicate estimates of the abundances of hemizygotes harboring insertions in each species’ homolog; see columns G–L of Supplementary Table 6 . 
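The per-gene reciprocal hemizygosity test described above can be sketched as follows, assuming SciPy and statsmodels for the Mann-Whitney U test and Benjamini-Hochberg correction (the authors' own scripts are on GitHub; this is only a schematic reimplementation).

```python
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def reciprocal_hemizygosity_test(gene_scores, min_inserts=5):
    """gene_scores: dict of gene -> (scores_cer, scores_par), the
    thermotolerance log2 ratios for insertions in the S. cerevisiae and
    S. paradoxus alleles of the gene.

    Runs a two-tailed Mann-Whitney U test per gene comparing the two
    allele's insertion scores, then corrects across genes by
    Benjamini-Hochberg; returns gene -> corrected p-value.
    """
    genes, pvals = [], []
    for gene, (cer, par) in gene_scores.items():
        if len(cer) < min_inserts or len(par) < min_inserts:
            continue  # require coverage in both species' alleles
        _, p = mannwhitneyu(cer, par, alternative="two-sided")
        genes.append(gene)
        pvals.append(p)
    qvals = multipletests(pvals, method="fdr_bh")[1]
    return dict(zip(genes, qvals))
```

A gene whose two allele-specific insertion sets have systematically different scores, as detected here, is a candidate determinant of the thermotolerance difference.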
Amended reference genome construction We generated reference genomes for S. cerevisiae strain DBVPG1373 and S. paradoxus strain Z1 as follows. Raw genome sequencing reads for each strain were downloaded from the SGRP2 database (see URLs). Reads were aligned using bowtie2 30 with default options; DBVPG1373 reads were aligned to version R64.2.1 of the reference sequence of the S. cerevisiae type strain S288C (Genbank Assembly Accession GCA_000146045.2 ), and Z1 reads were aligned to the S. paradoxus strain CBS432 reference sequence 31 . SNPs were called using a pipeline of samtools 32 , bcftools, and bgzip and were filtered for a quality score of >20 and a combined depth of >5 and either <65 ( S. cerevisiae ) or <255 ( S. paradoxus ). We then amended each reference genome with the respective filtered SNPs: we replaced the S288C allele with that of DBVPG1373 at each filtered SNP using bcftools’ consensus command with default options (42,983 bp total), and amendment of the CBS432 sequence was carried out analogously using Z1 alleles (15,126 bp total). Sequence analysis D xy analysis To evaluate whether sequence divergence from S. paradoxus at RH-seq hit loci was a shared feature of S. cerevisiae isolates, we used the D xy statistic 33 , the average number of pairwise differences between S. cerevisiae strains and S. paradoxus , normalized for gene length, as follows. We downloaded S. cerevisiae genomic sequences from the following sources: YJM978, UWOPS83-787, Y55, UWOPS05-217.3, 273614N, YS9, BC187, YPS128, DBVPG6765, YJM975, L1374, DBVPG1106, K11, SK1, 378604X, YJM981, UWOPS87-2421, DBVPG1373, NCYC3601, YPS606, Y12, UWOPS05-227.2, and YS2 from the Yeast Resource Center (see URLs); Sigma1278b, ZTW1, T7, and YJM789 from the Saccharomyces Genome Database (see URLs); and RM11 from NCBI (accession PRJNA13674 ). For each strain, we extracted the coding sequence of each gene in turn, and we downloaded the S. 
paradoxus reference sequence for each orthologous coding region from ref. 17 . Sequences were aligned using MAFFT 34 with default settings. Alignments that did not contain a start and stop codon, or those that contained gaps at greater than 40% of sites, were considered poor quality and were discarded. We tabulated D xy for each gene. To evaluate whether the 8 RH-seq hit genes were enriched for elevated D xy , we first tabulated < D xy > true , the mean value across the 8 RH-seq hit genes. We then sampled 8 random genes from the set of 3,416 genes tested by RH-seq; to account for biases associated with lower rates of divergence among essential genes, the resampled set contained 6 essential genes and 2 non-essential genes, mirroring the breakdown of essentiality among the RH-seq hits. Across this random sample we tabulated the mean D xy , < D xy > resample . We repeated the resampling 5,000 times and used as an empirical P -value the proportion of resamples at which < D xy > resample ≤ < D xy > true . Phylogenetic analysis We downloaded orthologous protein coding regions for the type strains of S. cerevisiae , S. paradoxus , and an outgroup, S. mikatae , from ref. 17 . For each gene for which ortholog sequences were available in all three species, we aligned the sequences with PRANK 35 , utilizing the ‘-codon’ option for codon alignment. These alignments were used as input into the codeml module of PAML 36 , which was run assuming no molecular clock and allowing omega values to vary for each branch in the phylogeny. From the resulting inferences, we tabulated the branch length on the S. cerevisiae lineage for each gene. To evaluate whether sequence divergence of the 8 RH-seq hit genes showed signatures of rapid evolution along the S. cerevisiae lineage, we used the resampling test detailed above. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. 
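In outline, the D xy statistic and the resampling-based enrichment test could be implemented as below. This sketch ignores the essential/non-essential stratification used in the paper and takes the empirical P as the fraction of resampled means at least as large as the observed mean; names are illustrative.

```python
import random

def dxy(cer_seqs, par_seq):
    """Average per-site difference between each aligned S. cerevisiae
    sequence and the S. paradoxus ortholog: mean pairwise divergence
    normalized by gene length."""
    length = len(par_seq)
    per_strain = [sum(a != b for a, b in zip(s, par_seq)) / length
                  for s in cer_seqs]
    return sum(per_strain) / len(per_strain)

def resampling_pvalue(hit_dxy, all_dxy, n_resamples=5000, seed=0):
    """Empirical P-value for elevated mean D_xy among hit genes, from
    resampling equally sized random gene sets from all tested genes."""
    rng = random.Random(seed)
    observed = sum(hit_dxy) / len(hit_dxy)
    k = len(hit_dxy)
    extreme = sum(
        1 for _ in range(n_resamples)
        if sum(rng.sample(all_dxy, k)) / k >= observed
    )
    return extreme / n_resamples
```

The same resampling scheme carries over directly to the branch-length test in the phylogenetic analysis, substituting S. cerevisiae lineage branch lengths for D xy values.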
Code availability Custom Python and R scripts used for RH-seq data analysis are available on GitHub (see URLs). For strain construction, growth assays, microscopy, and locus effect size methods, see the Supplementary Note . Data availability RH-seq data have been deposited in the Sequence Read Archive (SRA) under accession SRP156210 .
Why do naked mole rats live up to 31 years, when distantly related lab mice are lucky to make it to four? How is the aquatic axolotl capable of regenerating limbs, jaws, spines and even brains when frogs can only regrow tails as tadpoles? How can these differences be exploited for the benefit of human health? Questions like these set up a classic starting point for genetics research. When a trait catches a geneticist's eye, the goal is always to find the DNA sequence differences that underlie it. But to date, the tools of the field have focused on individuals of a population—what makes one human different from another, for example, when we are all the same species. Trait differences between species have been essentially out of reach. Now researchers at the Buck Institute have broken through this roadblock by designing a method that pinpoints genetic causality for trait differences between closely related species. The research, published in Nature Genetics, lays the groundwork for the mapping of specific genes to specific traits in sister species throughout the plant and animal kingdoms. "Think of all the amazing characteristics we see in organisms around us, from longevity and disease resistance to the spots on a butterfly," said Rachel Brem, Ph.D., associate professor at the Buck Institute for Research on Aging. "We want to find the genes at the root of these traits, and in many cases, they'll be relevant for human health." As an example, Brem points out that there are mouse species with stark differences in longevity, immunity, susceptibility to cancer, and the ability to recover from brain injury, among others. "Now we'll be able to find the genetic basis for those advantages and start comparing the findings with human genetics." Brem says the method is being utilized in multiple labs at the Buck, where the ultimate goal is to design drugs that mimic, in humans, the best that the natural world has to offer. 
The new method, which Brem says is straightforward and simple, was developed by genetically dissecting an ancient divergence in two species of yeast that differed in their tolerance to heat. In addition to research involving human health, Brem says the work has wide applicability to plant biology and agriculture, as well as climate change as scientists seek to understand how animals adapt to extreme environments. "Evolutionary geneticists are interested in 'outliers' - plants and animals that developed unusual traits as they evolved to survive and thrive in new environments and conditions," said Brem. "There's a lot left to be discovered and we're hoping that our work will provide a jump start to that process."
10.1038/s41588-018-0243-4
Medicine
Scientists link genes to brain anatomy in autism
Rafael Romero-Garcia et al, Synaptic and transcriptionally downregulated genes are associated with cortical thickness differences in autism, Molecular Psychiatry (2018). DOI: 10.1038/s41380-018-0023-7 Journal information: Molecular Psychiatry
http://dx.doi.org/10.1038/s41380-018-0023-7
https://medicalxpress.com/news/2018-02-scientists-link-genes-brain-anatomy.html
Abstract Differences in cortical morphology—in particular, cortical volume, thickness and surface area—have been reported in individuals with autism. However, it is unclear what aspects of genetic and transcriptomic variation are associated with these differences. Here we investigate the genetic correlates of global cortical thickness differences (ΔCT) in children with autism. We used Partial Least Squares Regression (PLSR) on structural MRI data from 548 children (166 with autism, 295 neurotypical children and 87 children with ADHD) and cortical gene expression data from the Allen Institute for Brain Science to identify genetic correlates of ΔCT in autism. The genes thus identified are enriched for synaptic transmission pathways and explain significant variation in ΔCT. These genes are also significantly enriched for genes dysregulated in the autism post-mortem cortex (Odds Ratio (OR) = 1.11, P corrected < 10 −14 ), driven entirely by downregulated genes (OR = 1.87, P corrected < 10 −15 ). We validated the enrichment for downregulated genes in two independent data sets: Validation 1 (OR = 1.44, P corrected = 0.004) and Validation 2 (OR = 1.30; P corrected = 0.001). We conclude that transcriptionally downregulated genes implicated in autism are robustly associated with global changes in cortical thickness variability in children with autism. Introduction Autism Spectrum Conditions (henceforth ‘autism’) are characterized by difficulties in social communication alongside unusually narrow interests and restrictive, repetitive behaviours, a resistance to unexpected change and sensory hypersensitivity [ 1 ]. In addition to behavioural and clinical differences, differences in cortical morphology between individuals with autism compared to typical controls have been reported [ 2 , 3 , 4 , 5 ]. 
While heterogeneous, recent studies have reported increased cortical volumes in the first years of life in autism compared to controls, with accelerated decline or arrest in growth in adolescents [ 3 , 4 ]. Changes in cortical volume may be attributed to changes in cortical thickness (CT), changes in surface area or both [ 3 ]. In support of this, studies have separately identified differences in both surface area [ 6 ] and CT [ 7 ] in children with autism. For example, Smith et al. [ 7 ] show that the developmentally accelerated gain in grey matter volume in autism is largely driven by the lack of typical age-related CT decrease. Furthermore, earlier studies identified differential trajectories in CT development in autism [ 8 ] as well as CT differences in autism in specific brain regions [ 9 , 10 ]. In addition, Hardan et al. [ 11 ] found areas of increased CT in children with autism, predominantly in temporal and parietal lobules. In contrast, however, Hadjikhani et al. [ 12 ] report a pattern of cortical thinning in adults with autism, mainly within the mirror neuron system. Both these studies point towards significant heterogeneity within findings that is at least partially related to differences in age. Despite significant heterogeneity in cortical morphology across autism imaging studies [ 13 ], recent studies have also indicated alterations in areas associated with higher cognition (e.g. language, social perception and self-referential processing) [ 7 , 14 ]. This has been supported by observed differences in cortical minicolumns in association areas in individuals with autism [ 15 ]. It is unclear what contributes to these differences in cortical morphology in individuals with autism. Genetic factors play a major role in the development of brain networks and volumes in typically developing individuals [ 16 , 17 , 18 ]. For instance, twin heritability estimates of CT suggest modest to high heritability for most regions of the brain [ 19 ]. 
In parallel, the contribution of genetic factors to autism has been estimated at between 50% and 90% [ 20 , 21 , 22 ]. Different classes of genetic variation have been associated with risk for autism. Several recent studies have identified a significant contribution of rare, de novo putative loss-of-function mutations for autism [ 23 , 24 , 25 , 26 , 27 ]. In addition, common genetic variants, cumulatively, account for approximately half of the total variance in risk for autism [ 20 ]. Studies have also identified genes dysregulated in the autism post-mortem cortex [ 28 , 29 , 30 ], enriched in processes such as synaptic transmission and in astrocyte and microglial genes. These dysregulated genes may either represent causal mechanisms for risk or compensatory mechanisms as a result of upstream biological and cellular changes. Genes dysregulated in the autism post-mortem cortex are also enriched in specific gene co-expression modules identified using both adult [ 28 , 29 , 30 ] and fetal [ 31 ] cortical post-mortem samples. Despite considerable progress in understanding neuroanatomical and genetic risk for autism, several questions remain. Mechanistically, it is likely that genetic risk variants alter neuroanatomical structural and functional properties, contributing to behavioural and clinical phenotypes. Given the heterogeneity in autism imaging findings [ 13 ], it is pertinent to ask how genetic risk for autism is associated with variability in cortical morphology observed in individuals with autism. Thus, the goal of the present study was to identify molecular correlates of disease-related neuroanatomy irrespective of region-specific neuroanatomical differences that may not replicate well across studies [ 13 ]. Here, focusing on CT, we ask 3 specific questions: (Q1) Which genes and biological pathways are associated with CT variability (ΔCT) in children with autism? (Q2) What is the spatial expression profile of genes associated with ΔCT? 
and (Q3) Are these genes enriched for three different classes of risk factors associated with autism: rare, de novo variants, common genetic variants and/or dysregulated genes in the post-mortem cortex? We address these questions by combining analysis of ΔCT in autism, as measured with MRI, with gene expression post-mortem data provided by the Allen Institute for Brain Science (AIBS; [ 32 , 33 ]). Methods Overview We first assessed differences in CT (ΔCT) across 308 cortical regions in individuals with autism by extracting CT estimates for 62 children with autism (cases) and 87 matched typically developing individuals (controls) from the ABIDE-I (Table 1 ; Discovery dataset). Using median gene expression of 20,737 genes from six post-mortem cortical brain samples [ 32 ], we conduct a partial least squares regression (PLSR), a data reduction and regression technique, to identify significant genes and enriched pathways that contribute to ΔCT (Q1). We next quantified the expression of the same significant genes in terms of their spatial profile by comparing them across the different brain regions and Von Economo classes [ 45 ], which provides a way of assessing the hypothesis that there would be a global differentiation between higher order cognitive processing and more primary sensory processing (Q2). We tested any significant genes for enrichment for classes of genetic and transcriptomic variation associated with autism (Q3): (1) Genes dysregulated in the autism post-mortem cortex; (2) Adult cortical gene co-expression modules associated with dysregulated genes in the autism post-mortem cortex; (3) Fetal cortical gene co-expression modules associated with dysregulated genes in the autism post-mortem cortex; (4) Genes enriched for rare, de novo loss of function mutations in autism; and (5) Common genetic variants associated with autism. To assess the replicability of the findings, we validate the results using two independent datasets from ABIDE-II (Table 1 ). 
In parallel, we also used a second list of genes dysregulated in autism identified using a partially overlapping cortical gene expression data set of autism and control post-mortem brain samples to validate the enrichment analysis across all datasets. To assess specificity of our results, we furthermore sought to answer these questions in a matched MRI dataset of children with ADHD, another childhood psychiatric condition. A schematic overview of the study protocol is provided in Fig. 1 . Table 1 Descriptive statistics for all four datasets Fig. 1 Schematic overview of the methodology used to identify gene contribution. Mean cortical thickness was extracted for both the autism and the neurotypical groups across 308 cortical nodes ( a ). A difference score in cortical thickness (ΔCT; autism—neurotypical) was calculated between these two groups ( b ). In parallel the median AIBS gene expression profiles for 20,737 genes were calculated across the same 308 cortical nodes used in the imaging analysis ( c ). Both these streams were included in a bootstrapped PLSR analysis that used the gene expression profiles as predictors and the ΔCT as response variable ( d ). The PLSR assigns weights to each gene in terms of its contribution to the overall model in each component. Bootstrapped standard errors were derived and the gene weights were Z-transformed and corrected for multiple comparisons using an FDR inverse quantile transform correction to account for winner's curse ( e ; i = gene index number, z = z -score for that gene’s association and q = FDR corrected z-score). 
Genes that were significant after FDR correction ( z -score >1.96) were analysed in terms of their spatial expression as well as tested for enrichment against three classes of risk for autism: dysregulated autism genes in the postmortem cortex, genes harbouring rare de novo variants and common genetic variants in autism ( f ). Discovery dataset Neuroimaging, gene expression and PLS regression Discovery imaging data used in this study are described in detail in the supplementary materials and elsewhere [ 34 ]. In short, structural T1-weighted MPRAGE images were obtained from the ABIDE I database, selecting participants in the age range from 9 to 11, all males. All subjects were matched on age and IQ between groups (see Table 1 ; Discovery Data) (see ref. [ 34 ] and Supplementary Materials for details on matching and scanner site). CT estimates were extracted using FreeSurfer and visually inspected for quality of segmentation by two independent researchers. Only when there was consensus between researchers were images included. Next, images were parcellated into 308 cortical regions and mean CT for these regions was extracted. In addition, scanner site was regressed out from CT estimates and the residuals were added to the group mean to allow for easier interpretation. The final sample consisted of 62 children with autism (cases) and 87 neurotypical individuals (controls). We used the transcriptomic dataset of the adult human brain created by the Allen Institute for Brain Science [ 32 , 33 ]. The anatomical structure associated with each tissue sample was determined using the MRI data provided by the AIBS for each donor. The high-resolution parcellation with 308 cortical regions, employed in the neuroimaging dataset, was warped from the anatomical space of the average subject provided by FreeSurfer (fsaverage) into the surface reconstruction of each AIBS donor brain. 
After pre-processing, regional gene expression values were represented as a 308 × 20,737 matrix that contained the whole-genome expression data for the 308 MRI regions of interest [ 35 ]. Code used to determine regional gene expression levels is available at ( ) and the data used can be downloaded from the Cambridge Data Repository [ 36 ]. More details on tissue sample handling, processing, batch correction and consistency of gene expression data across donors are provided in the Supplementary Materials. Cortical surface representations were plotted using BrainsForPublication v0.2.1 ( ). We used PLSR to identify which genes were significantly associated with ΔCT. After obtaining PLS weights for each gene, these were z-transformed (based on standard errors obtained from bootstrapping) and FDR-adjusted using an FDR inverse quantile transformation correction to account for winner's curse bias [ 37 ]. Only genes that passed FDR correction at p < 0.05 were included in the enrichment analysis. We used significant genes with both negative and positive weights in our analysis. As our dependent variable, ΔCT, had both positive and negative values, weight signs were not informative about directionality in the analysis. A detailed description of the PLSR and the rationale behind choosing the unsigned weights is provided in the Supplementary Materials. Genetic modules and enrichment analyses We used Enrichr ( ) [ 38 , 39 ] to test for enrichment of significant PLSR genes for each component against Gene Ontology Biological Processes and report significant results after Benjamini–Hochberg FDR correction ( q < 0.05). Cell-type specific enrichment was conducted for five broad classes of cells: neurons, astrocytes, oligodendrocytes, microglia and vascular cells [ 40 ]. We defined cell-type specific genes as the top 500 genes with higher expression in a given cell type compared to the remaining cell types. 
As these classes of genes are largely distinct with minimal overlap, we used a Bonferroni correction for the cell-type specific enrichment. We also investigated the enrichment in different classes of risk genes for autism using logistic regression (more detail on each class of genes can be found in the supplementary materials):
1. Transcriptionally dysregulated genes ( n = 1143, 584 upregulated and 558 downregulated in the autism cortex) were identified from Parikshak et al. [ 28 ].
2. Adult gene co-expression modules [ 28 ].
3. Fetal gene co-expression modules [ 31 ].
4. Genes enriched for rare, de novo, putative loss of function variants (rare, de novo genes, n = 65) were identified from Sanders et al. [ 23 ].
5. Common genetic variants associated with autism were downloaded from the latest data freeze from the Psychiatric Genomics Consortium (5305 cases and 5305 pseudocontrols). Gene-based P -values and Z scores were obtained using MAGMA for each gene [ 41 ].
Enrichment analyses for the different classes of autism risk genes were done using logistic regression after accounting for gene length as a covariate. Enrichments are reported as significant if they had a Benjamini–Hochberg FDR-adjusted P -value < 0.05 [ 42 ] and an enrichment odds ratio (OR) > 1. The supplementary material provides further details about the gene sets and the methods used. Validation and specificity We conducted extensive validation of our initial results against two independent datasets and checked for specificity of an autism effect against a matched ADHD dataset. There are significant phenotypic and genetic correlations between the two conditions, and we had access to MRI data from children with ADHD [ 34 ], making this a suitable dataset for testing specificity. Details on all three datasets are provided in the supplementary materials. 
Results Autism discovery MRI dataset PLSR analyses and characterization Cross-validation using an initial 35-component analysis identified that a 13-component model had the best fit (Supplementary Table S 1 ). Note that the number of components chosen for the model does not affect the individual component composition. Consequently, PLSR was run using a 13-component model. Four components (Components 1, 3, 4 and 6) explained more than 10% of the variance (Supplementary Figure S1 ). However, variance in ΔCT explained by PLS components was higher than expected by chance only for the first component ( P = 0.009, 10,000 permutations) but not for the remaining components ( P = 0.303, P = 0.693 and P = 0.394, for components 3, 4 and 6, respectively). Thus, only PLSR1 was used for subsequent analyses and we only included genes that passed FDR correction ( q < 0.05). Only the GO term “Synaptic Transmission” in component 1 (PLSR1) survived FDR correction for multiple comparisons ( P corrected = 0.00006). PLSR1 was also significantly enriched for 11 pathways (Table S2 ) in the Kyoto Encyclopedia of Genes and Genomes (KEGG). There was a significant positive correlation between ΔCT and the scores of PLSR1 ( r = 0.32; P = 4.15 × 10 −9 ). Cell-type specific analysis identified a significant enrichment for neurons (OR = 1.1; P corrected = 3.19 × 10 −12 ), but no enrichment for genes enriched in astrocytes (OR = 1; P corrected = 1), oligodendrocytes (OR = 0.99; P corrected = 1), microglia (OR = 0.97; P corrected = 0.43) or vascular cells (OR = 0.97; P corrected = 0.62). Topographical enrichment analyses Previous studies reported an association between CT and cytoarchitectural cortical features [ 43 ] linked to specific abnormalities in laminar thickness of supragranular layers of the cortex of schizophrenia patients [ 44 ]. 
Here, we also conducted spatial characterization of PLSR1 across all five Von Economo classes [ 45 ] as well as two additional subtype classes covering limbic regions and allocortex (class 6) and insular cortex (class 7) [ 17 , 46 ]. We expected a potential differentiation between higher-order cognitive processing and more primary sensory processing. The genes in PLSR1 were significantly over-expressed in secondary sensory and association cortices (VE classes 2, 3 and 4: all P corrected < 0.01) compared to a null distribution. In limbic and insular regions, however, these genes appeared to be under-expressed (VE classes 6 and 7: all P corrected < 0.01). However, they also appear to be over-expressed in granular and primary motor cortices (VE class 1). Figure 2 shows the results from the spatial characterization of the first component across all VE classes. Fig. 2 Expression and Von Economo classification for PLSR1. The heatmap in a shows the ΔCT distribution across all 308 cortical regions. The barplot in b shows the z -scores of the mean distribution across the different Von Economo classes (Class 1: granular cortex, primary motor cortex. Class 2: association cortex. Class 3: association cortex. Class 4: dysgranular cortex, secondary sensory cortex. Class 5: agranular cortex, primary sensory cortex. Class 6: limbic regions, allocortex. Class 7: insular cortex.). All significantly over- or under-expressed classes are marked with an asterisk. To determine significance, we used permutation testing and a false discovery rate corrected p-value < 0.025 to fully account for two-tailed testing. Gene enrichment analyses We identified a significant enrichment for genes that are dysregulated in the autism post-mortem cortex (OR = 1.21; P corrected < 2.81 × 10 −15 ), driven entirely by genes downregulated in the autism cortex (OR = 1.87; P corrected < 3.55 × 10 −16 ). In comparison, there was no enrichment for upregulated genes (OR = 1.01; P corrected = 0.49). 
The downregulated genes have been previously reported to be enriched for several GO terms including synaptic transmission [ 28 ]. Transcriptionally dysregulated genes can reflect several different underlying processes. To provide better resolution of the processes involved, we next investigated if this enrichment was associated with six adult co-expression modules associated with dysregulated autism genes [ 28 ]. Three of these were associated with genes downregulated in the autism postmortem cortex (M4, M10, M16), and three were enriched for genes upregulated in the autism post-mortem cortex (M9, M19 and M20) compared to controls. As we had identified a significant enrichment for downregulated autism genes but not for the upregulated autism genes, we hypothesized that gene co-expression modules associated with downregulated genes would also be enriched for association with PLSR1 genes. Indeed, PLSR1 was enriched for all three downregulated modules but none of the 3 upregulated modules. See Fig. 3j , Table 2 and supplementary Table S 6 . Fig. 3 Gene enrichment and dataset comparisons. a – c Show the correlation between ∆CT in the three datasets. d – f Show the correlation between the PLSR scores of all three datasets. g – i Show the correlation between ∆CT and the PLSR scores in all three datasets (indicating that increased scores are strongly correlated with increased ∆CT). j Shows the odds ratios for the gene-enrichment analysis in the discovery dataset. All significantly enriched modules were replicated in the validation datasets ( k and l ) apart from module 4 of the adult co-expression modules. Pearson correlation coefficient and P -values of the correlations are provided in the top of the respective panels Full size image Table 2 Gene enrichment Full size table We also investigated if the significant genes in PLSR1 were enriched in specific cortical developmental modules [ 31 ]. 
The Mdev13, Mdev16 and the Mdev17 modules are enriched for transcriptionally dysregulated genes in autism postmortem frontal and temporal cortices [ 31 ]. The Mdev2 and the Mdev3 modules are enriched for rare variants identified in autism [ 31 ]. Again, we identified significant enrichment for three adult co-expression modules enriched for transcriptionally dysregulated genes. For the two modules associated with rare, de novo variants, we identified fewer PLSR1 genes than expected by chance. See Fig. 3j , Table 2 and Supplementary Table S 6 . We did not identify a significant enrichment for rare, de novo genes. We also did not identify a significant enrichment for common variants using MAGMA to collapse SNP based P -values to gene based P -values (OR = 1.00; P corrected = 0.29). Results of the gene enrichment analysis are provided in Fig. 3j , Table 2 and Supplementary Table S 6 . Validation of initial findings PLSR analyses and characterization We validated all analyses using ΔCT from two independent cohorts (Table 1 ). There was no correlation in ΔCT between the discovery and the two validation datasets (Fig. 3a,b ), which is in line with recent large scale assessments of autism neuroimaging studies [ 13 ]. This may be explained by factors such as heterogeneity due to scanner sites in the discovery dataset, age of onset of puberty and clinical conditions. There was a significant positive correlation in ΔCT between the two validation datasets ( r = 0.476; P < 2.2 × 10 −16 ). Heterogeneity in autism neuroimaging studies is well documented and complex [ 13 , 47 ], but it should be emphasized that the present analysis focuses on the relation between whole-brain variation in ΔCT and whole-brain variation in gene expression, thus a lack of spatial overlap in ΔCT does not affect the ΔCT—Gene relation. Again, only the first component (PLSR1-validation1 and PLSR1-validation2; see Fig. 
4 ) ( P < 10 −14 , 10,000 permutations) (Supplementary Figure S2 ) explained a significant amount of the variance. There was a significant positive correlation between ΔCT and the gene expression scores in both validation datasets (Fig. 3h,i ). Further, PLSR1 was enriched for the GO term ‘Synaptic transmission’ ( P corrected = 1 × 10 −4 for both Validation 1 and Validation 2). In addition, for Validation 2, the PLSR component was also enriched for the GO term ‘Membrane depolarization’ ( P corrected = 7 × 10 −4 ). KEGG pathway enrichments for both datasets are provided in Supplementary Tables S 3 and S 4 . Further, we replicated the cell-type enrichment in genes expressed in neurons (OR = 1.13; P = 2.01 × 10 −7 for Validation 1 and OR = 1.06; P = 0.009 for Validation 2). Given the lack of consistent correlations in ΔCT between the original discovery dataset and both validation datasets, we also added another validation dataset to confirm the PLS findings (Supplementary section 8 , Figure S6 ). Results from this dataset were in line with the two original validation datasets and PLS weights were consistent across datasets. Gene enrichment analyses We replicated the significant enrichment of transcriptionally downregulated genes in the autism post-mortem cortex (Validation 1: OR = 1.24; P corrected = 0.004; Validation 2: OR = 1.3; P corrected = 0.001), providing confidence in the robustness of our initial results (Fig. 3k,l and Supplementary Table S 6 ). Mirroring the enrichment with the downregulated genes in the autism post-mortem cortex, we also identified enrichment for the three adult gene co-expression modules that are enriched for downregulated genes in Validation 1 (M4, M10 and M16) and two of the three (M10 and M16) adult gene co-expression modules for Validation 2 (Fig. 3k,l , Supplementary Table S 6 ). We also replicated the enrichment for the three fetal gene co-expression modules (Mdev13, Mdev16 and Mdev17) in both validation datasets (Fig. 
3k,l , Supplementary Table S 6 ). Fig. 4 PLSR1 scores for all autism datasets. a – c represent the PLSR1 scores for the three autism datasets across 308 cortical regions. a represents the discovery dataset, b represents Validation 1 and c represents Validation 2. To understand the genes involved in this process, we focused on the top 200 genes (approximately 1%) based on their gene weights in PLS1 across all three datasets (Supplementary Dataset ). This identified several genes that have been implicated in autism, synaptic processes and neural development. For example, SCN1A , which encodes a voltage-gated sodium channel, is one of the genes frequently mutated in autism [ 48 , 49 , 50 ]. SLIT1 and SLIT3 are important for regulating midline axon crossing in the developing forebrain [ 51 ]. Focusing on just the top 1% in the two validation datasets, given their high correlation, we identified several other genes that have been implicated in autism and synaptic transmission, including GABRA3 , GABRA5 and GABRB1 , all of which encode subunits of the GABA receptor. Mutations in PTCHD1 , another gene identified in the top 1%, have been implicated in autism and intellectual disabilities and contribute to dysfunction of excitatory synapses [ 52 ]. SYN2 and SYT17 encode synapsin 2 and synaptotagmin 17 respectively, which are present in the presynaptic terminal and regulate neurotransmitter release. Comparison with ADHD PLSR analysis of ΔCT in ADHD data did not identify any components that significantly explained the variance in ΔCT (Supplementary Table S 5 ). Thus, we did not consider the ADHD dataset for further analyses. Details of the number of components, the model fit and the variance explained are provided in the supplementary materials. 
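The "top ~1% by gene weight" selection described earlier in this section amounts to a simple ranking-and-intersection step, sketched below. Gene names and weights are invented for illustration; this is not the authors' code.

```python
# Rank genes by absolute PLSR1 weight in each dataset and take the top 1%;
# the intersection across datasets gives candidate genes of interest.
import numpy as np

rng = np.random.default_rng(5)
genes = np.array([f"GENE{i}" for i in range(2000)])   # hypothetical symbols
weights = {ds: rng.normal(size=genes.size)
           for ds in ("discovery", "val1", "val2")}    # simulated PLSR1 weights

def top_fraction(w, frac=0.01):
    """Return the gene symbols in the top `frac` by absolute weight."""
    k = max(1, int(round(frac * w.size)))
    return set(genes[np.argsort(np.abs(w))[::-1][:k]])

shared = top_fraction(weights["val1"]) & top_fraction(weights["val2"])
print(len(top_fraction(weights["discovery"])))        # 20 genes = top 1% of 2,000
```

With real weights, genes such as SCN1A or the GABA-receptor subunits discussed above would be among those surviving this cut in more than one dataset.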
Discussion Here we report the association of transcriptionally downregulated genes in the autism post-mortem cortex with global differences in CT in 166 children with autism and 295 neurotypical children. Using partial least squares regression on a discovery dataset of 62 cases and 87 controls, we identify one component (PLSR1) that explains a significant proportion of variance in ΔCT and is enriched for the GO term ‘Synaptic Transmission’ and for neuronal genes. This component was enriched for genes downregulated in the autism post-mortem cortex and validated using two independent datasets. We also find that PLSR1 genes are enriched for fetal and adult developmental cortical modules that have been previously reported to be enriched for transcriptionally dysregulated genes in the post-mortem autism cortex and for genes involved in synaptic transmission [ 28 , 31 ]. We were unable to identify genes associated with ΔCT in ADHD, another childhood condition. Our study provides robust evidence linking disease-related variance in CT to synaptic genes and dysregulated genes in the autism post-mortem cortex, linking molecular and macroscopic pathology. Validation using two independent autism MRI datasets suggests that the results hold even for MRI data from different cohorts acquired with different scanner settings. The results held despite non-significant correlations in global ΔCT between the discovery and the two validation datasets, and sex did not contribute to any of the observed differences between datasets (see Supplementary material). This suggests that the same sets of genes are associated with ΔCT regardless of sex. Studies have identified differences in cortical morphology between neurotypical males and females and between males and females with autism [ 2 , 53 ]. Here we identified a high correlation in the gene weights and gene scores of the first PLS component between a male-only dataset and two MRI datasets that combined males and females. 
Changes in CT may be due to a host of factors such as changes in myelination, synaptic pruning and dendritic arborisation. Evidence from rare genetic variants [ 54 , 55 ] and transcriptionally dysregulated genes in autism has highlighted a role for synaptic transmission in the aetiology of autism [ 28 , 29 ]. Transcriptional dysregulation may reflect either a causative risk mechanism for autism, or a compensatory consequence of genetic, hormonal and environmental risk for autism. Here, we are unable to disentangle whether transcriptionally dysregulated genes causally contribute to cortical morphology changes, or whether they are both downstream of a common risk mechanism, or both. It is possible that both CT variability and transcriptional dysregulation are downstream processes of genetic variation implicated in autism, and, as such, the enrichment for transcriptionally dysregulated genes need not be causative of cortical morphological changes. We did not identify enrichments for rare, de novo loss of function genes or common variants implicated in autism. The lack of enrichment with rare, de novo loss of function genes may be due to both the relatively low frequencies of such variants and the small proportion of variance in liability explained by rare de novo variants [ 20 ]. In contrast, the lack of enrichment with common variants may be explained by the lack of statistical power of the largest available autism GWAS dataset. Indeed, there is no enrichment for common genetic variants associated with autism in co-expression modules enriched for transcriptionally dysregulated genes in autism [ 28 ]. In contrast, common variants for schizophrenia are enriched in co-expression modules associated with dysregulated genes in schizophrenia [ 56 ]. It is likely that larger samples will better reveal the role of common genetic variants in cortical morphology differences in autism. 
While we do not know the genetic make-up of the cases and controls, our results likely represent common downstream convergence of upstream genetic perturbations. Animal studies have shown that several candidate genes for autism risk are regulated by synaptic activity, leading to the hypothesis that dysregulation in synaptic homeostasis is a major risk for autism [ 55 ]. The effects of this can contribute to both neural signal processing, and to more morphological changes in neuroanatomy via processes such as activity dependent synaptic pruning and dendritic arborization. Post-mortem studies of the brains of children and adolescents with autism have identified deficits in synaptic pruning [ 57 ]. Investigating the specific role of synaptic genes in altering neural circuitry and cortical morphology will help elucidate the precise molecular mechanisms underlying CT differences seen in autism. There are some caveats that need to be taken into consideration while interpreting these results. Gene expression data was derived from only six post-mortem adult brain samples. Gene expression is known to vary with age [ 58 , 59 ]. Unfortunately, we are restricted in using the adult gene expression data from the AIBS for several reasons. First, this is the most spatially detailed dataset of gene expression. Second, the availability of MNI coordinates in the adult gene expression datasets allows for mapping of gene expression in distinct brain regions to CT differences extracted from MRI scans. Third, gene expression changes with age are limited and restricted to specific brain regions. A recent study identified only 9 genes significantly altered globally across the 10 regions investigated in post-mortem tissue samples [ 60 ], largely driven by glial genes. Cell specific enrichment in our dataset implicated neuronal genes only. 
Fourth, as autism is a developmental condition, investigating differences in cortical morphology at an early age is important to limit the role of environmental factors that contribute to differences in cortical morphology later in life [ 8 , 61 ]. Fifth, enrichment for gene expression modules associated with autism risk in the developing cortex provides further confidence that the genes identified here are relevant across the age-spectrum. We do acknowledge that investigating a paediatric-specific gene-expression dataset will help further refine the analyses, once such data become available. Lastly, the present study used CT in contrast to other morphological features such as cortical volume. It is known that grey matter volume relies on the relationship between two different morphometric parameters, CT and surface area, which are genetically unrelated [ 62 ] and associated with different developmental trajectories [ 63 ]. The combination of at least two different sources of genetic and maturational influences on cortical volume would complicate meaningful analysis of associated genetic weights. To our knowledge, this is the first study linking different genetic risk mechanisms in autism with changes in cortical morphology. In sum, we have shown that genes that are enriched for synaptic transmission and downregulated in individuals with autism are significantly associated with global changes in CT. We also show that these genes are generally overexpressed in association cortices. We validated the results in multiple independent datasets but not in a matched MRI dataset that included individuals with ADHD, showing both replicability and selectivity.
A team of scientists at the University of Cambridge has discovered that specific genes are linked to individual differences in brain anatomy in autistic children. Previous studies have reported differences in brain structure of autistic individuals. However, until now, scientists have not known which genes are linked to these differences. The Cambridge team analysed magnetic resonance imaging (MRI) brain scans from more than 150 autistic children and compared them with MRI scans from similarly aged children but who did not have autism. They looked at variation in the thickness of the cortex, the outermost layer of the brain, and linked this to gene activity in the brain. They discovered a set of genes linked to differences in the thickness of the cortex between autistic kids and non-autistic children. Many of these genes are involved in how brain cells (or neurons) communicate with each other. Interestingly, many of the genes identified in this study have been shown to have lower gene activity at the molecular level in autistic post mortem brain tissue samples. The study was led by two postdoctoral scientists, Dr Rafael Romero-Garcia and Dr Richard Bethlehem, and Varun Warrier, a PhD student. The study is published in the journal Molecular Psychiatry and provides the first evidence linking differences in the autistic brain to genes with atypical gene activity in autistic brains. Dr Richard Bethlehem said: "This takes us one step closer to understanding why the brains of people with and without autism may differ from one another. We have long known that autism itself is genetic, but by combining these different data sets (brain imaging and genetics) we can now identify more precisely which genes are linked to how the autistic brain may differ. In essence, we are beginning to link molecular and macroscopic levels of analysis to better understand the diversity and complexity of autism." 
Varun Warrier added: "We now need to confirm these results using new genetic and brain scan data so as to understand how exactly gene activity and thickness of the cortex are linked in autism." "The identification of genes linked to brain changes in autism is just the first step," said Dr Rafael Romero-Garcia. "These promising findings reveal how important multidisciplinary approaches are if we want to better understand the molecular mechanisms underlying autism. The complexity of this condition requires a joint effort from a wide scientific community."
10.1038/s41380-018-0023-7
Medicine
Study underscores need for multidisciplinary care for COVID-19 long-haulers
Post-acute COVID-19 syndrome, Nature Medicine (2021). DOI: 10.1038/s41591-021-01283-z Journal information: Nature Medicine
http://dx.doi.org/10.1038/s41591-021-01283-z
https://medicalxpress.com/news/2021-03-underscores-multidisciplinary-covid-long-haulers.html
Abstract Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the pathogen responsible for the coronavirus disease 2019 (COVID-19) pandemic, which has resulted in global healthcare crises and strained health resources. As the population of patients recovering from COVID-19 grows, it is paramount to establish an understanding of the healthcare issues surrounding them. COVID-19 is now recognized as a multi-organ disease with a broad spectrum of manifestations. Similarly to post-acute viral syndromes described in survivors of other virulent coronavirus epidemics, there are increasing reports of persistent and prolonged effects after acute COVID-19. Patient advocacy groups, many members of which identify themselves as long haulers, have helped contribute to the recognition of post-acute COVID-19, a syndrome characterized by persistent symptoms and/or delayed or long-term complications beyond 4 weeks from the onset of symptoms. Here, we provide a comprehensive review of the current literature on post-acute COVID-19, its pathophysiology and its organ-specific sequelae. Finally, we discuss relevant considerations for the multidisciplinary care of COVID-19 survivors and propose a framework for the identification of those at high risk for post-acute COVID-19 and their coordinated management through dedicated COVID-19 clinics. Main Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the pathogen responsible for coronavirus disease 2019 (COVID-19), has caused morbidity and mortality at an unprecedented scale globally 1 . Scientific and clinical evidence is evolving on the subacute and long-term effects of COVID-19, which can affect multiple organ systems 2 . Early reports suggest residual effects of SARS-CoV-2 infection, such as fatigue, dyspnea, chest pain, cognitive disturbances, arthralgia and decline in quality of life 3 , 4 , 5 . 
Cellular damage, a robust innate immune response with inflammatory cytokine production, and a pro-coagulant state induced by SARS-CoV-2 infection may contribute to these sequelae 6 , 7 , 8 . Survivors of previous coronavirus infections, including the SARS epidemic of 2003 and the Middle East respiratory syndrome (MERS) outbreak of 2012, have demonstrated a similar constellation of persistent symptoms, reinforcing concern for clinically significant sequelae of COVID-19 (refs. 9 , 10 , 11 , 12 , 13 , 14 , 15 ). Systematic study of sequelae after recovery from acute COVID-19 is needed to develop an evidence-based multidisciplinary team approach for caring for these patients, and to inform research priorities. A comprehensive understanding of patient care needs beyond the acute phase will help in the development of infrastructure for COVID-19 clinics that will be equipped to provide integrated multispecialty care in the outpatient setting. While the definition of the post-acute COVID-19 timeline is evolving, it has been suggested to include persistence of symptoms or development of sequelae beyond 3 or 4 weeks from the onset of acute symptoms of COVID-19 (refs. 16 , 17 ), as replication-competent SARS-CoV-2 has not been isolated after 3 weeks 18 . For the purpose of this review, we defined post-acute COVID-19 as persistent symptoms and/or delayed or long-term complications of SARS-CoV-2 infection beyond 4 weeks from the onset of symptoms (Fig. 1 ). Based on recent literature, it is further divided into two categories: (1) subacute or ongoing symptomatic COVID-19, which includes symptoms and abnormalities present from 4–12 weeks beyond acute COVID-19; and (2) chronic or post-COVID-19 syndrome, which includes symptoms and abnormalities persisting or present beyond 12 weeks of the onset of acute COVID-19 and not attributable to alternative diagnoses 17 , 19 . 
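Expressed as a toy lookup (purely illustrative; the week cutoffs follow the definitions above, and the handling of the exact 12-week boundary is an assumption):

```python
# Encode the review's timeline: acute COVID-19 up to 4 weeks from symptom
# onset; subacute/ongoing symptomatic COVID-19 from 4-12 weeks; chronic/
# post-COVID-19 syndrome beyond 12 weeks.
def covid_phase(weeks_since_onset: float) -> str:
    if weeks_since_onset < 4:
        return "acute COVID-19"
    if weeks_since_onset <= 12:
        return "subacute / ongoing symptomatic COVID-19"
    return "chronic / post-COVID-19 syndrome"

print(covid_phase(2))    # acute COVID-19
print(covid_phase(8))    # subacute / ongoing symptomatic COVID-19
print(covid_phase(16))   # chronic / post-COVID-19 syndrome
```

In clinical use this classification additionally requires that chronic symptoms are not attributable to alternative diagnoses, which a lookup like this cannot capture.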
Herein, we summarize the epidemiology and organ-specific sequelae of post-acute COVID-19 and address management considerations for the interdisciplinary comprehensive care of these patients in COVID-19 clinics (Box 1 and Fig. 2 ). Fig. 1: Timeline of post-acute COVID-19. Acute COVID-19 usually lasts until 4 weeks from the onset of symptoms, beyond which replication-competent SARS-CoV-2 has not been isolated. Post-acute COVID-19 is defined as persistent symptoms and/or delayed or long-term complications beyond 4 weeks from the onset of symptoms. The common symptoms observed in post-acute COVID-19 are summarized. Full size image Fig. 2: Interdisciplinary management in COVID-19 clinics. Multidisciplinary collaboration is essential to provide integrated outpatient care to survivors of acute COVID-19 in COVID-19 clinics. Depending on resources, prioritization may be considered for those at high risk for post-acute COVID-19, defined as those with severe illness during acute COVID-19 and/or requirement for care in an ICU, advanced age and the presence of organ comorbidities (pre-existing respiratory disease, obesity, diabetes, hypertension, chronic cardiovascular disease, chronic kidney disease, post-organ transplant or active cancer). The pulmonary/cardiovascular management plan was adapted from a guidance document for patients hospitalized with COVID-19 pneumonia 76 . HRCT, high-resolution computed tomography; PE, pulmonary embolism. 
Box 1 Summary of post-acute COVID-19 by organ system

Pulmonary
- Dyspnea, decreased exercise capacity and hypoxia are commonly persistent symptoms and signs
- Reduced diffusion capacity, restrictive pulmonary physiology, and ground-glass opacities and fibrotic changes on imaging have been noted at follow-up of COVID-19 survivors
- Assessment of progression or recovery of pulmonary disease and function may include home pulse oximetry, 6MWTs, PFTs, high-resolution computed tomography of the chest and computed tomography pulmonary angiogram as clinically appropriate

Hematologic
- Thromboembolic events have been noted to be <5% in post-acute COVID-19 in retrospective studies
- The duration of the hyperinflammatory state induced by infection with SARS-CoV-2 is unknown
- Direct oral anticoagulants and low-molecular-weight heparin may be considered for extended thromboprophylaxis after risk–benefit discussion in patients with predisposing risk factors for immobility, persistently elevated d -dimer levels (greater than twice the upper limit of normal) and other high-risk comorbidities such as cancer

Cardiovascular
- Persistent symptoms may include palpitations, dyspnea and chest pain
- Long-term sequelae may include increased cardiometabolic demand, myocardial fibrosis or scarring (detectable via cardiac MRI), arrhythmias, tachycardia and autonomic dysfunction
- Patients with cardiovascular complications during acute infection or those experiencing persistent cardiac symptoms may be monitored with serial clinical, echocardiogram and electrocardiogram follow-up

Neuropsychiatric
- Persistent abnormalities may include fatigue, myalgia, headache, dysautonomia and cognitive impairment (brain fog)
- Anxiety, depression, sleep disturbances and PTSD have been reported in 30–40% of COVID-19 survivors, similar to survivors of other pathogenic coronaviruses
- The pathophysiology of neuropsychiatric complications is mechanistically diverse and entails immune dysregulation, inflammation, microvascular thrombosis, iatrogenic effects of medications and psychosocial impacts of infection

Renal
- Resolution of AKI during acute COVID-19 occurs in the majority of patients; however, reduced eGFR has been reported at 6 months follow-up
- COVAN may be the predominant pattern of renal injury in individuals of African descent
- COVID-19 survivors with persistent impaired renal function may benefit from early and close follow-up in AKI survivor clinics

Endocrine
- Endocrine sequelae may include new or worsening control of existing diabetes mellitus, subacute thyroiditis and bone demineralization
- Patients with newly diagnosed diabetes in the absence of traditional risk factors for type 2 diabetes, suspected hypothalamic–pituitary–adrenal axis suppression or hyperthyroidism should undergo the appropriate laboratory testing and should be referred to endocrinology

Gastrointestinal and hepatobiliary
- Prolonged viral fecal shedding can occur in COVID-19 even after negative nasopharyngeal swab testing
- COVID-19 has the potential to alter the gut microbiome, including enrichment of opportunistic organisms and depletion of beneficial commensals

Dermatologic
- Hair loss is the predominant symptom and has been reported in approximately 20% of COVID-19 survivors

MIS-C
- Diagnostic criteria: <21 years old with fever, elevated inflammatory markers, multiple organ dysfunction, current or recent SARS-CoV-2 infection and exclusion of other plausible diagnoses
- Typically affects children >7 years and disproportionately of African, Afro-Caribbean or Hispanic origin
- Cardiovascular (coronary artery aneurysm) and neurologic (headache, encephalopathy, stroke and seizure) complications can occur

Epidemiology Early reports have now emerged on post-acute infectious consequences of COVID-19, with studies from the United States, Europe and China reporting outcomes for those who survived hospitalization for acute COVID-19. 
The findings from studies reporting outcomes in subacute/ongoing symptomatic COVID-19 and chronic/post-COVID-19 syndrome are summarized in Table 1.

Table 1: Findings from clinical studies on the prevalence of post-acute COVID-19 syndrome

An observational cohort study from 38 hospitals in Michigan, United States, evaluated the outcomes of 1,250 patients discharged alive at 60 d by utilizing medical record abstraction and telephone surveys (hereby referred to as the post-acute COVID-19 US study) 20. During the study period, 6.7% of patients died, while 15.1% required re-admission. Of the 488 patients who completed the telephone survey, 32.6% reported persistent symptoms, including 18.9% with new or worsened symptoms. Dyspnea while walking up the stairs (22.9%) was the most commonly reported symptom, followed by cough (15.4%) and persistent loss of taste and/or smell (13.1%). Similar findings were reported from studies in Europe. A post-acute outpatient service established in Italy (hereby referred to as the post-acute COVID-19 Italian study) 3 reported persistence of symptoms in 87.4% of 143 patients discharged from hospital who recovered from acute COVID-19, at a mean follow-up of 60 d from the onset of the first symptom. Fatigue (53.1%), dyspnea (43.4%), joint pain (27.3%) and chest pain (21.7%) were the most commonly reported symptoms, with 55% of patients continuing to experience three or more symptoms. A decline in quality of life, as measured by the EuroQol visual analog scale, was noted in 44.1% of patients in this study. A study focused on 150 survivors of non-critical COVID-19 from France similarly reported persistence of symptoms in two-thirds of individuals at 60 d follow-up, with one-third reporting feeling worse than at the onset of acute COVID-19 (ref. 21).
Other studies, including in-person prospective follow-up studies of 110 survivors in the United Kingdom at 8–12 weeks after hospital admission 22 and 277 survivors in Spain at 10–14 weeks after disease onset 23, as well as survey studies of 100 COVID-19 survivors in the United Kingdom at 4–8 weeks post-discharge 24, 183 individuals in the United States at 35 d post-discharge 25 and 120 patients discharged from hospital in France at 100 d following admission 26, reported similar findings. Fatigue, dyspnea and psychological distress, such as post-traumatic stress disorder (PTSD), anxiety, depression and concentration and sleep abnormalities, were noted in approximately 30% or more of study participants at the time of follow-up. In a prospective cohort study from Wuhan, China, long-term consequences of acute COVID-19 were evaluated by comprehensive in-person evaluation of 1,733 patients at 6 months from symptom onset (hereby referred to as the post-acute COVID-19 Chinese study) 5. The study utilized survey questionnaires, physical examination, 6-min walk tests (6MWTs) and blood tests and, in selected cases, pulmonary function tests (PFTs), high-resolution computed tomography of the chest and ultrasonography to evaluate post-acute COVID-19 end-organ injury. A majority of the patients (76%) reported at least one symptom. Similar to other studies, fatigue/muscular weakness was the most commonly reported symptom (63%), followed by sleep difficulties (26%) and anxiety/depression (23%). These studies provide early evidence to aid the identification of people at high risk for post-acute COVID-19.
The severity of illness during acute COVID-19 (measured, for example, by admission to an intensive care unit (ICU) and/or requirement for non-invasive and/or invasive mechanical ventilation) has been significantly associated with the presence or persistence of symptoms (such as dyspnea, fatigue/muscular weakness and PTSD), reduction in health-related quality of life scores, pulmonary function abnormalities and radiographic abnormalities in the post-acute COVID-19 setting 5, 22, 24. Furthermore, Halpin et al. 24 reported additional associations between pre-existing respiratory disease, higher body mass index, older age and Black, Asian and minority ethnic (BAME) background and dyspnea at 4–8 weeks follow-up. The post-acute COVID-19 Chinese study also suggested sex differences, with women more likely to experience fatigue and anxiety/depression at 6 months follow-up 5, similar to SARS survivors 15. While other comorbidities, such as diabetes, obesity, chronic cardiovascular or kidney disease, cancer and organ transplantation, are well-recognized determinants of increased severity and mortality related to acute COVID-19 (refs. 2, 27), their association with post-acute COVID-19 outcomes in those who have recovered remains to be determined.

Pathophysiology

The predominant pathophysiologic mechanisms of acute COVID-19 include the following: direct viral toxicity; endothelial damage and microvascular injury; immune system dysregulation and stimulation of a hyperinflammatory state; hypercoagulability with resultant in situ thrombosis and macrothrombosis; and maladaptation of the angiotensin-converting enzyme 2 (ACE2) pathway 2. The overlap of sequelae of post-acute COVID-19 with those of SARS and MERS may be explained by phylogenetic similarities between the responsible pathogenic coronaviruses. SARS-CoV-2 shares 79% genomic sequence identity with SARS-CoV-1 and 50% with MERS-CoV 28, 29.
Moreover, SARS-CoV-1 and SARS-CoV-2 share the same host cell receptor: ACE2. However, there are notable differences, such as the higher affinity of SARS-CoV-2 for ACE2 compared with SARS-CoV-1, which is probably due to differences in the receptor-binding domain of the spike protein that mediates contact with ACE2. In contrast with the other structural genes, the spike gene has diverged in SARS-CoV-2, with only 73% amino acid similarity with SARS-CoV-1 in the receptor-binding domain of the spike protein 30 . Moreover, an additional S1–S2 cleavage site in SARS-CoV-2 enables more effective cleavage by host proteases and facilitates more effective binding 30 , 31 . These mechanisms have probably contributed to the more effective and widespread transmission of SARS-CoV-2. Potential mechanisms contributing to the pathophysiology of post-acute COVID-19 include: (1) virus-specific pathophysiologic changes; (2) immunologic aberrations and inflammatory damage in response to the acute infection; and (3) expected sequelae of post-critical illness. While the first two are discussed in more detail in the organ-specific sections below, post-intensive care syndrome is now well recognized and includes new or worsening abnormalities in physical, cognitive and psychiatric domains after critical illness 32 , 33 , 34 , 35 , 36 . The pathophysiology of post-intensive care syndrome is multifactorial and has been proposed to involve microvascular ischemia and injury, immobility and metabolic alterations during critical illness 34 . Additionally, similar to previous studies of SARS survivors, 25–30% of whom experienced secondary infections 37 , 38 , survivors of acute COVID-19 may be at increased risk of infections with bacterial, fungal (pulmonary aspergillosis) or other pathogens 39 , 40 , 41 . However, these secondary infections do not explain the persistent and prolonged sequelae of post-acute COVID-19. 
Pulmonary sequelae

Epidemiology and clinical manifestations

A spectrum of pulmonary manifestations, ranging from dyspnea (with or without chronic oxygen dependence) to difficult ventilator weaning and fibrotic lung damage, has been reported among COVID-19 survivors. Similar to survivors of acute respiratory distress syndrome (ARDS) from other etiologies, dyspnea is the most common persistent symptom beyond acute COVID-19, with a prevalence of 42–66% at 60–100 d follow-up 3, 20, 24, 26. In the post-acute COVID-19 Chinese study, the median 6-min walking distance was lower than normal reference values in approximately one-quarter of patients at 6 months 5, a prevalence similar to that in SARS and MERS survivors 9. The need for supplemental oxygen due to persistent hypoxemia, or a new requirement for continuous positive airway pressure or other breathing support while sleeping, was reported in 6.6% and 6.9% of patients, respectively, at 60 d follow-up in the post-acute COVID-19 US study 20. Among 1,800 patients requiring tracheostomies during acute COVID-19, only 52% were successfully weaned from mechanical ventilation 1 month later in a national cohort study from Spain 42. A reduction in diffusion capacity is the most commonly reported physiologic impairment in post-acute COVID-19, with the decrement directly related to the severity of acute illness 5, 43, 44, 45, 46, consistent with studies of SARS and MERS survivors 9, mild H1N1 influenza survivors 47 and historical ARDS survivors 48. Although less common, hospitalized COVID-19 survivors have been found to have restrictive pulmonary physiology at 3 and 6 months 5, 49, which has also been observed in historical ARDS survivor populations 48, 50. Approximately 50% of 349 patients who underwent high-resolution computed tomography of the chest at 6 months had at least one abnormal pattern in the post-acute COVID-19 Chinese study 5.
The majority of abnormalities observed by computed tomography were ground-glass opacities. This study did not investigate chronic pulmonary embolism, as computed tomography pulmonary angiograms were not obtained. The long-term risks of chronic pulmonary embolism and consequent pulmonary hypertension are unknown at this time. Fibrotic changes on computed tomography scans of the chest, consisting primarily of reticulations or traction bronchiectasis, were observed 3 months after hospital discharge in approximately 25% and 65% of survivors in cohort studies of mild-to-moderate cases 45 and mostly severe cases 49, respectively, as distinguished by a requirement for supplemental oxygen. However, these prevalence estimates should be considered preliminary given the sample size of each of these cohorts. The prevalence estimates of post-acute COVID-19 sequelae from these studies suggest that patients with greater severity of acute COVID-19 (especially those requiring a high-flow nasal cannula and non-invasive or invasive mechanical ventilation) are at the highest risk for long-term pulmonary complications, including persistent diffusion impairment and radiographic pulmonary abnormalities (such as pulmonary fibrosis) 5, 22.

Pathology and pathophysiology

Viral-dependent mechanisms (including invasion of alveolar epithelial and endothelial cells by SARS-CoV-2) and viral-independent mechanisms (such as immunological damage, including perivascular inflammation) contribute to the breakdown of the endothelial–epithelial barrier, with invasion of monocytes and neutrophils and extravasation of a protein-rich exudate into the alveolar space, consistent with other forms of ARDS 51. All phases of diffuse alveolar damage have been reported in COVID-19 autopsy series, with organizing and focal fibroproliferative diffuse alveolar damage seen later in the disease course 52, 53, consistent with other etiologies of ARDS 54, 55.
Rare areas of myofibroblast proliferation, mural fibrosis and microcystic honeycombing have also been noted. This fibrotic state may be provoked by cytokines such as interleukin-6 (IL-6) and transforming growth factor-β, which have been implicated in the development of pulmonary fibrosis 6, 56, 57, 58, and may predispose to bacterial colonization and subsequent infection 59, 60, 61. Analysis of lung tissue from five cases with severe COVID-19-associated pneumonia, including two autopsy specimens and three specimens from explanted lungs of recipients of lung transplantation, showed histopathologic and single-cell RNA expression patterns similar to end-stage pulmonary fibrosis without persistent SARS-CoV-2 infection, suggesting that some individuals develop accelerated lung fibrosis after resolution of the active infection 62. Pulmonary vascular microthrombosis and macrothrombosis have been observed in 20–30% of patients with COVID-19 (refs. 63, 64, 65, 66, 67), a rate higher than in other critically ill patient populations (1–10%) 68, 69. In addition, the severity of endothelial injury and widespread thrombosis with microangiopathy seen on lung autopsy is greater than that seen in ARDS from influenza 70, 71.

Management considerations

Post-hospital discharge care of COVID-19 survivors has been recognized as a major research priority by professional organizations 72, and guidance for the management of these patients is still evolving 19. Home pulse oximetry using Food and Drug Administration-approved devices has been suggested as a useful tool for monitoring patients with persistent symptoms; however, supporting evidence is currently lacking 73, 74. Some experts have also proposed evaluation with serial PFTs and 6MWTs for those with persistent dyspnea, as well as high-resolution computed tomography of the chest at 6 and 12 months 75.
In a guidance document adopted by the British Thoracic Society, algorithms for evaluating COVID-19 survivors in the first 3 months after hospital discharge are based on the severity of acute COVID-19 and whether or not the patient received ICU-level care 76 . Algorithms for both severe and mild-to-moderate COVID-19 groups recommend clinical assessment and chest X-ray in all patients at 12 weeks, along with consideration of PFTs, 6MWTs, sputum sampling and echocardiogram according to clinical judgment. Based on this 12-week assessment, patients are further recommended to be evaluated with high-resolution computed tomography of the chest, computed tomography pulmonary angiogram or echocardiogram, or discharged from follow-up. In addition to this 12-week assessment, an earlier clinical assessment for respiratory, psychiatric and thromboembolic sequelae, as well as rehabilitation needs, is also recommended at 4–6 weeks after discharge for those with severe acute COVID-19, defined as those who had severe pneumonia, required ICU care, are elderly or have multiple comorbidities. Treatment with corticosteroids may be beneficial in a subset of patients with post-COVID inflammatory lung disease, as suggested by a preliminary observation of significant symptomatic and radiological improvement in a small UK cohort of COVID-19 survivors with organizing pneumonia at 6 weeks after hospital discharge 77 . Steroid use during acute COVID-19 was not associated with diffusion impairment and radiographic abnormalities at 6 months follow-up in the post-acute COVID-19 Chinese study 5 . Lung transplantation has previously been performed for fibroproliferative lung disease after ARDS 78 due to influenza A (H1N1) infection 79 and COVID-19 (refs. 62 , 80 ). Clinical trials of antifibrotic therapies to prevent pulmonary fibrosis after COVID-19 are underway (Table 2 ) 81 . 
Table 2: Active research studies and questions pertaining to post-acute COVID-19

Hematologic sequelae

Epidemiology and clinical manifestations

Retrospective data on post-acute thromboembolic events, although limited by small sample sizes, variability in outcome ascertainment and inadequate systematic follow-up, suggest that the rate of venous thromboembolism (VTE) in the post-acute COVID-19 setting is <5%. A single-center report of 163 patients from the United States without post-discharge thromboprophylaxis suggested a 2.5% cumulative incidence of thrombosis at 30 d following discharge, including segmental pulmonary embolism, intracardiac thrombus, thrombosed arteriovenous fistula and ischemic stroke 82. The median time to these events was 23 d post-discharge. In the same study, there was a 3.7% cumulative incidence of bleeding at 30 d post-discharge, mostly related to mechanical falls. Similar VTE rates have been reported in retrospective studies from the United Kingdom 83, 84. A prospective study from Belgium at 6 weeks post-discharge follow-up assessed d-dimer levels and venous ultrasound in 102 patients, 8% of whom received post-discharge thromboprophylaxis 85; only one asymptomatic VTE event was reported. Similarly, no DVT was seen in 390 participants (selected using a stratified sampling procedure to include those with a higher severity of acute COVID-19) who had ultrasonography of the lower extremities in the post-acute COVID-19 Chinese study 5. Larger ongoing studies, such as CORONA-VTE, CISCO-19 and CORE-19, will help to establish more definitive rates of such complications 86, 87.

Pathology and pathophysiology

Unlike the consumptive coagulopathy characteristic of disseminated intravascular coagulation, COVID-19-associated coagulopathy is consistent with a hyperinflammatory and hypercoagulable state 88, 89. This may explain the disproportionately high rates (20–30%) of thrombotic rather than bleeding complications in acute COVID-19 (ref. 90). Mechanisms of thromboinflammation include endothelial injury 70, 91, 92, 93, complement activation 94, 95, 96, platelet activation and platelet–leukocyte interactions 97, 98, 99, neutrophil extracellular traps 95, 100, 101, release of pro-inflammatory cytokines 102, disruption of normal coagulant pathways 103 and hypoxia 104, similar to the pathophysiology of thrombotic microangiopathy syndromes 105. The risk of thrombotic complications in the post-acute COVID-19 phase is probably linked to the duration and severity of the hyperinflammatory state, although how long this persists is unknown.

Management considerations

Although conclusive evidence is not yet available, extended post-hospital discharge thromboprophylaxis (up to 6 weeks), and prolonged primary thromboprophylaxis (up to 45 d) in those managed as outpatients, may have a more favorable risk–benefit ratio in COVID-19 given the noted increase in thrombotic complications during the acute phase; this is an area of active investigation (NCT04508439, COVID-PREVENT (NCT04416048), ACTIV4 (NCT04498273) and PREVENT-HD (NCT04508023)) 106, 107. Elevated d-dimer levels (greater than twice the upper limit of normal), in addition to comorbidities such as cancer and immobility, may help to risk stratify patients at the highest risk of post-acute thrombosis; however, individual patient-level considerations of risk versus benefit should dictate recommendations at this time 86, 108, 109, 110. Direct oral anticoagulants and low-molecular-weight heparin are preferred over vitamin K antagonists because therapeutic levels do not need frequent monitoring and the risk of drug–drug interactions is lower 108, 109. Therapeutic anticoagulation for those with imaging-confirmed VTE is recommended for ≥3 months, similar to provoked VTE 72, 111.
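The risk-stratification criteria above (d-dimer greater than twice the upper limit of normal, plus a comorbidity such as cancer or immobility) can be read as a simple two-part screening rule. The sketch below is purely illustrative, not a validated clinical decision tool; the function name, the assumed assay upper limit of normal and the exact combination rule are our assumptions, not specified by the text.

```python
# Illustrative encoding of the extended-thromboprophylaxis criteria
# described in the text: elevated d-dimer (>2x upper limit of normal)
# AND at least one predisposing risk factor. Not for clinical use.

D_DIMER_ULN_NG_ML = 500  # assumed assay upper limit of normal; varies by laboratory

def candidate_for_extended_prophylaxis(d_dimer_ng_ml: float,
                                       cancer: bool = False,
                                       immobile: bool = False) -> bool:
    """True if d-dimer exceeds twice the assay ULN and at least one
    of the predisposing comorbidities named in the text is present."""
    elevated = d_dimer_ng_ml > 2 * D_DIMER_ULN_NG_ML
    high_risk = cancer or immobile
    return elevated and high_risk
```

As the text emphasizes, such a rule would only flag patients for an individualized risk-benefit discussion, not mandate anticoagulation.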
The role of antiplatelet agents such as aspirin as an alternative to (or in conjunction with) anticoagulation agents for thromboprophylaxis in COVID-19 has not yet been defined and is currently being investigated as a prolonged primary thromboprophylaxis strategy in those managed as outpatients (ACTIV4 (NCT04498273)). Physical activity and ambulation should be recommended to all patients when appropriate 102.

Cardiovascular sequelae

Epidemiology and clinical manifestations

Chest pain was reported in up to ~20% of COVID-19 survivors at 60 d follow-up 3, 21, while ongoing palpitations and chest pain were reported in 9% and 5%, respectively, at 6 months follow-up in the post-acute COVID-19 Chinese study 5. An increased incidence of stress cardiomyopathy has been noted during the COVID-19 pandemic compared with pre-pandemic periods (7.8% versus 1.5–1.8%, respectively), although mortality and re-hospitalization rates in these patients are similar 112. Preliminary data from cardiac magnetic resonance imaging (MRI) suggest that ongoing myocardial inflammation may be present at rates as high as 60% more than 2 months after a diagnosis of COVID-19 at a COVID-testing center, although the reproducibility and consistency of these data have been debated 113. In a study of 26 competitive college athletes with mild or asymptomatic SARS-CoV-2 infection, cardiac MRI revealed features diagnostic of myocarditis in 15% of participants and previous myocardial injury in 30.8% of participants 114.

Pathology and pathophysiology

Mechanisms perpetuating cardiovascular sequelae in post-acute COVID-19 include direct viral invasion, downregulation of ACE2, inflammation and the immunologic response affecting the structural integrity of the myocardium, pericardium and conduction system. Autopsy studies of 39 cases of COVID-19 detected virus in the heart tissue of 62.5% of patients 115.
The subsequent inflammatory response may lead to cardiomyocyte death and fibro-fatty displacement of desmosomal proteins important for cell-to-cell adherence 116, 117. Recovered patients may have persistently increased cardiometabolic demand, as observed in long-term evaluation of SARS survivors 118. This may be associated with reduced cardiac reserve, corticosteroid use and dysregulation of the renin–angiotensin–aldosterone system (RAAS). Myocardial fibrosis or scarring, and the resultant cardiomyopathy from viral infection, can lead to re-entrant arrhythmias 119. COVID-19 may also perpetuate arrhythmias through a heightened catecholaminergic state driven by cytokines such as IL-6, IL-1 and tumor necrosis factor-α, which can prolong ventricular action potentials by modulating cardiomyocyte ion channel expression 120. Autonomic dysfunction after viral illness, resulting in postural orthostatic tachycardia syndrome and inappropriate sinus tachycardia, has previously been reported as a result of adrenergic modulation 121, 122.

Management considerations

Serial clinical and imaging evaluation with electrocardiogram and echocardiogram at 4–12 weeks may be considered in those with cardiovascular complications during acute infection or persistent cardiac symptoms 76, 123. Current evidence does not support the routine utilization of advanced cardiac imaging, and this should be considered on a case-by-case basis. Recommendations for competitive athletes with cardiovascular complications related to COVID-19 include abstinence from competitive sports or aerobic activity for 3–6 months until resolution of myocardial inflammation by cardiac MRI or troponin normalization 124, 125. Despite initial theoretical concerns regarding increased levels of ACE2 and the risk of acute COVID-19 with the use of RAAS inhibitors, these agents have been shown to be safe and should be continued in those with stable cardiovascular disease 126, 127.
Instead, abrupt cessation of RAAS inhibitors may be potentially harmful 128. In patients with ventricular dysfunction, guideline-directed medical therapy should be initiated and optimized as tolerated 129; withdrawal of guideline-directed medical therapy was associated with higher mortality in the acute to post-acute phase in a retrospective study of 3,080 patients with COVID-19 (ref. 130). Patients with postural orthostatic tachycardia syndrome and inappropriate sinus tachycardia may benefit from a low-dose beta blocker for heart rate management and reduction of adrenergic activity 131. Attention is warranted when using drugs such as anti-arrhythmic agents (for example, amiodarone) in patients with fibrotic pulmonary changes after COVID-19 (ref. 132).

Neuropsychiatric sequelae

Epidemiology and clinical manifestations

Similar to chronic post-SARS syndrome, COVID-19 survivors have reported a post-viral syndrome of chronic malaise, diffuse myalgia, depressive symptoms and non-restorative sleep 133, 134. Other post-acute manifestations of COVID-19 include migraine-like headaches 135, 136 (often refractory to traditional analgesics 137) and late-onset headaches ascribed to high cytokine levels. In a follow-up study of 100 patients, approximately 38% had ongoing headaches after 6 weeks 138. Loss of taste and smell may also persist after resolution of other symptoms in approximately one-tenth of patients at up to 6 months follow-up 5, 20, 22, 26. Cognitive impairment has been noted with or without fluctuations, including brain fog, which may manifest as difficulties with concentration, memory, receptive language and/or executive function 139, 140, 141. Individuals with COVID-19 experience a range of psychiatric symptoms persisting or presenting months after the initial infection 142.
In a cohort of 402 COVID-19 survivors in Italy 1 month after hospitalization, approximately 56% screened positive in at least one of the domains evaluated for psychiatric sequelae (PTSD, depression, anxiety, insomnia and obsessive compulsive symptomatology) 143 . Clinically significant depression and anxiety were reported in approximately 30–40% of patients following COVID-19, similar to patients with previous severe coronavirus infections 11 , 12 , 15 , 143 , 144 . Anxiety, depression and sleep difficulties were present in approximately one-quarter of patients at 6 months follow-up in the post-acute COVID-19 Chinese study 5 . Notably, clinically significant PTSD symptoms were reported in approximately 30% of patients with COVID-19 requiring hospitalization, and may present early during acute infection or months later 143 , 144 . A real-world, large-scale dataset analysis of 62,354 COVID-19 survivors from 54 healthcare organizations in the United States estimated the incidence of first and recurrent psychiatric illness between 14 and 90 d of diagnosis to be 18.1% 145 . More importantly, it reported the estimated overall probability of diagnosis of a new psychiatric illness within 90 d after COVID-19 diagnosis to be 5.8% (anxiety disorder = 4.7%; mood disorder = 2%; insomnia = 1.9%; dementia (among those ≥65 years old) = 1.6%) among a subset of 44,759 patients with no known previous psychiatric illness. These values were all significantly higher than in matched control cohorts of patients diagnosed with influenza and other respiratory tract infections. Similar to other critical illnesses, the complications of acute COVID-19, such as ischemic or hemorrhagic stroke 146 , hypoxic–anoxic damage, posterior reversible encephalopathy syndrome 147 and acute disseminated myelitis 148 , 149 , may lead to lingering or permanent neurological deficits requiring extensive rehabilitation. 
Additionally, acute critical illness myopathy and neuropathies, whether arising during acute COVID-19 or from the effects of neuromuscular blocking agents, can leave residual symptoms persisting for weeks to months 36, 150.

Pathology and pathophysiology

The mechanisms contributing to neuropathology in COVID-19 can be grouped into overlapping categories of direct viral infection, severe systemic inflammation, neuroinflammation, microvascular thrombosis and neurodegeneration 139, 151, 152, 153. While viral particles in the brain have previously been reported with other coronavirus infections 154, there is not yet compelling evidence of SARS-CoV-2 infecting neurons. However, autopsy series have shown that SARS-CoV-2 may cause changes in brain parenchyma and vessels, possibly through effects on the blood–brain and blood–cerebrospinal fluid barriers, which drive inflammation in neurons, supportive cells and brain vasculature 155, 156. Furthermore, levels of immune activation directly correlate with cognitive–behavioral changes 157. Inflammaging (chronic low-level brain inflammation), along with the reduced ability to respond to new antigens and an accumulation of memory T cells (hallmarks of immunosenescence in aging and tissue injury 158), may play a role in the persistent effects of COVID-19. Other proposed mechanisms include dysfunctional lymphatic drainage from circumventricular organs 159, as well as viral invasion of the extracellular spaces of the olfactory epithelium and passive diffusion and axonal transport through the olfactory complex 160. Biomarkers of cerebral injury, such as elevated peripheral blood levels of neurofilament light chain, have been found in patients with COVID-19 (ref. 161), with a more sustained increase in severe infections 162, suggesting the possibility of more chronic neuronal injury. Post-COVID brain fog in critically ill patients with COVID-19 may evolve from mechanisms such as deconditioning or PTSD 141.
However, reports of brain fog after mild COVID-19 suggest that dysautonomia may contribute as well 163, 164. Finally, long-term cognitive impairment is well recognized in the post-critical illness setting, occurring in 20–40% of patients discharged from an ICU 165.

Management considerations

Standard therapies should be implemented for neurologic complications such as headaches, with imaging evaluation and referral to a specialist reserved for refractory headache 166. Further neuropsychological evaluation should be considered in the post-acute illness setting in patients with cognitive impairment. Standard screening tools should be used to identify patients with anxiety, depression, sleep disturbances, PTSD, dysautonomia and fatigue 76, 141.

Renal sequelae

Epidemiology and clinical manifestations

Severe acute kidney injury (AKI) requiring renal replacement therapy (RRT) occurs in 5% of all hospitalized patients and in 20–31% of critically ill patients with acute COVID-19, particularly among those with severe infections requiring mechanical ventilation 167, 168, 169, 170. Early studies with short-term follow-up of patients requiring RRT showed that 27–64% were dialysis independent by 28 d or ICU discharge 169, 171. Decreased estimated glomerular filtration rate (eGFR; defined as <90 ml/min per 1.73 m²) was reported in 35% of patients at 6 months in the post-acute COVID-19 Chinese study, and 13% developed a new-onset reduction of eGFR after documented normal renal function during acute COVID-19 (ref. 5). Pending adequate longer-term follow-up data, patients who require RRT for severe AKI experience high mortality, with a survival probability of 0.46 at 60 d and renal recovery reported in 84% of survivors 170.

Pathology and pathophysiology

SARS-CoV-2 has been isolated from renal tissue 172, and acute tubular necrosis is the primary finding noted in renal biopsies 173, 174 and autopsies 175, 176 in COVID-19.
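The eGFR threshold cited above (<90 ml/min per 1.73 m²) is computed from serum creatinine using a standard estimating equation. A minimal sketch is shown below using the CKD-EPI 2009 creatinine equation; note that the choice of equation is our assumption, since the text does not state which formula the study used.

```python
# Illustrative only: estimate GFR from serum creatinine with the
# CKD-EPI 2009 equation and apply the <90 ml/min per 1.73 m^2
# threshold for "decreased eGFR" used in the text. Not for clinical use.

def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool = False) -> float:
    """Estimated GFR (ml/min per 1.73 m^2) from serum creatinine (mg/dl)."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine scaling
    alpha = -0.329 if female else -0.411  # sex-specific exponent
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def decreased_egfr(egfr: float) -> bool:
    # Threshold for decreased eGFR as defined in the text
    return egfr < 90.0
```

For example, a 60-year-old man with a serum creatinine of 1.0 mg/dl has an eGFR of about 81 ml/min per 1.73 m², which this threshold would classify as decreased.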
COVID-19-associated nephropathy (COVAN) is characterized by the collapsing variant of focal segmental glomerulosclerosis, with involution of the glomerular tuft in addition to acute tubular injury, and is thought to develop in response to interferon and chemokine activation 177 , 178 . Association with APOL1 risk alleles suggests that SARS-CoV-2 acts as a second hit in susceptible patients, in a manner similar to human immunodeficiency virus and other viruses 177 . Thrombi in the renal microcirculation may also potentially contribute to the development of renal injury 179 . Management considerations While the burden of dialysis-dependent AKI at the time of discharge is low, the extent of the recovery of renal function remains to be seen. As a result, COVID-19 survivors with persistent impaired renal function in the post-acute infectious phase may benefit from early and close follow-up with a nephrologist in AKI survivor clinics, an approach previously associated with improved outcomes 180 , 181 . Endocrine sequelae Epidemiology and clinical manifestations Diabetic ketoacidosis (DKA) has been observed in patients without known diabetes mellitus weeks to months after resolution of COVID-19 symptoms 182 . It is not yet known how long the increased severity of pre-existing diabetes or predisposition to DKA persists after infection, and this will be addressed by the international CoviDiab registry 183 . Similarly, subacute thyroiditis with clinical thyrotoxicosis has been reported weeks after the resolution of respiratory symptoms 184 , 185 . COVID-19 may also potentiate latent thyroid autoimmunity manifesting as new-onset Hashimoto’s thyroiditis 186 or Graves’ disease 187 . Pathology and pathophysiology Endocrine manifestations in the post-acute COVID-19 setting may be consequences of direct viral injury, immunological and inflammatory damage, as well as iatrogenic complications.
Pre-existing diabetes may first become apparent during the acute phase of COVID-19 and can generally be treated long term with agents other than insulin, even if initially associated with DKA. There is no concrete evidence of lasting damage to pancreatic β cells 188 . Although some surveys have shown ACE2 and transmembrane serine protease (TMPRSS2; the protease involved in SARS-CoV-2 cell entry) expression in β cells 189 , the primary deficit in insulin production is probably mediated by factors such as inflammation or the infection stress response, along with peripheral insulin resistance 188 . So far, there is no evidence that COVID-19-associated diabetes can be reversed after the acute phase, nor that its outcomes differ in COVID-19 long haulers. COVID-19 also presents risk factors for bone demineralization related to systemic inflammation, immobilization, exposure to corticosteroids, vitamin D insufficiency and interruption of antiresorptive or anabolic agents for osteoporosis 190 . Management considerations Serologic testing for type 1 diabetes-associated autoantibodies and repeat post-prandial C-peptide measurements should be obtained at follow-up in patients with newly diagnosed diabetes mellitus in the absence of traditional risk factors for type 2 diabetes, whereas it is reasonable to treat patients with such risk factors akin to ketosis-prone type 2 diabetes 191 . Incident hyperthyroidism due to SARS-CoV-2-related destructive thyroiditis can be treated with corticosteroids but new-onset Graves’ disease should also be ruled out 184 . Gastrointestinal and hepatobiliary sequelae Significant gastrointestinal and hepatobiliary sequelae have not been reported in COVID-19 survivors 22 . Prolonged viral fecal shedding occurs in COVID-19, with viral ribonucleic acid detectable for a mean duration of 28 d after the onset of SARS-CoV-2 infection symptoms and persisting for a mean of 11 d after negative respiratory samples 192 , 193 , 194 , 195 . 
COVID-19 has the potential to alter the gut microbiome, including enrichment of opportunistic infectious organisms and depletion of beneficial commensals 196 , 197 . The ability of the gut microbiota to alter the course of respiratory infections (gut–lung axis) has been recognized previously in influenza and other respiratory infections 198 . In COVID-19, Faecalibacterium prausnitzii , a butyrate-producing anaerobe typically associated with good health, has been inversely correlated with disease severity 196 , 199 . Studies are currently evaluating the long-term consequences of COVID-19 on the gastrointestinal system, including post-infectious irritable bowel syndrome and dyspepsia ( NCT04691895 ). Dermatologic sequelae Dermatologic manifestations of COVID-19 occurred after (64%) or concurrent to (15%) other acute COVID-19 symptoms in an international study of 716 patients with COVID-19 (ref. 200 ), with an average latency from the time of upper respiratory symptoms to dermatologic findings of 7.9 d in adults 201 . Only 3% of patients noted a skin rash at 6 months follow-up in the post-acute COVID-19 Chinese study 5 . The predominant dermatologic complaint was hair loss, which was noted in approximately 20% of patients 5 , 26 . Hair loss can possibly be attributed to telogen effluvium resulting from viral infection or a resultant stress response 5 . Ongoing investigations may provide insight into potential immune or inflammatory mechanisms of disease 202 . Multisystem inflammatory syndrome in children (MIS-C) Epidemiology and clinical manifestations MIS-C, also referred to as pediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 (PIMS-TS), is defined by the presence of the following symptoms in people <21 years old (or ≤19 years old per the World Health Organization definition): fever; elevated inflammatory markers; multiple organ dysfunction; current or recent SARS-CoV-2 infection; and exclusion of other plausible diagnoses 203 , 204 . 
Clinical presentations of MIS-C include fever, abdominal pain, vomiting, diarrhea, skin rash, mucocutaneous lesions, hypotension and cardiovascular and neurologic compromise 205 , 206 . Overlapping features have been noted with Kawasaki disease, an acute pediatric medium-vessel vasculitis 207 . However, comparison of Kawasaki disease and MIS-C cohorts demonstrates distinctive epidemiologic and clinical characteristics. While 80% of Kawasaki disease cases occur in children <5 years of age and primarily of Asian descent 207 , patients with MIS-C are typically >7 years, encompass a broader age range and are of African, Afro-Caribbean or Hispanic origin 206 , 208 . A comparable incidence of coronary artery aneurysm and dilation has been noted among MIS-C and Kawasaki disease (20 and 25%, respectively) 206 . Neurological complications of MIS-C, such as headache, altered mental status, encephalopathy, cranial nerve palsies, stroke, seizure, reduced reflexes, and muscle weakness, appear to be more frequent than in Kawasaki disease 209 , 210 . A pooled meta-analysis of MIS-C studies reported recovery in 91.1% and death in 3.5% of patients 205 . Ongoing studies are evaluating long-term sequelae in these children ( NCT04330261 ). Pathology and pathophysiology The timing of the emergence of MIS-C (which was lagging approximately 1 month behind peak COVID-19 incidence in epicenters in Spring 2020 211 ) and the finding that most patients are negative for acute infection but are antibody positive suggest that MIS-C may result from an aberrant acquired immune response rather than acute viral infection 208 . Insights into the pathophysiology of MIS-C may be derived in part from Kawasaki disease and toxic shock syndrome, with possible mechanisms of injury related to immune complexes, complement activation, autoantibody formation through viral host mimicry, and massive cytokine release related to superantigen stimulation of T cells 205 , 211 . 
Management considerations Current recommendations include immunomodulatory therapy with intravenous immunoglobulin, adjunctive glucocorticoids and low-dose aspirin until coronary arteries are confirmed normal at least 4 weeks after diagnosis 206 . Therapeutic anticoagulation with enoxaparin or warfarin and low-dose aspirin is recommended in those with a coronary artery z score ≥ 10, documented thrombosis or an ejection fraction < 35%. Studies such as the Best Available Treatment Study for Inflammatory Conditions Associated with COVID-19 (ISRCTN69546370) are evaluating the optimal choice of immunomodulatory agents for treatment. Serial echocardiographic assessment is recommended at intervals of 1–2 and 4–6 weeks after presentation 212 . Cardiac MRI may be indicated 2–6 months after diagnosis in those presenting with significant transient left ventricular dysfunction (ejection fraction < 50%) in the acute phase or persistent dysfunction to assess for fibrosis and inflammation. Serial electrocardiograms and consideration of an ambulatory cardiac monitor are recommended at follow-up visits in patients with conduction abnormalities at diagnosis. Special considerations Racial and ethnic considerations Acute COVID-19 has been recognized to disproportionately affect communities of color 27 , 213 , 214 , 215 , 216 . A total of 51.6% of survivors in the post-acute COVID-19 US study were Black 20 , while the BAME group comprised 19–20.9% in the UK studies 22 , 24 . Only one study from the United Kingdom evaluated the association of race/ethnicity and reported that individuals belonging to the BAME group were more likely to experience dyspnea than White individuals (42.1 versus 25%, respectively) at 4–8 weeks post-discharge 24 . Rates of PTSD were similar in BAME and White participants in this study. Emerging data also suggest that COVAN may be the predominant pattern of renal injury in individuals of African descent 177 . 
MIS-C is also known to disproportionately affect children and adolescents of African, Afro-Caribbean or Hispanic ethnicity 206 , 208 . Larger studies are required to ascertain the association between sequelae of post-acute COVID-19 and race and ethnicity. These important differences noted in preliminary studies may be related to multiple factors, including (but not limited to) socioeconomic determinants and racial/ethnic disparities, plausible differences in the expression of factors involved in SARS-CoV-2 pathogenesis, and comorbidities. Higher nasal epithelial expression of TMPRSS2 has been reported in Black individuals compared with other self-reported races/ethnicities 217 . However, care must be taken to ensure that ongoing and future studies integrate and analyze information along multiple axes (for example, clinical and socioeconomic axes, resource deficits and external stressors) to prevent inaccurate contextualization 218 . The National Institute on Minority Health and Health Disparities at the National Institutes of Health has identified investigation of short- and long-term effects of COVID-19 on health, and how differential outcomes can be reduced among racial and ethnic groups, as a research priority 216 . Nutrition and rehabilitation considerations Severe COVID-19, similar to other critical illnesses, causes catabolic muscle wasting, feeding difficulties and frailty, each of which is associated with an increased likelihood of poor outcome 36 . Malnutrition has been noted in 26–45% of patients with COVID-19, as evaluated by the Malnutrition Universal Screening Tool in an Italian study 219 . Protocols to provide nutritional support for patients (many of whom suffered from respiratory distress, nausea, diarrhea and anorexia, with resultant reduction in food intake) continue to be refined 220 .
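The Malnutrition Universal Screening Tool referenced above combines three component scores (BMI, unplanned weight loss, and acute disease effect) into a summed risk band. The sketch below is a simplified, non-clinical rendering of that scoring scheme; the function names and interface are my own assumptions, and the official BAPEN tool should be consulted for actual screening.

```python
# Hedged sketch of MUST-style scoring (BAPEN). Educational only.

def must_score(bmi: float, pct_weight_loss: float,
               acutely_ill_no_intake: bool) -> int:
    """Sum the three MUST component scores."""
    # Step 1: BMI score (kg/m2)
    if bmi > 20:
        bmi_score = 0
    elif bmi >= 18.5:
        bmi_score = 1
    else:
        bmi_score = 2
    # Step 2: unplanned weight loss in the past 3-6 months (%)
    if pct_weight_loss < 5:
        wl_score = 0
    elif pct_weight_loss <= 10:
        wl_score = 1
    else:
        wl_score = 2
    # Step 3: acute disease effect (no nutritional intake for >5 days)
    acute_score = 2 if acutely_ill_no_intake else 0
    return bmi_score + wl_score + acute_score

def must_risk(total: int) -> str:
    """Map the summed score to an overall risk band."""
    return "low" if total == 0 else "medium" if total == 1 else "high"

print(must_risk(must_score(bmi=19.0, pct_weight_loss=7.0,
                           acutely_ill_no_intake=False)))  # high
```

Even without the acute-disease component, moderate BMI reduction plus 5–10% weight loss is enough to reach the high-risk band in this scheme.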
All post-acute COVID-19 follow-up studies that incorporated assessments of health-related quality of life and functional capacity measures have universally reported significant deficits in these domains, including at 6 months in the post-acute COVID-19 Chinese study 3 , 5 , 20 . Given the severity of the systemic inflammatory response associated with severe COVID-19 and resultant frailty, early rehabilitation programs are being evaluated in ongoing clinical studies (Table 2 ). Such programs have previously been validated as both safe and effective in critically ill patients with ARDS 221 , 222 , 223 and in preliminary studies in COVID-19 (ref. 224 ). Model COVID-19 rehabilitation units such as those in Italy are already routinely assessing acute COVID-19 survivors for swallowing function, nutritional status and measures of functional independence 219 . Patient advocacy groups Unique to this pandemic is the creation and role of patient advocacy groups in identifying persistent symptoms and influencing research and clinical attention. Such groups include the COVID Advocacy Exchange, the National Patient Advocate Foundation COVID Care Resource Center, long-haul COVID fighters Facebook groups, the Body Politic COVID-19 Support Group and Survivor Corps, as well as Patient-Led Research for COVID-19 ( patientresearchcovid19.com ). Surveys conducted by these groups have helped to identify persistent symptoms such as brain fog, fatigue and body aches as important components of post-acute COVID-19. Additionally, they have been instrumental in highlighting the persistence of symptoms in patients with mild-to-moderate disease who did not require hospitalization 225 . Active engagement with these patient advocacy groups, many of whom identify themselves as long haulers, is crucial 226 . Dissemination of contact information and resources of these groups can occur at pharmacies, physician offices and in discharge summaries upon hospital discharge.
Conclusions and future directions The multi-organ sequelae of COVID-19 beyond the acute phase of infection are increasingly being appreciated as data and clinical experience in this timeframe accrue. Necessary active and future research includes the identification and characterization of key clinical, serological, imaging and epidemiologic features of COVID-19 in the acute, subacute and chronic phases of disease, which will help us to better understand the natural history and pathophysiology of this new disease entity (Table 2 ). Active and future clinical studies, including prospective cohorts and clinical trials, along with frequent review of emerging evidence by working groups and task forces, are paramount to developing a robust knowledge database and informing clinical practice in this area. Currently, healthcare professionals caring for survivors of acute COVID-19 have the key role of recognizing, carefully documenting, investigating and managing ongoing or new symptoms, as well as following up organ-specific complications that developed during acute illness. It is also imperative that clinicians provide information in accessible formats, including clinical studies available for participation and additional resources such as patient advocacy and support groups. Moreover, it is clear that care for patients with COVID-19 does not conclude at the time of hospital discharge, and interdisciplinary cooperation is needed for comprehensive care of these patients in the outpatient setting. As such, it is crucial for healthcare systems and hospitals to recognize the need to establish dedicated COVID-19 clinics 74 , where specialists from multiple disciplines are able to provide integrated care.
Prioritization of follow-up care may be considered for those at high risk for post-acute COVID-19, including those who had severe illness during acute COVID-19 and/or required care in an ICU, those most susceptible to complications (for example, the elderly, those with multiple organ comorbidities, those post-transplant and those with an active cancer history) and those with the highest burden of persistent symptoms. Given the global scale of this pandemic, it is apparent that the healthcare needs for patients with sequelae of COVID-19 will continue to increase for the foreseeable future. Rising to this challenge will require harnessing of existing outpatient infrastructure, the development of scalable healthcare models and integration across disciplines for improved mental and physical health of survivors of COVID-19 in the long term.
Physicians across the country have analyzed the emerging scientific data about the long-term effects of COVID-19, creating an initial knowledge base about the clinical experiences of so-called "long-haulers"—patients with COVID-19 who experience prolonged symptoms and/or the emergence of new ones well after the initial viral infection has resolved. A comprehensive review published today in Nature Medicine offers an initial glimpse of the multi-organ effects of long-term COVID-19 and suggests a framework for the care of COVID-19 long-haulers through dedicated, multidisciplinary clinics. "It was important to respond to our patients' concerns and pay close attention to the symptoms they were experiencing beyond the acute phase of COVID-19," said Kartik Sehgal, MD, a lead author of the new study and a medical oncologist at Dana-Farber Cancer Institute and Brigham and Women's Hospital, and instructor in medicine at Harvard Medical School. "While our knowledge in this area is still evolving and will continue to evolve over the next few years, we have provided a comprehensive resource for physicians caring for patients who have recovered from acute COVID-19 and scientists currently investigating the potential prolonged effects of COVID-19." Sehgal and his colleagues at hospitals affiliated with Harvard University and Columbia University worked at the frontlines of the pandemic last spring and have witnessed first-hand the myriad health effects that the responsible virus, known as SARS-CoV-2, can inflict. That includes the recognition that COVID-19 is not simply an illness of the lungs or respiratory tract but can also affect many other organ systems. Sehgal was among the co-authors of a study published last summer in Nature Medicine that summarized the vast constellation of multi-organ symptoms associated with COVID-19.
As the pandemic wore on, the team heard from their own patients as well as patient advocacy groups about surprisingly persistent symptoms—complications that extended for weeks, even months, beyond the initial infection. "COVID-19 is the first infectious disease that I've come across that has such an effect on a wide variety of organs. It's changed my clinical practice. No matter what the patient comes in for, I now ask if they ever had COVID-19. It changes the possible range of diagnoses," added Elaine Y. Wan, MD, the Esther Aboodi Assistant Professor of Medicine in Cardiology and Cardiac Electrophysiology at Columbia University Vagelos College of Physicians and Surgeons and senior author of the study. These observations prompted the authors to undertake a comprehensive review of the published literature on long-term effects of COVID-19 so they could develop a better understanding of the condition themselves and share it with the physician and scientific community. They found that the most common symptoms of long COVID include fatigue, shortness of breath, brain fog, loss of sense of smell or taste, anxiety, depression, and post-traumatic stress disorder (PTSD). Based on the limited number of studies published to date, at least one-third of patients who required hospitalization for COVID-19 have experienced one of these long-term side effects. While there is much more work yet to be done, this finding is consistent with what has been found in the few studies of survivors of the SARS epidemic in 2003 as well as the MERS outbreak in 2012, which involved viruses that are closely related to SARS-CoV-2. "The medical needs of patients with COVID-19 don't stop at the time of hospital discharge and they also don't necessarily stop after three to four weeks, even for those who didn't require hospitalization," said Sehgal. 
"It is important for physicians to be aware of these possible symptoms and complications and to have resources available for early recognition to provide the best possible care." Because the symptoms of long COVID are not typically restricted to just one organ system, the researchers stress the importance of multidisciplinary care, such as a dedicated COVID-19 clinic that includes specialists with expertise in diverse areas of clinical medicine. "It is important to not forget about the mental health effects of COVID-19 in these patients, while taking care of their physical symptoms," said Sehgal. Some academic medical centers have already established such an effort. In response to the clinical need and the emerging research findings, the Brigham Lung Center has recently established a COVID Recovery Center to coordinate and deliver multi-disciplinary care for patients with long-term symptoms after infection. Moving forward, Sehgal and his colleagues believe it will also be essential to develop mechanisms for appropriately sharing and pooling patient data across organizations and institutions, including from patient advocacy groups, which played a major role in highlighting the health struggles of long haulers. "There are a lot of questions we'll need to answer," said Sehgal. "The only way we can do that quickly is by systematically collecting and harnessing all of the data that's out there."
10.1038/s41591-021-01283-z
Earth
Stalagmites trace climate history and impact from volcanic eruptions
Björn Klaes et al, High-resolution stalagmite stratigraphy supports the Late Holocene tephrochronology of southernmost Patagonia, Communications Earth & Environment (2022). DOI: 10.1038/s43247-022-00358-0 Journal information: Communications Earth & Environment
http://dx.doi.org/10.1038/s43247-022-00358-0
https://phys.org/news/2022-03-stalagmites-climate-history-impact-volcanic.html
Abstract Volcanic ash layers are important markers for the chronostratigraphy of paleoclimate and paleoenvironmental archives at the southern tip of South America. However, this requires that tephras are well-dated. We report geochemical data from stalagmite MA1 formed in a non-karst cave near Mt. Burney volcano in southernmost Patagonia (~53°S). High-resolution LA-ICP-MS analyses, SEM imagery, EPMA data, and NanoSIMS enable the identification of volcanogenic signals during the last 4.5 kyrs from sub-annual trace element variations and tephra particles in distinct laminae. Our new ²³⁰Th/U-chronology of MA1 provides precise dating of tephra from Mt. Burney (MB) and, probably, Aguilera (A) at 4,216 +93/−193 yrs BP (MB2), 2,291 ± 33 yrs BP (MB3), 853 +41/−60 yrs BP (MB4) and 2,978 +91/−104 yrs BP (A1). This unique high-resolution record holds potential to date further eruptions from Southern Andean volcanoes for the tephrochronology in this critical region, and potentially also large-volume explosive volcanism off South America. Introduction Severe and long-lasting impact on regional ecosystems from explosive volcanic eruptions and ash (tephra) fallout has been observed in fragile environments such as the super-humid South Patagonian Andes 1 , 2 . While geochemical fingerprinting links tephra glass shard compositions to their volcanic source and represents a valuable tool in tephrochronology 3 , the dating of distinct tephra layers enables a reconstruction of the regional history of volcanic activity and provides isochronous and stratigraphic marker horizons for the correlation of paleoenvironmental archives over large geographical areas 4 .
Despite intensive research and rapid advances in the development of tephrochronological methods in the past few decades, the accurate identification of distinct tephra layers and the precise synchronization of single volcanic events recorded in geological archives from different depositional environments remains challenging 4 , 5 . In southernmost Patagonia, the ages of widespread Holocene tephra layers from Plinian eruptions from volcanoes of the Patagonian Andes 6 in the southern segment of the Southern (41.5–46°S) and the Austral (49–55°S) Volcanic Zones were mainly deduced from ¹⁴C-dated materials of terrestrial outcrops, peat and marine/lacustrine sediment cores 3 , 5 , 7 , 8 . This tephrochronological framework constitutes the principal reference for the stratigraphy of most geological records used in paleoclimate/paleoenvironmental studies from this climatically unique and poorly explored region 9 . However, accurate age constraints for sedimentary climate and environmental archives are particularly difficult to obtain: age uncertainties and problems with the correlation of tephra layers in regional soils and sediment or peat core archives arise from variable composition, unclear stratigraphy, and diverging chronologies based on ¹⁴C-dated carbonates, bulk organic matter (OM) or single plant remains 3 , 4 . In particular, ¹⁴C-dated components of soils and sediments can have age offsets of hundreds of years due to reservoir effects caused by, e.g., the re-sedimentation and bioturbation of older OM or carbonate dissolution 10 , 11 , which have serious implications for age models. Large errors in the age determination of volcanic ash layers also arise from the interpolation of bracketing ¹⁴C dates 4 and/or age constraints based on estimated sedimentation/peat accumulation rates 12 , 13 .
Further challenges include the correct identification of the source volcano through the characteristic physical/textural properties as well as the mineralogical and geochemical composition of tephra pumice and glass shards 8 . Site-dependent, potentially intense weathering processes in acidic peatland ecosystems 14 can compromise geochemical fingerprinting methods for tephra correlation 3 . Chemical weathering over long periods strongly alters the composition of volcanic glass and embedded phenocrysts, leading to changes in the concentration of major and trace elements 15 , 16 , 17 . Despite their increasingly important role as high-resolution terrestrial archives for the reconstruction of past climate and environmental changes, only a few speleothem records have so far contributed to regional tephrochronologies worldwide 18 , 19 , 20 , 21 , 22 . This is due to the difficulty of detecting unequivocal volcanic signals in speleothems. Furthermore, speleothem layers containing tephra glass shards or even small pumice particles have — to our knowledge — not yet been documented. Paine et al. 23 , for example, emphasized that the site-specific hydrological regime and soil/vegetation perturbation may attenuate the chemical signals from explosive volcanism that are preserved in stalagmites. Moreover, chemical signals of volcanic ash deposition can outlast the actual eruptive events by millennia 1 . These effects complicate the identification of volcanic eruptions from stalagmite trace element profiling alone. Nevertheless, speleothems can be very useful for such studies because ²³⁰Th/U methods allow exceptionally accurate and precise dating, and variations in chemical elements can be detected with up to (sub-)annual resolution. Therefore, speleothems have great potential for the high-precision dating of volcanic events, comparable to ice core tephra records 18 , 24 .
Combined with stable isotope data (δ¹⁸O, δ¹³C) and trace element proxies, speleothem records enable both the dating of volcanic activity and the evaluation of possible environmental and climate impacts following the documented eruptions 18 , 19 , 20 , 21 . In this work, we analyze a non-karst speleothem (stalagmite MA1 25 ) to better constrain the Late Holocene tephrostratigraphy in southernmost Patagonia and provide evidence for soil–environmental changes related to tephra deposition under extreme climate conditions. This speleothem formed in a small cave ca. 40 km S of Mt. Burney volcano and recorded sub-annual trace element variations detected by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). These high-resolution compositional profiles were combined with a refined ²³⁰Th/U-chronology age model constructed from published 25 and additional new ²³⁰Th/U dates. Based on this improved age model, our new LA-ICP-MS data, and principal component analysis (PCA), we (i) detect and analyze pyroclastic particles incorporated into the stalagmite and (ii) correlate these particles to pristine tephra of Mt. Burney volcano by electron microprobe (EPMA) glass composition measurements. This allows us (iii) to precisely date the MB2, MB3, and MB4 tephra using the improved age model for the speleothem archive based on ²³⁰Th/U-ages. Furthermore, (iv) the MA1 record documents the environmental response to changes in the hydrological/pedological regime caused by the eruptions and their deposits. We also identify a chemical signal of the A1 eruption of Aguilera and provide a new age estimate for this eruption as well. In addition, (v) EPMA measurements, scanning electron microscopy (SEM), and nanoscale secondary ion mass spectrometry (NanoSIMS) of detrital components preserved in distinct laminae confirm the identification of pristine and reworked tephra particles and their alteration products.
Within the context of the data reported here, previously published stable isotope data (δ¹⁸O, δ¹³C) of MA1 25 provide additional constraints on short-term perturbations of the local surface hydrology and soil composition following the eruptions. Environmental context and characteristics of stalagmite MA1 The MA1 stalagmite was obtained from the Marcelo Arévalo (MA) cave 25 , situated at 20 m a.s.l. in the Magellanic moorlands at 52°41.50′S/73°16.28′W (Fig. 1a ). The cave formed in a nearly vertical tectonic fracture zone cross-cutting a plutonic sequence of coarse-crystalline granite with mylonitic orthogneiss and rare mafic dykes 26 . Granite and orthogneiss mainly comprise quartz, plagioclase, and mica. The most prominent accessory minerals are pyrite, apatite, and zircon. Rarely occurring mafic intrusives are mainly composed of hornblende, plagioclase, clinopyroxene, and a few mica. Accessory phases include, e.g., apatite and zircon. The hydrological system in this fracture zone directly connects the cave with a small catchment (~2500 m²) at 80 m a.s.l. that is characterized by acidic (pH 3.9–5.7), waterlogged peatland soils, and surrounding bare rock outcrops 26 (Fig. 1b ). The super-humid climate in this region is controlled by pronounced seasonal variations of the southern westerly wind belt 27 , 28 , 29 (SWW), present annual precipitation of ~3800 mm yr⁻¹ 30 , and mean annual temperatures of 5.3 °C 30 (recorded at the automatic weather station Arévalo; Fig. 1c ). Reconstructed annual precipitation during the Late Holocene reached up to 6500 mm yr⁻¹ 25 . Monitoring data between 2005 and 2008 reveal that cave temperatures do not differ from atmospheric values and that average relative humidity is 94% 25 . The drip water has a pH of 7.5 and elevated average drip rates (~150–250 cm³ d⁻¹) measured at the MA1 sampling site vary within the seasonal trend 25 .
An estimated maximum delay of only 4–6 weeks between major rainfall events and related drip rate increase 25 indicates a relatively high transmissivity of the fractured aquifer system due to good connectivity of the cave with the local surface hydrology in combination with the extraordinary precipitation. Stalagmite MA1 shows a rather constant mean growth rate of 67 μm yr⁻¹ (minor variations of 25–100 μm yr⁻¹) 25 and continuous growth over the last 4.5 kyrs BP (before present, year 1950 of the common era). No significant hiatus interrupting the growth of MA1 has been observed 25 . Along the growth axis, MA1 is dominated by compact calcite, whereas marginal areas and distinct sections rich in OM and detrital silicate components (volcanic glass, basement-derived mineral grains; Fig. S2 ) are more porous. However, microscopic analyses provided no evidence for CaCO₃ dissolution and/or re-crystallization 31 , 32 , 33 in these laminae. Fig. 1: Location of the study area and site characteristics. a The study site at the windward side of the southernmost Andes, ~40 km S of the Mt. Burney volcano. Inserted isopach lines indicate the thickness and distribution of tephra deposits from the MB2 and A1 eruptions 7 . In addition, Reclus and Aguilera volcanoes are shown. b Marcelo Arévalo 25 (MA) cave and its environmental context. The hydrological connection between the cave and its catchment is schematically illustrated. c Mean monthly precipitation and temperatures measured at the automatic weather station Arévalo 30 . Values were averaged from the time spans 2011–2012 and 2015–2016. The background map in a was generated with GeoMapApp 77 , 78 . The cave site is located only 40 km S of the Mt. Burney volcano (52°19′S/73°22′W, Fig. 1a ), the most active volcano of the Austral Volcanic Zone during the Late Quaternary 5 .
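To make the idea of an age-depth model concrete: given a set of dated horizons, an age can be interpolated for any depth along the growth axis. The sketch below uses naive piecewise-linear interpolation with invented tie points that are only loosely consistent with MA1 (280 mm length, roughly 4.5–4.8 kyr span, mean growth rate ~67 μm yr⁻¹); it is not the StalAge algorithm used in the paper, which is a Monte Carlo method that also yields 95% confidence limits.

```python
# Minimal age-depth model sketch: linear interpolation between dated
# horizons. Tie-point values below are hypothetical, for illustration only.
from bisect import bisect_right

# Hypothetical dated horizons: depth from top (mm) -> age (yrs BP)
depths_mm = [0.0, 70.0, 140.0, 210.0, 280.0]
ages_bp = [600.0, 1650.0, 2700.0, 3750.0, 4800.0]

def depth_to_age(d_mm: float) -> float:
    """Interpolate an age (yrs BP) for a depth along the growth axis,
    clamping to the outermost tie points."""
    if d_mm <= depths_mm[0]:
        return ages_bp[0]
    if d_mm >= depths_mm[-1]:
        return ages_bp[-1]
    i = bisect_right(depths_mm, d_mm)
    frac = (d_mm - depths_mm[i - 1]) / (depths_mm[i] - depths_mm[i - 1])
    return ages_bp[i - 1] + frac * (ages_bp[i] - ages_bp[i - 1])

# Implied mean growth rate between the top and bottom tie points (um/yr):
rate = (depths_mm[-1] - depths_mm[0]) * 1000 / (ages_bp[-1] - ages_bp[0])
print(depth_to_age(105.0), round(rate, 1))  # 2175.0 66.7
```

The implied mean rate of ~66.7 μm yr⁻¹ in this toy setup is close to the ~67 μm yr⁻¹ reported for MA1, which is why linear interpolation is a reasonable first-order mental model here even though the published chronology is far more sophisticated.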
Originating from this volcano, ten Holocene Plinian eruptions dispersed tephra layers across wide areas of southernmost South America 3 . After the large-volume MB 2 eruption, regional pristine ecosystems suffered millennium-scale damage caused by SO 2 -induced acidification associated with the deposition of >10 cm of volcanic ash 1 , 2 . By contrast, the tephra deposits from the notably smaller Mt. Burney eruptions with a volcanic explosivity index below 5 3 are only a few millimeters thick 12 , 13 and probably had minor impacts on the environment. All volcanic eruptions we address in this paper are identified and named according to the database published by Fontijn et al. 3 . Results Chronology We used StalAge 34 to construct a new age-depth model for the MA1 speleothem, updating the previous model of Schimpf et al. 25 (Fig. 2 ). Our age-depth model (Table S1 ) is now based on 16 published and 2 new 230 Th/U-ages and a slightly modified detrital correction factor for Th 35 . The applied StalAge model for MA1 covers the entire 280 mm length along the growth axis of the speleothem, representing about 4.8 kyrs. For details of the 230 Th/U-age constraints, detritus correction, and the construction of the age model, see the “Methods” section. Fig. 2: New age-depth model of the MA1 stalagmite. a StalAge 34 age-depth model, including the corresponding 95% confidence limits (thin red lines). Uncorrected ages and ages after detritus correction 35 are shown with standard deviations as 2 σ error bars (see Table S1 for details). b Photo of MA1 including the 18 sampling points for 230 Th/U dating. Element concentrations determined by LA-ICP-MS Concentrations of S, P, U, Si, Sr, and Zr — selected by virtue of the well-known composition of Mt. 
Burney tephra 7 , 8 , 36 and the environmental impact after the deposition of MB 2 ash 1 — were measured on multiple overlapping thin sections prepared along the entire length of MA1 (4.5–0.6 kyrs BP) and are presented in Figs. 3 – 5 . Average elemental concentrations were determined from a total of 59,200 LA-ICP-MS measurements for S (132 ppm), P (2212 ppm), U (2.4 ppm), Si (1400 ppm), Sr (197 ppm), and Zr (3.2 ppm). Against this background, we observe numerous high amplitude spikes identified as short (a few months up to three years) and strong variations in the deposition of these elements. Such peak concentrations reached >10 ppm for U and Zr, >400 ppm for Sr, >1000 ppm for S and P, and easily exceeded 10,000 ppm in the case of Si (Figs. 3 – 5 ). These high amplitude spikes are well correlated for S, P, U, Si, Sr, and Zr and appear highly irregular with respect to frequency and intensity along the profiles. Particularly conspicuous spikes of S, P, U, Sr, and partly of Si and Zr, occur at 4.216, 2.978, 2.291, and 0.853 kyrs BP (orange bars; Figs. 3 and 4 ). These conspicuous spikes are followed by (i) a series of mostly smaller, but equally discrete, short-term peaks for the same elements decreasing in height or by (ii) more extended periods characterized by consistently elevated concentrations from ~4.20 to 4.14 kyrs BP (gray bars; Fig. 4 ). Fig. 3: Stable isotope and LA-ICP-MS data of MA1 between 4.5 and 0.6 kyrs BP combined with the geochemistry of incorporated volcanic glass and its alteration products. a δ 18 O and δ 13 C records 25 of the MA1 stalagmite combined with concentrations of S, P, U, Sr, and Zr. 14 C-ages of tephra layers from Late Holocene eruptions of volcanoes of the Southern and Austral Andean Volcanic Zone 3 are indicated. Orange bars highlight times with compositional trace element peaks at 4.216, 2.978, 2.291, and 0.853 kyrs BP. The section labeled with n.a. has not been analyzed. 
b – d EPMA analyses of volcanic glass and glass alteration products detected in MA1 together with single-glass shard and bulk tephra data from regional tephrostratigraphic marker horizons 5 , 7 , 8 , 13 , 36 . Fig. 4: Stable isotope and LA-ICP-MS data of MA1 for time spans enclosing eruptions of Mt. Burney and Aguilera volcanoes. a – d δ 18 O and δ 13 C records 25 of stalagmite MA1 combined with concentrations of S, P, U, Sr, and Zr ( a 0.9–0.7 kyrs BP; b 2.4–2.0 kyrs BP; c 3.1–2.9 kyrs BP; d 4.3–4.0 kyrs BP). 14 C-ages of tephra layers from Late Holocene eruptions of Cerro Hudson, Aguilera, and Mt. Burney 3 are indicated. Orange bars highlight times with compositional trace element peaks at 4.216, 2.978, 2.291, and 0.853 kyrs BP. Gray bars mark time spans characterized by elevated incorporation of micrometer-scale detrital volcanic glass mobilized during storm events. Fig. 5: Variations and relationships of Si and Zr concentrations in MA1 determined by LA-ICP-MS. a – e Si and Zr records are shown in combination with corresponding binary plots for different time spans ( a 4.5–0.6 kyrs BP; b 0.9–0.7 kyrs BP; c 2.4–2.0 kyrs BP; d 3.1–2.9 kyrs BP; e 4.3–4.0 kyrs BP). The section labeled with n.a. has not been analyzed. Orange bars highlight times with compositional trace element peaks at 4.216, 2.978, 2.291, and 0.853 kyrs BP. Gray bars mark time spans characterized by elevated incorporation of micrometer-scale detrital volcanic glass mobilized during storm events. The binary plots indicate a correlation of LA-ICP-MS measurements on a mixing line between two endmembers: the Si- and Zr-free CaCO 3 and Si- and Zr-rich volcanic glass from Mt. Burney and Aguilera with Si/Zr ratios of 3300 and 3600 7 , 8 , respectively. Vectors pointing outwards from this mixing line towards quartz and/or silica gels (SiO 2 ) and zircon (ZrSiO 4 ) are included. 
The red rectangles mark the analytical background (bgr) of the LA-ICP-MS measurements (Si <5000 ppm and Zr <3 ppm). As would be expected for elements that are contributed from detrital particles sourced from tephra, Si should be closely related to Zr. Indeed, Fig. 5 shows that a large number of LA-ICP-MS measurements fall on a tight correlation trend that projects from the origin with a Si/Zr ratio of 3300 toward the composition of Mt. Burney glass with average values of 77 wt.% SiO 2 and 110 ppm Zr 7 , 8 . A 1 glass from Aguilera has a similar Si/Zr ratio of 3600 (75.7 wt.% SiO 2 and 97 ppm Zr on average 8 ); however, high Si and Zr peaks are absent in the expected section from 3.1 to 2.9 kyrs BP (Fig. 5d ). By using these Si/Zr ratios, we calculated mixing lines between two endmembers: the Si- and Zr-rich Mt. Burney/Aguilera glass and the Si- and Zr-free CaCO 3 matrix of the speleothem (blue lines in binary plots in Fig. 5 ). Concentrations for Zr and Si that are higher than the concentrations in the tephra are an artifact of normalization to Ca in LA-ICP-MS measurements that include distinct but unknown amounts of a non-carbonate detrital component (i.e., basement-derived minerals, which are lower in Ca compared to CaCO 3 ). This artifact in the concentration data is unavoidable and does not affect the characteristic Si/Zr ratio of the volcanic glass mixing line. However, a large number of analyses fall off this correlation and can be attributed to silicate minerals from basement rocks. One additional component is particularly rich in Si (and low in Zr) and identifies quartz or silica gels, whereas a few data points tend towards high Zr and are likely caused by traces of detrital zircon. These minerals have also been identified on SEM micrographs. 
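The two-endmember reasoning above lends itself to a simple screening of individual measurements. The following sketch (in Python, since the paper reports no code) classifies a point as analytical background, on the glass mixing line, or off the line towards quartz/silica gels or zircon. The Si/Zr ratio of ~3300 and the background cutoffs (Si <5000 ppm, Zr <3 ppm) come from the text; the log-ratio tolerance of 0.2 is an illustrative assumption, not a value from the study.

```python
import math

SI_ZR_GLASS = 3300.0  # approximate Si/Zr weight ratio of Mt. Burney glass

def classify_point(si_ppm, zr_ppm, tol=0.2, si_bgr=5000.0, zr_bgr=3.0):
    """Classify one LA-ICP-MS measurement against the CaCO3-glass
    mixing line. tol is an illustrative log10 tolerance; si_bgr and
    zr_bgr are the analytical background cutoffs quoted in the text."""
    if si_ppm < si_bgr and zr_ppm < zr_bgr:
        return "background"
    # Deviation of the measured Si/Zr ratio from the glass ratio, in log space
    dev = math.log10((si_ppm / zr_ppm) / SI_ZR_GLASS)
    if abs(dev) <= tol:
        return "glass mixing line"
    return "excess Si (quartz/silica gel)" if dev > 0 else "excess Zr (zircon)"

print(classify_point(33000.0, 10.0))  # on the Mt. Burney glass trend
print(classify_point(40000.0, 1.0))   # Si-rich: quartz or silica gels
print(classify_point(8000.0, 20.0))   # Zr-rich: detrital zircon
print(classify_point(1400.0, 2.0))    # near the average matrix values
```

Working in log space keeps the tolerance symmetric for ratios above and below the mixing line, which matches how such trends are read off log-log binary plots.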
In order to further characterize elemental peaks of Si, Zr, S, and other elements in the speleothem record and to test whether these represent volcanic signals, we analyzed the data set using a PCA 22 , 37 . The first three principal components (PC) explain 80.7% of the variability within the entire data set (Figs. 6 and S1 and Table S2 ). High scores for PC-1 (44.4%), PC-2 (19.4%), and PC-3 (16.9%) reflect the correlated trace element peaks as discussed above and displayed in Figs. 3 – 5 . PC-1 has high loadings of S, P, U, and Si, whereas the PC-2 loadings mirror mainly Si and Sr. Zirconium is dominantly loaded on PC-3. High PC scores, particularly of PC-1, suggest distinct depositional events at 4.216, 2.978, 2.291, and 0.853 kyrs BP. However, all PCs show high variability in score values from 4.20 to 4.15 kyrs BP and synchronous spikes during the periods 2.96–2.92, 2.27–2.17, and 0.79–0.73 kyrs BP. Fig. 6: Scores of the principal components PC-1, PC-2, and PC-3 performed on the LA-ICP-MS data set (S, P, U, Si, Sr, and Zr). The PCA scores of PC-1, PC-2, and PC-3 are shown for the time span from 4.5 to 0.6 kyrs BP. The section labeled with n.a. has not been analyzed. Orange bars mark periods in which the PC main spikes are synchronous with compositional trace element peaks at 4.216, 2.978, 2.291, and 0.853 kyrs BP (Figs. 3 – 5 ). Composition of detrital silicate components Major element compositions of detrital silicate particles embedded in MA1 were analyzed by EPMA (Fig. S2 and Tables S3 – S5 ). We detected pristine rhyolitic glass fragments in laminae at 4.21, 2.29, and 0.85 kyrs BP (78.12 wt.% SiO 2 , 1.77 wt.% K 2 O, 3.46 wt.% Na 2 O, and 0.29 wt.% MgO on average). These compositions are identical to matrix glasses from pumice originating from Mt. 
Burney, but differ significantly from known compositions of other documented widespread tephras from Hudson and Reclus volcanoes in the Southern and Austral Volcanic Zones 5 , 7 , 8 (Fig. 3b, c ). A few oscillatory zoned plagioclase and euhedral clinopyroxene crystals are also typical for Mt. Burney tephra 36 (Fig. S2c, e ). Other mineral components detected in laminae at 2.29, 1.79, 0.85, and 0.73 kyrs BP (Fig. S2 and Table S4 ) include ilmenite, apatite, pyroxene, biotite, plagioclase, quartz, and zircon. The occurrence of quartz at 0.76 kyrs BP is also confirmed by X-ray diffractometry, showing a peak at 3.344 Å (Fig. S2g ). The composition, intergrowth textures, and shape of these mineral grains indicate that they originate from a granitic/gneissic bedrock. Additional EPMA measurements on Si-rich particles found in porous fabrics can neither be related to the composition of Mt. Burney glass nor to bedrock minerals (Table S5 ). These particles show a large range in composition with no relation to any mineral stoichiometry. They are, on average, significantly lower in SiO 2 (61.39 wt.%), TiO 2 (0.08 wt.% or below detection limit), Na 2 O (0.50 wt.%), and K 2 O (0.47 wt.%) and higher in Al 2 O 3 (24.65 wt.%), FeO (5.92 wt.%), MgO (3.73 wt.%), and CaO (3.06 wt.%) compared to glass from Mt. Burney 5 , 7 , 8 . In particular, measurements from the section between 3.8 and 3.0 kyrs BP show very high Al 2 O 3 /TiO 2 (>200) and FeO/TiO 2 (>30) ratios. These values strongly exceed the comparatively low Al 2 O 3 /TiO 2 and FeO/TiO 2 ratios of the pristine volcanic glass (47.9 and 5.1, respectively; Fig. 3d ). As detailed below, we tentatively interpret these compositions as alteration products from silicate glass weathering. 
Detection of tephra particles by SEM and NanoSIMS SEM micrographs of detrital siliciclastic components, obtained after mild leaching of the CaCO 3 matrix of isolated laminae at 4.15 and 2.20 kyrs BP where PC scores and Si and Zr concentrations are high, are shown in Fig. 7a–c . Some layers were particularly rich in fragments of pyroclastic material, comprised of either vesicular glass shards (2.20 kyrs BP; Fig. 7a ) or phenocrysts glazed with a thin coating of volcanic glass (4.15 kyrs BP; Fig. 7b, c ). On thin sections, SEM micrographs reveal incorporation of a vitreous silicate component at 4.21, 2.29, and 0.85 kyrs BP (Fig. S2 ). These images document the preservation of tephra that was included in the growing speleothem. Fig. 7: Glass shards and phenocrysts from Mt. Burney tephra detected in laminae of MA1. SEM micrographs show a glass shard ( a 2.20 kyrs BP) and phenocrysts coated with residual, vesicle-rich volcanic glass ( b , c 4.15 kyrs BP) incorporated in MA1. d – f NanoSIMS secondary ion mappings indicate the presence of more intact fine-grained glass fragments framed by rims of elevated 27 Al 16 O − counts in OM-rich accumulation pools from laminae at ~3.92 and ~2.29 kyrs BP. Detritus-rich inclusions were further analyzed in situ by NanoSIMS (Figs. 7d–f and 8 ). The 30 × 30 μm measurement spots focused on non-carbonate materials accumulated in lamina sections at ~3.92 and ~2.29 kyrs BP. Elevated 28 Si − and 16 O − secondary ion counts indicate an enrichment of irregularly shaped, in part porous siliciclastic particles of 5–20 μm size with 12 C − and 12 C 14 N − detected in pores (Fig. 7d–f ). The particles themselves show a low incidence of 27 Al 16 O − counts and are framed by thin rims that are significantly enriched in 27 Al 16 O − . 
The matrix surrounding these micrometer-scale particles is characterized by the spatial distribution of strong 12 C − and 12 C 14 N − signals with dispersed Fe and Al as indicated by spots of 56 Fe 16 O − and 27 Al 16 O − . Fig. 8: NanoSIMS secondary ion mappings of detritus-rich lamina sections at ~3.50, ~3.11, and ~0.73 kyrs BP. a , b Alteration products (AP + OM) deposited during the millennium-scale acidification phase following the MB 2 eruption 1 . In both cases, at ~3.50 ( a ) and ~3.11 ( b ) kyrs BP, 16 O − , 28 Si − , 27 Al 16 O − , 56 Fe 16 O − ion counts indicate that silica gels and Al-/Fe-(hydr)oxides from the catchment were incorporated together with OM ( 12 C − and 12 C 14 N − ). c Bedrock-derived silicates from laminae at ~0.73 kyrs BP with significantly high 16 O − and 28 Si − secondary ion counts. Here, 27 Al 16 O − and 56 Fe 16 O − show a completely different spatial distribution compared to a , b or more intact glass shards displayed in Fig. 7 . NanoSIMS measurements on inclusions at ~3.50 and ~3.11 kyrs BP did not detect such pristine siliciclastic particles (Fig. 8a, b ). By contrast, areas with higher 16 O − , 28 Si − , 27 Al 16 O − , and 56 Fe 16 O − counts appear as fine flaky structures. These structures occur scattered throughout the observed areas and correlate with high 12 C − , 12 C 14 N − , and 32 S − signals, here interpreted as indicative of OM. At ~0.73 kyrs BP, particles with strong localized signals of 28 Si − and moderate 16 O − in the absence of other measured elements (Fig. 8c ) are interpreted as phenocrysts or grains of bedrock-derived mineral detritus, such as quartz (cf. identified minerals; Fig. S2 and Table S4 ). However, in this case, thin 27 Al 16 O − rims have not been observed. 
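The principal component analysis applied to the LA-ICP-MS data set in the Results can be reproduced with standard linear algebra. The sketch below is a minimal, numpy-only stand-in for the cited PCA procedure: it standardizes the six element columns and extracts scores, loadings, and explained-variance fractions via SVD. The synthetic data, column weights, and sample size are purely illustrative assumptions.

```python
import numpy as np

def pca(X, n_components=3):
    """PCA via SVD of the column-standardized data matrix.
    Returns scores, loadings, and explained-variance fractions."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # variance fraction per component
    scores = Z @ Vt[:n_components].T         # sample scores on the leading PCs
    return scores, Vt[:n_components], explained[:n_components]

# Synthetic stand-in for the (S, P, U, Si, Sr, Zr) concentration table:
# six columns sharing a common "event" signal with different weights.
rng = np.random.default_rng(1)
event = rng.normal(size=1000)
X = np.column_stack([w * event + rng.normal(scale=0.5, size=1000)
                     for w in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5)])
scores, loadings, explained = pca(X)
print(scores.shape, explained.round(2))
```

Standardizing the columns first is the usual choice when element concentrations span orders of magnitude (ppm-level Zr and U next to percent-level Si), so that no single element dominates the decomposition by scale alone.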
Discussion Identifying volcanic input by compositional spikes of characteristic elements in the MA1 speleothem record provides a means for precise dating and the correlation of volcanic ash inputs into Patagonian soils and environmental archives 9 . In order to identify such volcanic signals, we need to consider the direct primary elemental deposition from tephra, volcanic volatiles, and leached components. In addition, there may be a secondary response to tephra deposition on peatland soils that potentially affects nutrient cycling 38 , chemical leaching 15 , 16 , and soil C stocks 39 , 40 . Pristine peatland ecosystems such as those of southernmost Patagonia may be particularly sensitive to volcanic impacts from tephra-loading under extreme climate conditions 1 , 41 . Finally, the improved age constraints of tephra layers that we detected in MA1 allow us to address the impact that volcanic eruptions had on this super-humid environment. There is direct evidence for tephra deposition and alteration during the growth of stalagmite MA1. The principal elemental input from tephra comes from fine-grained glassy pyroclastic particles and is documented by synchronous and sharp peaks of Zr and Si in the MA1 record. These peaks occur at and after 4.216, 2.978, 2.291, and 0.853 kyrs BP. Our geochemical fingerprinting based on EPMA data of preserved glass shards (Fig. 3b, c ) and its characteristic Si/Zr ratio of about 3300 7 , 8 (Fig. 5 ) identify this ash component as Mt. Burney glass at 4.216, 2.291, and 0.853 kyrs BP. Only for the section around 2.978 kyrs BP, when the Aguilera A 1 eruption is known to have occurred, did our LA-ICP-MS measurements fail to identify a pristine glass component with high Si and Zr content and a Si/Zr ratio of 3600 8 that could indicate the presence of volcanic glass from Aguilera. Furthermore, the presence of tephra fragments in the stalagmite is verified by SEM micrographs of particles extracted from the speleothem (Figs. 7a–c and S2 ). 
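Flagging the short, high-amplitude elemental peaks that underpin this argument can be automated with a robust threshold. The sketch below marks points exceeding the profile median by a multiple of the scaled median absolute deviation, which is insensitive to the spikes themselves; the synthetic sulfur profile and the threshold factor of 10 are illustrative assumptions, not values from the study.

```python
import numpy as np

def flag_spikes(conc, n_mad=10.0):
    """Flag points exceeding the profile median by n_mad robust
    standard deviations (scaled median absolute deviation)."""
    conc = np.asarray(conc, dtype=float)
    med = np.median(conc)
    mad = 1.4826 * np.median(np.abs(conc - med))  # ~sigma if data were normal
    return conc > med + n_mad * mad

# Synthetic S profile: ~132 ppm background plus two >1000 ppm spikes
rng = np.random.default_rng(0)
s_ppm = rng.normal(132.0, 15.0, 500)
s_ppm[120], s_ppm[310] = 1200.0, 1050.0
print(np.flatnonzero(flag_spikes(s_ppm)))
```

Because median and MAD are barely shifted by a handful of outliers, the same threshold works whether a profile contains one spike or a cluster of them, unlike a mean-and-standard-deviation cutoff.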
The structure of observed shards and vesicle-rich glass coatings on phenocrysts resemble characteristics of the well-documented Mt. Burney tephra 1 , 7 , 8 , 13 , 36 . Indications of recurring tephra input over a time span of 70 years are also found directly after these ashfall events, in the form of continuously elevated Zr and Si contents that are well above CaCO 3 background values at 4.20, 4.18, 4.14, 2.94, 2.26, 2.20, 2.19, 0.79, and 0.77 kyrs BP (Fig. 5 ). This extended input of Zr and Si is most likely related to the deposition of fine-grained ash particles (nanotephra) that cannot be resolved by optical microscopy observations in the CaCO 3 matrix. Higher input of such fine-grained glass following the eruption events over decades can be explained by repeated periods of increased tephra reworking and mobilization into the aquifer during distinct phases characterized by storm-related high rainfall 9 , 27 . These periods coincide with decreased δ 18 O and δ 13 C values, which is in accord with higher precipitation rates 25 at that time. Owing to differences in Late Holocene rainfall 29 shown by variable δ 18 O and δ 13 C levels 25 , a climate-induced influence on the hydrological soil-to-stalagmite transport is suggested, explaining the slightly dissimilar response to tephra fallout recorded by stalagmite MA1 during 4.5–4.0, 3.1–2.9, 2.4–2.0 and 0.9–0.7 kyrs BP (Figs. 3 and 4 ). Micrometer-sized volcanic glass fragments with surface alteration rims were also observed by NanoSIMS (Fig. 7d–f ). These particles differ considerably from crystalline, bedrock-derived silicate grains (Fig. 8c ) with respect to their texture and 28 Si − and 16 O − secondary ion count levels at their location, which identifies them as volcanic glass. This is in accord with an expected low 27 Al 16 O − signal and 56 Fe 16 O − counts near zero for vitreous material, which does not exclude that they contain greater amounts of Al or Fe 42 , 43 . 
The rims rich in 27 Al 16 O − around these particles suggest the formation of amorphous, secondary alumino-silicate phases with higher ionization rates in comparison with more crystalline structures 43 , 44 , 45 . We interpret the 12 C − and 12 C 14 N − signals detected on the particles as reflecting enrichments of OM 42 in abundant micro-vesicles. This is consistent with the results of recent studies that documented pronounced glass leaching and the subsequent formation of Al-(hydr)oxides related to tephra weathering in other humid and acidic environments elsewhere 16 , 46 , 47 . The close textural relation between 27 Al 16 O − enrichments and OM ( 12 C − , 12 C 14 N − , and 32 S − 48 , 49 ) surrounding the observed particles (Fig. 7d–f ) suggests that leaching processes, e.g., microbially mediated weathering, also occur in-situ within the bio-chemical micro-environment existing at the stalagmite surface after the deposition of the tephra particles. Based on microscopic observations, we suggest that the tephra may have been fragmented mechanically while passing through 40 m of the fractured aquifer system. Subsequently, residual vitreous particles accumulated together with soil-derived OM in small pools formed at the stalagmite surface (Fig. S5e ). In these accumulation pools, glass dissolution by microbes 50 , 51 is probably encouraged. A thin glaze of CaCO 3 that differs from the surrounding calcite fabric preserved the organic and inorganic components within these bio-geochemical micro-environments (Fig. S5e ). Our data constrain the chemical signals from tephra deposition: Apart from the direct observation of pristine glass shards and their in-situ alteration products, there is also evidence of secondary effects in soils as a response to volcanic fallout. 
Various volcanogenic chemical compounds dissolve rapidly in surface environments after the direct wet deposition of acid rain from the eruption plume and/or after leaching by water from freshly deposited tephra 15 , 38 . Typical chemical compounds leached from pristine tephra are sulfates and a range of soluble salts containing other mobile and volatile metals 38 . Here we focus on S, P, Sr, and U to trace solutes potentially derived from Mt. Burney and Aguilera tephra for which the composition is well known 7 , 8 , 36 . Amongst elements released from volcanic ash after deposition, S and Sr but also P are known to reach high concentrations in surface waters 38 , and U has also been applied effectively to document long-term leaching after the MB 2 eruption 1 . The low pH of 3.9–5.4 in soils of the MA catchment 26 promotes the mobilization of water-soluble elements from volcanic glass 15 , 17 , 38 . We, therefore, interpret the short-term release of Sr and U as a signal of tephra leaching in acidic waterlogged peat soils 1 , 14 at 4.216, 2.978, 2.291, and 0.853 kyrs BP (Figs. 3 and 4 ). We associate the periodically increasing Sr and U concentrations in periods following the eruptions with the storm phase-induced incorporation of (i) leached elements from tephra in the catchment or (ii) glass and its weathering residues. The high peaks of Sr could additionally be explained by a contribution from plagioclase phenocrysts 21 , 22 embedded in MA1 (e.g., glass-coated phenocrysts; Fig. 7b, c ), which are—next to volcanic glass—a dominant component of tephra from Mt. Burney and Aguilera 7 , 36 . These chemical signals also determine the high amplitude spikes of PCs 1–3 (Figs. 6 and S1 ). We use our PCA results (Figs. 6 and S1 and Table S2 ) to discriminate different compositional domains in stalagmite MA1. PC-1 mainly reflects high concentrations of presumably volcanogenic SO 4 2− and PO 4 3− 15 , 18 , 22 . 
We expect a simultaneous mobilization of S- and P-bearing species adsorbed to or contained in pedogenic OM 52 , 53 due to the perturbation of peat soil hydro-chemistry after ashfall 1 , 2 , 41 . Accordingly, high amounts of 32 S − , 12 C − , and 12 C 14 N − secondary ion counts are correlated (Figs. 7 and 8 ) and imply a direct linkage between soil OM and S in the stalagmite 48 , 54 , 55 , 56 even though OM is obviously not directly associated with the composition of fresh tephra 38 . In addition, PC-1 implies a relatively high contribution of siliciclastic components in the form of fine-grained detritus in combination with intense leaching, mobilization, and deposition of mobile lithophile elements from rock substrate or tephra in soils. PC-2 and PC-3 primarily mirror leaching processes and siliciclastic components without a concurrent deposition of volcanogenic SO 4 2− and PO 4 3− . According to Jamieson et al. 22 , volcanic signals in a speleothem do not automatically produce a clear corresponding chemical excursion in the stalagmite stratigraphy. However, in our particular case, all three PCs are indicative of volcanic impacts on the soil-to-stalagmite transfer and deposition system in the form of pristine glass, elements leached from the eruption plume, or products issuing from tephra alteration. As would be expected from the loadings of PC-1, the main spikes of the PC-1 record coincide with peaks of the lithophile elements (Si, Sr, Zr, and U) as well as S and P concentrations (Figs. 3 – 5 ). Such high amplitude compositional spikes, significantly exceeding average concentrations, are comparable to characteristic volcanogenic signals observed in other speleothem records 18 , 20 , 21 , 22 , 23 . Therefore, it is reasonable to infer that these distinct compositional peaks mark times of trace element mobilization related to volcanic events that occurred at 4.216, 2.978, 2.291, and 0.853 kyrs BP (Figs. 3 – 6 and S1 ). 
Chemical signals appear to indicate a prolonged chemical footprint of tephra deposition (i.e., elevated Si, Sr, and Zr; Figs. 3 – 5 ) in speleothem laminae from 3.8 to 3.0 kyrs BP, similar to that at 4.216, 2.978, 2.291, and 0.853 kyrs BP. However, there is no direct evidence for tephra deposition from nearby Mt. Burney, Reclus, or Aguilera volcanoes at that time 3 , 5 , 8 . Despite our intensive search by optical microscopy, wavelength dispersive spectroscopy (WDS) element mapping, and SEM, we could not detect any intact vitreous particles between 3.8 and 3.0 kyrs BP. The measured compositions of Si-rich particles from this section of MA1 cannot be associated with the composition of fresh volcanic glass or stoichiometric basement-derived minerals (Fig. 3d and Table S5 ). Alkali metal contents are exceedingly low and pronounced Al-/Fe-(hydr)oxide enrichment, as suggested by high Al 2 O 3 /TiO 2 and FeO/TiO 2 ratios, is observed. We, therefore, interpret these high Al-Fe-Si compounds as the result of glass decomposition, where newly formed alteration products were partly mixed with surrounding CaCO 3 (higher CaO values during EPMA analysis). Based on our data, we argue that the environmental impact on soils from volcanic depositions is modulated by climate variations. Kilian et al. 1 documented a millennium-scale acidification phase in regional soils following the Plinian MB 2 eruption. This may have caused efficient chemical alteration of tephra and transport of leached elements from the catchment onto the speleothem. Accordingly, we observe correlated patterns of 12 C 14 N − , 32 S − , 28 Si − , 27 Al 16 O − , and 56 Fe 16 O − counts in NanoSIMS images of the speleothem during the period following the MB 2 eruption (Fig. 8a, b ). This indicates the accumulation of silicate weathering products together with OM 44 , 48 , 49 . 
These weathering products are typical for the decomposition of vitreous tephra in acidic soils and are likely comprised of organo-mineral complexes enriched in Al-/Fe-(hydr)oxides and silica gels 46 , 47 . These compounds preferentially form in highly reactive aquatic redox environments 57 subjected to such extremely variable climate conditions 28 , 30 . We also observe variations of δ 18 O and δ 13 C values 25 by 1–1.5‰ in the record following the events at 4.216, 2.978, 2.291, and 0.853 kyrs BP. First, the δ 18 O and δ 13 C values decrease for ca. 10 years followed by higher values lasting for ~20 years (Fig. 4 ). We interpret these isotopic changes as a combined effect of varying rainfall intensity and short-term perturbations of the hydro-chemical soil system within the catchment—subsequent to and genetically related to events of tephra deposition 21 . Environmental changes such as variations in redox-pH conditions, water-table fluctuations, and nutrient release from ash deposition may strongly affect plant decomposition, and thus, the delivery of dissolved organic matter (DOM) and organo-mineral compounds from peat soils 57 , 58 , 59 . The DOM then can percolate into cave drip waters and be recorded by the δ 13 C signatures of speleothems 20 , 54 , 55 , 56 , 57 . A close relationship between the abundance of OM that derives from DOM or mineral-associated OM in the drip water and 12 C − , 12 C 14 N − and 32 S − signals in detritus-rich laminae of MA1 is shown by the NanoSIMS measurements 48 , 49 (Figs. 7 and 8 ). Therefore, a link between detrital and volcanic input in MA1 and the fundamental changes in soil chemistry and hydrology in the catchment is clearly established. Finally, an import of elements such as S and Sr in the MA catchment area can also occur by sea salt aerosols 25 , 60 , 61 . 
It is known that sea salt contribution from the nearby Pacific Ocean through surface deposition and transfer to cave drip waters is responsible for overall elevated concentrations compared to more continental speleothems 21 , 22 , 24 , 25 , 62 . Such speleothems contain concentrations of, e.g., Sr ranging between 10 and 20 ppm 22 , 62 . Consistent and strong SWW-induced wind velocities and frequent storm events in the region 9 , 27 cause a constant deposition of sea spray-derived elements. These elements are characterized by short residence times in waterlogged peatland soils 14 , and thus, we argue that the high annual precipitation 9 , 28 , 30 leads to a persistent input of excess sea salt S and Sr signatures in MA1 compared to other speleothems. Therefore, sea salt-derived S and Sr contribution should be of secondary importance for the distinct high amplitude spikes of these elements. Their deposition would not result in the documented compositional spikes, and a close correlation that we observe with other mobile and immobile lithophile trace elements (e.g., Si, U, and Zr; Figs. 3 – 5 ) would not be expected. Using the evidence from primary and secondary volcanic signals in the speleothem record, these data have implications for tephrochronology because our results show that the MA1 stalagmite recorded distinct volcanic events for which we can determine precise 230 Th/U-ages from the refined age-depth model based on the StalAge algorithm 34 with errors represented by 95% confidence limits (Fig. 2a ). Our speleothem record shows no distinct hiatus and, because of the proximal location and high temporal resolution, it also comprises volcanogenic signals from so far insufficiently dated, notably smaller Mt. Burney eruptions with unknown ash dispersal and regional environmental impact (MB 3 and MB 4 ). 
It is important to note that — apart from stratigraphic correlation — a differentiation between fresh ash depositions and rainfall-induced soil erosion/reworking of older tephra layers 1 , 63 is necessary. Evidence for a distinct major tephra reworking event consists of high peaks of Si and Zr without a concurrent compositional volcanic signal. In addition, phases of increased reworking should also correlate with paleo-precipitation data (δ 18 O and δ 13 C) and known past environmental conditions. For instance, a major reworking event is recorded during the termination of a well-documented cold phase 9 at ~2.5 kyrs BP. Due to the environmental damage caused by the millennium-scale acidification 1 and the subsequent cold phase 9 , a series of storm events with extreme rainfall may have mobilized MB 2 ash from bare, sparsely vegetated surfaces at that time (Fig. S3 ). With respect to the rapid alteration of volcanic glass in acidic soils under humid climate 64 , the reworked material from older, surface-near tephra should not contain pristine glass, and high S concentrations would not be expected. Indeed, the peaks of S at ~2.5 kyrs BP are reduced by a factor of ten in comparison with the distinct volcanic events that we detected (Fig. 3 ). Accordingly, the LA-ICP-MS data from a section older than the MB 2 eruption (4.8–4.6 kyrs BP) indicate similar depositional events comprising Mt. Burney ash (Fig. S4 ). Unlike the other tephra signals discussed above, no published 14 C-ages exist for this time, and thus, a correlation with a Mt. Burney eruption would be highly speculative. However, a possible eruption of Mt. Burney (MBK 3 3 ) was identified based on a 1 mm thick tephra layer found in only one local sediment core with a 14 C-age of >4.96 ± 0.09 kyrs BP 13 . Here, further interpretations with respect to primary ashfall or reworked inputs must await a more detailed analysis of this oldest section of MA1. 
Among the most important findings of this study is that, based on 230 Th/U-ages, three Mt. Burney eruptions can now be dated more accurately to 4216 +93 / −193 yrs BP (MB 2 ), 2291 ± 33 yrs BP (MB 3 ), and 853 +41 / −60 yrs BP (MB 4 ). The 230 Th/U-age for the MB 2 eruption is in good agreement with the 14 C-age of 4200 ± 50 cal yrs BP published by Breuer et al. 63 , which is based on macro-plant remains included in the >10 cm thick MB 2 ash deposit. Our new MB 3 age falls into the lowermost range of bracketing 14 C-ages presented by Biester et al. 12 and Kilian et al. 13 . In both papers, the MB 3 eruption was roughly dated to ~2020 ± 90 cal yrs BP (>1830 ± 40 and <2210 ± 90 cal yrs BP 12 , >1980 ± 40 and <2060 ± 90 cal yrs BP 13 ). This new age is, therefore, more precise than previous estimates. A further Mt. Burney eruption (MB 4 ) that was only tentatively dated through stratigraphic correlation (~800 yrs BP 65 ) is now more precisely dated to 853 +41 / −60 yrs BP, based on the data of this study. Following our arguments developed above, our trace element profiling and PCA analysis (Figs. 3 – 6 and S1 ) indicate that an additional volcanic event occurred at a 230 Th/U-age of 2978 +91 / −104 yrs BP. This event could not be linked to a specific eruption by chemical correlation of tephra because no glass was detected in MA1 at this time. However, the A 1 eruption from the remote Aguilera stratovolcano has a poorly defined 14 C-age bracketed between ~3067 and 3339 cal yrs BP 7 and, therefore, it is the most likely candidate for this event. This is in accordance with the regional distribution of A 1 tephra 7 , which has also been discovered in lacustrine sediments nearby the MA cave site 13 . 
With this successful identification of volcanic eruptions, we propose that our high-resolution speleothem archive enables precise 230 Th/U-age constraints for a refinement of existing tephrochronologies, closing a significant gap in the reconstruction of the volcanic activity in South Patagonia. The variations in LA-ICP-MS trace element concentrations in this stalagmite may also record environmental impacts induced by further ashfall events from more distal volcanic centers of the Southern and Austral Volcanic Zones, although these are much more difficult to detect. For example, there are peaks of elevated S concentrations in MA1 (Figs. 3 and 4 ) at times when no eruption is documented for the nearby volcanoes. These distinct S peaks likely record atmospheric signals of more distant volcanoes, such as Reclus and Cerro Hudson and, potentially, other high-magnitude volcanic events in the southern hemisphere off South America. This opens the possibility of correlation with Antarctic ice cores 66 to evaluate large-scale southern hemispheric patterns of Late Holocene volcanism. We, therefore, conclude that the unique, sub-annually resolved speleothem LA-ICP-MS data presented here may constitute an important advance in our knowledge of the Late Holocene tephrochronology of southernmost South America. This will allow a better understanding of the volcanic history in the region and improve the reconstruction of paleoclimate and paleoenvironmental conditions from regional geological archives. Besides well-preserved tephra layers, MA1 provides a wealth of climatically and environmentally important information due to its unique nature as a non-karst stalagmite with enclosed organic and inorganic detrital components. This information includes potential feedback signals to SWW dynamics and environmental impacts of regional volcanism back to ~4.8 kyrs BP, documenting the evolution of one of the most remote and pristine ecosystems worldwide 9 , 27 . 
Methods Chronology Stalagmite MA1 was dated at the Heidelberg Academy of Sciences. Sample preparation and analytical methods were described in detail by Schimpf et al. 25 . We included two additional samples (Lab. No. 3672 and 3436, Table S1 ) that were not considered by Schimpf et al. 25 due to their large content of detrital 232 Th. Stalagmite MA1 generally contains high amounts of detrital Th. This is obvious from the 230 Th/ 232 Th activity ratios, which range from 4.257 to 93.70 (Lab. No. 3672 and 3653, Table S1 ). These values are well below the threshold of >200 above which detrital contamination is considered insignificant 67 . Detrital contamination of MA1 also results in age inversions (i.e., uncorrected ages that are not in stratigraphic order). For instance, the sample with Lab. No. 3435 at 268.5 mm distance from the top (dft) is significantly older (5.128 kyrs BP) than the sample below at 274 mm dft (4.803 kyrs BP). Schimpf et al. 25 assumed a 232 Th/ 238 U weight ratio of 4.8 ± 50% for the detritus, which resulted in a smooth age-depth relationship. This value agrees with the conventionally used value for the upper continental crust (3.8 ± 1.9 68 ). Here we used a different approach to account for detrital contamination, based on an algorithm recently published by Budsky et al. 35 . This algorithm calculates corrected ages assuming a specific 232 Th/ 238 U weight ratio of the detritus and 230 Th, 234 U, and 238 U in secular equilibrium, and then determines the 232 Th/ 238 U weight ratio of the detritus that results in a minimum sum of age inversions (in years). For stalagmite MA1, this approach yields a 232 Th/ 238 U weight ratio of the detritus of 3.69, which is close to the mean value of the upper continental crust 68 and agrees within error with the value used by Schimpf et al. 25 . As suggested by Budsky et al. 35 , we assumed a conservative uncertainty of ±50% for the determined 232 Th/ 238 U ratio of the detritus. 
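The "minimum sum of age inversions" criterion can be sketched as follows. This is a toy illustration, not the Budsky et al. algorithm itself: the real method recomputes full 230 Th/U ages for each candidate detrital ratio, whereas here a hypothetical linear age correction (with made-up sample values) stands in for that step.

```python
def inversion_sum(ages):
    """Sum of age inversions (years) for samples ordered top -> bottom.

    Deeper samples should be older, so any case where a shallower
    sample is older than the next deeper one adds to the penalty.
    """
    return sum(max(0.0, young - old) for young, old in zip(ages, ages[1:]))


def best_detrital_ratio(measured_ages, sensitivities, ratios):
    """Grid-search the detrital 232Th/238U ratio minimizing inversions.

    measured_ages -- uncorrected ages (years BP), top to bottom
    sensitivities -- hypothetical per-sample correction slopes (years of
                     age shift per unit detrital ratio), standing in for
                     each sample's detrital-Th load
    ratios        -- candidate 232Th/238U weight ratios to test
    """
    def corrected(r):
        return [a - s * r for a, s in zip(measured_ages, sensitivities)]
    return min(ratios, key=lambda r: inversion_sum(corrected(r)))


# Synthetic example: one Th-rich sample (sensitivity 400) produces an
# age inversion unless the assumed detrital ratio is close to ~3.6-3.8.
measured = [1037.0, 2530.0, 1137.0, 1168.5]   # years BP, top to bottom
sens = [10.0, 400.0, 10.0, 5.0]
grid = [i * 0.01 for i in range(0, 801)]      # ratios 0.00 .. 8.00
r_best = best_detrital_ratio(measured, sens, grid)
```

With these synthetic numbers the inversion penalty drops to zero for ratios near 3.6, mimicking how the published algorithm converges on a crust-like detrital composition.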
Figure 2a shows a comparison of the uncorrected and the corrected ages. The final age-depth model for MA1 (Fig. 2a ) was calculated in R 69 using the Monte-Carlo-based StalAge algorithm 34 , which has been specifically designed for speleothems. In contrast to the previously used Akima interpolation, StalAge provides 95% confidence limits, which allow the uncertainty of the age model to be assessed. Drip water pH The pH of cave drip water collected at the MA1 sampling site was measured on-site with a digital pH-meter during expeditions between 2006 and 2008. Mineralogical analyses The mineral composition of a powdered specimen from ~37 mm dft (Fig. S2 g) was identified by X-ray diffraction with a Siemens D500 X-ray diffractometer at the Geology Department of Trier University. The diffractogram was evaluated using the DIFFRAC plus 5.0 software from Bruker. We used the calcite peak at 3.036 Å (Cu-Kα radiation) as the internal standard. LA-ICP-MS data The MA1 stalagmite was analyzed with laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) at GZG, University of Göttingen. LA-ICP-MS elemental analyses were performed as two parallel ablation tracks on 14 overlapping, 30 μm thick and 40–50 mm long polished thin sections covering the growth axis of the stalagmite from 24 to 280 mm dft (Fig. S5a ). This approach accounts for lateral heterogeneities arising from small-scale differences during stalagmite growth. We used the same thin sections as for the EPMA presented in Schimpf et al. 25 . Figure S6 documents that the resin used for thin section manufacturing does not contain S, P, U, Sr, Si, or Zr in any significant amounts. Potential contamination of the LA-ICP-MS element concentrations presented in this study by adhesive compounds can therefore be excluded. 
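The Monte-Carlo idea behind such age-depth modelling can be illustrated with a heavily simplified sketch: perturb the dated points within their errors, keep only stratigraphically consistent draws, and read the confidence band off the accepted ensemble. The dated points below are hypothetical, and StalAge itself uses a more sophisticated fitting and outlier treatment.

```python
import random
import statistics


def interp(x, xs, ys):
    """Piecewise-linear interpolation (xs must be strictly increasing)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside the dated range")


def mc_age_model(depths, ages, errors, query_depth, n=5000, seed=1):
    """Monte-Carlo age-depth model in the spirit of StalAge (simplified).

    Each iteration perturbs the dated points within their 1-sigma errors,
    keeps only draws where age increases with depth, and interpolates
    linearly; the spread of accepted draws gives the confidence band.
    """
    rng = random.Random(seed)
    draws = []
    while len(draws) < n:
        sim = [rng.gauss(a, e) for a, e in zip(ages, errors)]
        if all(s0 < s1 for s0, s1 in zip(sim, sim[1:])):  # monotonic only
            draws.append(interp(query_depth, depths, sim))
    draws.sort()
    return {"median": statistics.median(draws),
            "lo95": draws[int(0.025 * n)],
            "hi95": draws[int(0.975 * n)]}


# Hypothetical dated points: depth (mm from top), age (yrs BP), 1-sigma error.
model = mc_age_model([24, 100, 200, 280],
                     [600, 1800, 3300, 4800],
                     [40, 60, 80, 100],
                     query_depth=150)
```

At a query depth halfway between two dated points the median lands near the linear interpolation of their ages, with the band width reflecting the dating errors.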
The ICP-MS used was a Perkin Elmer ELAN DRC II Q-ICP-MS coupled to a Lambda Physik COMPex 110 Argon Fluoride laser (193 nm) with a GeoLas optical bench by MikroLas and a small volume sample chamber. The ablation setup was operated with a beam width of 120 µm, a repetition rate of 10 Hz for the laser pulses (dwell time 40 ms per isotope, 1.078 s per sweep), and a scan speed of 20 μm s −1 . Background data were taken before and after the measurements for 2–3 min. The measurements were calibrated to the standards NIST610 and NIST612 by bracketing analyses, and the internal standard 43 Ca was used to convert the measurements into absolute concentrations. Matrix effects between NIST610/NIST612 and CaCO 3 were considered to contribute less than 10% error. Measured isotopes were 27 Al, 29 Si, 31 P, 34 S, 43 Ca, 88 Sr, 90 Zr, and 238 U. Element concentrations were normalized to the stoichiometric 400,000 ppm Ca, assuming that the stalagmite is composed of 100% CaCO 3 . The background was subtracted and the baseline drift corrected. For ablation spots on laminae with more than 10% detrital siliciclastic material (minerals/volcanic glass containing considerably less Ca than speleothem CaCO 3 ), the internal normalization to 43 Ca results in systematically higher values for lithophile elements. However, their elemental ratios (e.g., Si/Zr) are unaffected. The analytical background of Si (<5,000 ppm) and Zr (<3 ppm) represents analytical scatter at low concentrations and, possibly, a weak signal caused by continuous tephra input from reworking. This background range was defined based on the limits of detection for Si and Zr 70 and the observed scatter in our LA-ICP-MS data set. The final LA-ICP-MS record was compiled by aligning the overlapping ablation tracks using a wiggle-matching technique based on the measured Al concentrations, taking small-scale variations of lamina thickness into account. 
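The quantification steps described above (sensitivities from bracketing standards, conversion of counts to concentrations via the 43 Ca internal standard, then rescaling to stoichiometric Ca) might be sketched as follows. All counts and concentrations are hypothetical, and background subtraction and drift correction are omitted.

```python
CA_PPM_STOICH = 400_000  # ppm Ca assumed for pure CaCO3


def sensitivity(counts_std, conc_std_ppm):
    """Counts per ppm for one isotope, derived from a reference glass."""
    return counts_std / conc_std_ppm


def quantify(sample_counts, std_counts, std_conc_ppm):
    """Convert raw counts to ppm via the standard, then rescale the whole
    analysis so that Ca equals the stoichiometric 400,000 ppm."""
    raw = {iso: sample_counts[iso] / sensitivity(std_counts[iso],
                                                 std_conc_ppm[iso])
           for iso in sample_counts}
    scale = CA_PPM_STOICH / raw["Ca43"]
    return {iso: c * scale for iso, c in raw.items()}


# Hypothetical counts for a NIST-style standard and a carbonate sample:
std_counts = {"Ca43": 80_000, "Sr88": 50_000, "U238": 4_000}
std_conc = {"Ca43": 85_000, "Sr88": 500, "U238": 40}  # ppm
sample = {"Ca43": 400_000, "Sr88": 120_000, "U238": 2_000}
result = quantify(sample, std_counts, std_conc)
```

The final rescaling step is also what produces the systematic overestimation noted above for detritus-rich spots: if the ablated volume contains less Ca than pure CaCO 3 , forcing Ca to 400,000 ppm inflates every other element by the same factor, leaving ratios such as Si/Zr unchanged.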
It is characterized by sub-annual resolution, with a total of 59,200 LA-ICP-MS measurements covering a time span of 4212 years (4809–597 yrs BP). Finally, the consistency of the LA-ICP-MS data set with already published ICP-MS drill core data 25 was ensured by matching averaged concentrations of the LA-ICP-MS measurements with the contents previously determined by ICP-MS. For this purpose, we chose U as a soluble, calcite-compatible element 25 and used the original drilling diameter of 1.5 mm, together with the exact distance from the top of every single coring point, to calculate the average of the LA-ICP-MS concentrations for comparison (~290 LA-ICP-MS measurements per drill core). The LA-ICP-MS and ICP-MS U concentrations are in good agreement and correlate with r = 0.68 (Fig. S7 ). The principal component analysis (PCA) for the assessment of the speleothem high-resolution LA-ICP-MS data was performed according to Orland et al. 37 and Jamieson et al. 22 . The PCA was carried out with Origin (version Pro 2019b) to further constrain the occurrence of signals in MA1 derived from volcanic activity and was calculated for the trace elements P, S, Sr, Si, Zr, and U after z -score normalization. EPMA analyses of silicate components in MA1 The major element compositions of silicate minerals and glass shards incorporated in laminae of MA1 that formed at 4.216, 2.291, 0.853 kyrs BP and shortly after were measured using a JEOL JXA-8900 RL EPMA equipped with five WDS spectrometers at GZG, University of Göttingen. On the polished thin sections, areas of interest (AOIs) with embedded silicates (Fig. S5d, e ) in porous fabrics were selected by (i) optical microscopy, (ii) SEM imagery, and (iii) WDS elemental mappings of Si, Al, and Fe (15 kV acceleration voltage, a beam current of 60 nA, 6 μm beam diameter, a grid of 500 × 500 steps, a step size of 6–8 µm, and 60 s dwell time/step). 
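The z-score normalization and PCA step described above can be illustrated with a minimal, dependency-free sketch. The data values are hypothetical; Origin computes all principal components, whereas this sketch extracts only the leading one by power iteration.

```python
import math


def zscore_columns(rows):
    """Z-score each variable (column) of a row-major data matrix."""
    cols = list(zip(*rows))
    zcols = []
    for col in cols:
        m = sum(col) / len(col)
        sd = math.sqrt(sum((v - m) ** 2 for v in col) / (len(col) - 1))
        zcols.append([(v - m) / sd for v in col])
    return [list(r) for r in zip(*zcols)]


def covariance(z):
    """Sample covariance of z-scored data (= the correlation matrix)."""
    n, p = len(z), len(z[0])
    return [[sum(z[k][i] * z[k][j] for k in range(n)) / (n - 1)
             for j in range(p)] for i in range(p)]


def first_pc(cov, iters=500):
    """Leading eigenvector (PC1 loadings) by power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v


# Hypothetical 6-sample, 3-variable table (think S, Si, Zr): Si and Zr
# co-vary strongly (a detrital signal), while S only weakly follows them.
data = [[1.0, 10.0, 0.5], [2.0, 20.0, 1.1], [1.5, 30.0, 1.4],
        [3.0, 40.0, 2.2], [2.5, 50.0, 2.4], [1.2, 60.0, 3.1]]
pc1 = first_pc(covariance(zscore_columns(data)))
```

In this toy case the Si and Zr loadings on PC1 come out nearly equal and clearly larger than the S loading, which is the kind of co-varying "detrital" signal the PCA is used to isolate.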
Quantitative individual silicate analyses were performed at an acceleration voltage of 15 kV and a beam current of 15 nA. Depending on the size of the particles, the beam diameter was adapted between 2 and 10 µm. The counting times of the characteristic X-ray signals were chosen between 8 and 10 s (Na, K), 15 s (Si, Al, Mg, Ca, Fe), and 30 s (Ti, Mn, P). The known effect of alkali loss under electron bombardment was checked and found to be insignificant. For measurements on small grains, counting times of Na and K were reduced to 8 s. Natural (wollastonite: Si, Ca; anorthite: Al; hematite: Fe; olivine: Mg; rhodonite: Mn; albite: Na; sanidine: K) and synthetic (TiO 2 : Ti; ScPO 4 : P) reference materials were used for primary calibration. Routine analyses on secondary standards were performed on the two reference glasses VG-2 and NKT-1G (see Table S6 for details). Matrix correction was applied using the phi-rho-z algorithm of the CITZAF program 71 . We discarded glass measurements that contained more than 80 wt.% SiO 2 after normalization to 100 wt.%. SEM and NanoSIMS of incorporated detrital components Stalagmite sections formed during and after the two Plinian eruption periods of the Mt. Burney volcano (at 4.2–3.8 and 2.3–2.0 kyrs BP), during the millennium-scale acidification phase (~3.50 and ~3.11 kyrs BP) after the MB 2 eruption 1 , and at ~0.73 kyrs BP include laminae rich in detrital components. Representative samples were investigated with SEM and NanoSIMS. These samples contain the same detritus-rich laminae as analyzed by LA-ICP-MS. We chose NanoSIMS because this technique allows embedded inorganic (e.g., 28 Si − , 27 Al 16 O − ) and organic (e.g., 12 C 14 N − ) components to be differentiated from the ubiquitous, very poorly electrically conductive CaCO 3 72 , 73 . SEM imaging was conducted with a LEO 435VP and a JEOL JXA-8900 RL at the Geology Department of Trier University and the GZG, University of Göttingen. 
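The glass-data screening described above, normalization of each EPMA analysis to 100 wt.% followed by rejection of analyses with normalized SiO 2 above 80 wt.%, amounts to the following sketch (the oxide values are hypothetical):

```python
def normalize_to_100(oxides):
    """Rescale one EPMA analysis (oxide -> wt.%) so it sums to 100 wt.%."""
    total = sum(oxides.values())
    return {ox: 100.0 * w / total for ox, w in oxides.items()}


def screen_glass(analyses, sio2_cutoff=80.0):
    """Keep only analyses whose normalized SiO2 is at or below the cutoff."""
    kept = []
    for a in analyses:
        norm = normalize_to_100(a)
        if norm["SiO2"] <= sio2_cutoff:
            kept.append(norm)
    return kept


analyses = [
    # Plausible intermediate glass with a raw total of 94 wt.%: kept.
    {"SiO2": 62.0, "Al2O3": 16.0, "FeO": 5.0, "CaO": 5.0,
     "Na2O": 4.0, "K2O": 2.0},
    # SiO2-dominated analysis (e.g. a quartz grain hit): discarded.
    {"SiO2": 90.0, "Al2O3": 5.0, "FeO": 1.0, "CaO": 1.0,
     "Na2O": 1.0, "K2O": 1.0},
]
kept = screen_glass(analyses)
```

Normalizing before applying the cutoff matters: raw glass totals routinely fall below 100 wt.% (hydration, porosity), so an unnormalized threshold would screen inconsistently.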
Replicate samples were analyzed (i) on thin sections and (ii) after dissolution in 5 M ultra-pure HCl and an acidic acetate buffer solution 74 to extract incorporated detritus from distinct laminae. An acceleration voltage of 15 kV was applied to produce high-resolution images in secondary electron and back-scattered electron mode. Samples for NanoSIMS analyses were taken from the same blocks from which the thin sections were prepared (Fig. S5a ). Prior to the NanoSIMS measurements, sample documentation and the selection of AOIs were guided by optical microscopy. We preferentially chose porous laminae, in which abundant organic and inorganic components are embedded in transition zones of different fabrics, so-called accumulation pools (Fig. S5d ). NanoSIMS analyses were carried out at the Chair of Soil Science of the Technical University of Munich with a Cameca NanoSIMS 50 L using a Cs + primary ion beam with an impact energy of 16 keV. The selected stalagmite samples were polished and coated with a conductive Au/Pd layer (ca. 30 nm, Polaron Emitech SC7640 sputter coater) to mitigate charging during the measurements. Charging was additionally compensated using the electron flood gun of the NanoSIMS. Contaminants and the Au/Pd coating layer were locally sputtered away using a high primary beam current (pre-sputtering/implantation), while the reactive Cs + ions were implanted into the sample. The secondary ion yields were enhanced until they reached a steady state. The primary beam (ca. 2 pA) was focused to a lateral resolution of ca. 120 nm and was scanned over the sample, with 12 C − , 16 O − , 12 C 14 N − , 28 Si − , 32 S − , 27 Al 16 O − , and 56 Fe 16 O − secondary ions collected on electron multipliers with an electronic dead time fixed at 44 ns. Because Cs + sputtering yields negative secondary ions, elements that preferentially form positive ions, such as N, Al, and Fe, were measured as molecular ions in combination with C − and O − ( 12 C 14 N − , 27 Al 16 O − , 56 Fe 16 O − ) 43 , 49 . 
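A standard non-paralyzable dead-time correction for electron-multiplier count rates, using the 44 ns dead time quoted above, looks like this. It is a generic sketch of the textbook formula, not the exact implementation used by the NanoSIMS acquisition software.

```python
DEAD_TIME_S = 44e-9  # electronic dead time of the multipliers, 44 ns


def dead_time_correct(measured_cps, dead_time_s=DEAD_TIME_S):
    """Non-paralyzable dead-time correction: n_true = m / (1 - m * tau)."""
    loss = measured_cps * dead_time_s  # fraction of time the detector is blind
    if loss >= 1.0:
        raise ValueError("measured rate saturates the detector")
    return measured_cps / (1.0 - loss)


# At 1e6 counts per second the detector is blind 4.4% of the time,
# so the inferred true rate is ~4.6% above the registered rate.
true_rate = dead_time_correct(1.0e6)
```

The correction is negligible at low count rates and grows non-linearly as the measured rate approaches the saturation limit 1/tau.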
To accurately separate mass isobars, a suitable mass resolution was achieved with appropriate slits and apertures (D1_3, ES_3, AS_2). The secondary ions were recorded using a dwell time of 1 ms pixel −1 , with 256 × 256 pixels for a 30 × 30 μm field of view and 30 planes per scan. Single planes were corrected for dead time and drift, and then accumulated using the ImageJ software 75 in combination with the Open-MIMS plugin 76 . The sensitivity of SIMS spans more than five orders of magnitude, and the SIMS secondary ion yield depends strongly on the ionization potential and the electron affinity of each species 73 . The secondary ion intensities do not reflect the relative element concentrations of the sample, since different species have largely different ionization probabilities 42 . Furthermore, the ionization probability is very specific to the composition and chemistry of the surrounding matter, the so-called matrix effect 73 . Due to this effect, the ion yield of one element can be enhanced while the ion yield of another element is suppressed under the same sputtering conditions 49 , 73 . Caution is therefore needed when interpreting NanoSIMS data on a specific mineral phase. As the count rate of an ion is not directly proportional to the elemental concentration in the mineral, a linear relationship between count rate and elemental concentration cannot be assumed 42 , 43 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data sets generated during the current study can be obtained from the Zenodo repository ( ). The LA-ICP-MS data may also be found in the online version of this article (Supplementary Data 1 ).
The soils and vegetation of Patagonia's fjord regions form a unique and highly sensitive ecosystem that is closely linked to marine ecosystems, sediment deposition and carbon storage in the ocean. A research team, including the University of Göttingen, has been working on reconstructing the climate history of this region in this extremely wet, rainy and inaccessible fjord and island zone of the Patagonian Andes in southern Chile. Due to its location, the area is a key region for understanding the history of the southern westerly wind belt within the global climate system. The results were published in the journal Communications Earth & Environment. The research, in collaboration with the University of Trier, is based on extensive soil analyses and, above all, the detailed geochemical analyses of a stalagmite that is around 4,500 years old, which was recovered from an almost inaccessible cave. "This stalagmite is the southernmost limestone deposit of its kind ever found," says Professor Gerhard Wörner of the Geoscience Center at Göttingen University. "Its fine and detailed stratification enables us to document the chemical composition of the stalagmite at high temporal resolution." Since the stalagmite formed over a long time from surface waters that seeped into the cave, this geological "archive" makes it possible to reconstruct the climate-driven chemical processes in the peaty soils at the Earth's surface above the cave. Field work on the glaciers of Patagonia. Credit: R Kilian It turns out that the transport of chemical compounds from the peatlands to the fjords of southern Patagonia is particularly closely coupled with natural processes in the delicate soil ecosystems, which react highly sensitively to climate fluctuations and the input of volcanic ash from nearby active volcanoes. "It was a surprise to discover actual remnants of volcanic dust from eruptions of nearby volcanoes in the soil. 
In fact, tiny volcanic particles were detected embedded in the stalagmite from the cave," Wörner explains. The effects of volcanic deposition can also be documented through geochemical anomalies in the stalagmite, such as elevated sulfur concentrations, and can even be attributed to individual volcanic eruptions by dating the stalagmite layers. These volcanic deposits are of fundamental importance for the chemical processes in the peatlands of Patagonia and have a particularly strong effect under the influence of the extreme precipitation in the region. "These effects range from substantial destruction of vegetation after large eruptions to a possible fertilizing effect on the ocean as a result of nutrients released after smaller eruptions," Wörner adds. Section through the MA-1 stalagmite from Arevalo Cave showing the fine layering of limestone. These deposits are a geochemical archive of the changing environmental conditions at the Earth's surface over more than 4,000 years. Credit: R Kilian/G Wörner
10.1038/s43247-022-00358-0
Biology
Cryo-electron microscopy opens a door to fight Epstein-Barr
Ana Cuervo et al. Structures of T7 bacteriophage portal and tail suggest a viral DNA retention and ejection mechanism, Nature Communications (2019). DOI: 10.1038/s41467-019-11705-9 Cristina Machón et al. Atomic structure of the Epstein-Barr virus portal, Nature Communications (2019). DOI: 10.1038/s41467-019-11706-8 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-11705-9
https://phys.org/news/2020-02-cryo-electron-microscopy-door-epstein-barr.html
Abstract Double-stranded DNA bacteriophages package their genome at high pressure inside a procapsid through the portal, an oligomeric ring protein located at a unique capsid vertex. Once the DNA has been packaged, the tail components assemble on the portal to render the mature infective virion. The tail tightly seals the ejection conduit until infection, when its interaction with the host membrane triggers the opening of the channel and the viral genome is delivered to the host cell. Using high-resolution cryo-electron microscopy and X-ray crystallography, here we describe various structures of the T7 bacteriophage portal and fiber-less tail complex, which suggest a possible mechanism for DNA retention and ejection: a portal closed conformation temporarily retains the genome before the tail is assembled, whereas an open portal is found in the tail. Moreover, a fold including a seven-bladed β-propeller domain is described for the nozzle tail protein. Introduction The order Caudovirales comprises the largest number of biological entities on Earth. They are bacterial viruses characterized by an icosahedral capsid, enclosing a double-stranded (ds) DNA, with a tail. These phages share a common assembly pathway of prohead formation and genome packaging with herpesviruses 1 , 2 , 3 . Mechanisms for DNA incorporation and ejection show a number of similarities based on the existence of a machinery built by several components, including the portal protein or head-to-tail connector, motor proteins that provide energy-dependent DNA translocation (terminases) 4 , and, in phages, the tail complex 5 . In the case of the Podoviridae family, bacteriophages have a short, non-contractile tail, which generally comprises an adaptor and a tubular nozzle or knob, with a plug to prevent DNA leakage. The other tail components are the fibers (or spikes), which are responsible mainly for bacterial receptor recognition 6 , 7 . 
Phage portal proteins are key viral components located in a single pentameric vertex of the capsid and they act as initiators of capsid assembly. They are also critical components of the DNA-packaging complex and are involved in tail assembly 1 , 2 , 8 . In spite of a lack of extensive sequence similarity, all portal structures solved to date for Caudovirales (phi29 9 , 10 ; SPP1 11 ; P22 12 ; T4 13 ) share common morphological features, including a conical channel along the longitudinal axis and a conspicuous ring made of 12 subunits 1 , 2 , 8 . DNA packaging into preformed proheads requires the interaction of the portal protein with the terminase, which generates forces involved in the processive translocation of dsDNA into the viral capsid, where it is stored at quasi-crystalline concentration 1 , 14 , 15 . Both in phages 16 , 17 , 18 and in herpesviruses 19 , the interaction of the packaging terminase occurs at a region of the portal protein that extends outside the capsid shell through the portal vertex. After completion of the packaging, the DNA stored inside the capsid undergoes considerable stress due to mechanical strain induced by bending, as well as extensive repulsive electrostatic interactions 20 , 21 , 22 . In phi29 and SPP1, the DNA interacts with positively charged residues in the central channel of the portal protein, which have been proposed to contribute to stabilize the DNA inside the capsid prior to tail assembly 9 , 22 , 23 . The DNA is permanently stabilized inside the capsid, after the release of the terminase, by the subsequent incorporation of a dodecameric adaptor complex and the rest of the tail machine 24 , 25 . In phage P22, the gp4 adaptor ring interacts at the outer tip of the portal protein, called the clip, and has a long C-terminal helix that extends onto the outer surface of the portal protein monomer–monomer interface 12 . 
Although the structure of the isolated adaptor protein of phage Sf6 (gp7) shows a very similar arrangement to that of P22 gp4 and other adaptor proteins 12 , 26 , it shows differences in the relative position of the first α-helix. These observations suggest that this conformational change might be related to the structure before and after assembly in the mature phage 26 . In phage T7, the formation of the tail starts by the assembly of the gp11 adaptor toroidal ring, after which protein gp12 assembles on the distal side of the adaptor to build the hexameric nozzle 27 . The interaction of the adaptor and the nozzle generates the six regions where the fibers (trimers of gp17) assemble to render the final functional viral particle 27 , 28 . Podoviridae nozzles present distinct conformations during DNA ejection. In T7, interaction of the fibers with the bacterial receptor triggers a conformational change by untwisting the nozzle monomers, which results in the opening of the channel required for DNA release 28 , 29 . Although the precise details and molecular mechanisms of DNA release are not fully known, similar conformational changes in the nozzle have also been characterized in phage P-SSP7, a relative of T7 30 . In P22, the tail machinery also undergoes conformational changes during bacterial adsorption 31 . Here we report a number of crystal and high-resolution cryo-electron microscopy (cryo-EM) structures of the T7 bacteriophage portal (gp8). The structures show two conformations—open and closed—of the portal and suggest a possible mechanism of the channel valve that regulates DNA passage. Moreover, we describe the atomic structure, determined by cryo-EM, of the 1.5 MDa T7 tail complex (gp8-gp11-gp12), thus characterizing the whole ejection channel. In particular, the tail nozzle (gp12) shows an unexpected fold with six β-propellers, which are essential to tightly close the channel gate in the mature phage. 
All these structures, associated with different states of the infection cycle, support a mechanism underlying DNA retention inside the capsid and its ejection during infection. Results Structure of the T7 bacteriophage portal Several X-ray data sets from various crystal forms of the T7 portal in its dodecameric (12mer) and tridecameric (13mer) forms were collected. Although the portal protein is always found as a dodecamer in virions, assemblies containing 11–13 subunits have been described for other portals when expressed under non-physiological conditions 13 , 32 . Despite extensive efforts, all attempts to determine the phases of any of the data using heavy-atom methods failed. The T7 portal structure was finally solved by combining cryo-EM and X-ray crystallography. An initial map of the tridecameric form of the gp8 protein at 5.8 Å resolution was obtained by cryo-EM (gp8-13mer EM ; Supplementary Table 1 and Supplementary Fig. 1 ). The ring-shaped volume allowed the ab-initio building of a partial poly-alanine model composed mainly of α-helices, which contained 36% of the structure. This partial model was later used for phasing a 3.4 Å resolution gp8-13mer crystallographic data set by molecular replacement and a full atomic model was built into the resulting electron density map (gp8-13mer cryst ; Supplementary Table 2 and Supplementary Fig. 2 ). The gp8 monomeric structure was then used for phasing the gp8-12mer 3.6 Å resolution crystallographic data (gp8 closed , see Methods section), which yielded an atomic model of the dodecameric portal (Supplementary Table 2 ). The overall shape of the T7 portal protein shows a ring-like assembly formed by 12 subunits and with an axial central channel. The external diameter of the particle is 170 Å, whereas its height is 110 Å. The diameter of the channel (measured between opposite Cα atoms) is 23 Å at its narrowest section, which would hinder the passage of a B-DNA molecule (Fig. 1 ). Fig. 
1 Structure of the T7 bacteriophage portal. a Ribbon representation of the gp8 closed structure of the portal with rainbow coloring by monomer. The dimensions of the particle are indicated. Left, lateral view; right, axial view. b Ribbon representation of a monomer of the gp8 closed structure, colored by domains, with relevant secondary structure elements and structural features indicated. c Electrostatic potential on the external (up) and inner channel (down) surfaces of the gp8 closed portal. A thin positively charged channel ring, formed by R368, is indicated. d Detail of the superposition of gp8 closed (blue ribbon) and gp8 open structures (orange ribbon). The conformational change of the channel valve is indicated with an arrow and maximum torsion angle is shown. e Comparison of the two conformations by showing two opposed monomers. Left, closed conformation shown in blue ribbon. Right, open conformation shown in orange ribbon. The crown domain is not visible for the open conformation and has not been shown for the closed connector for the sake of clarity. The structure contains four domains equivalent to those found in other viral portals: the stem, the clip, the wing, and the crown (Fig. 1b ). The wing, which has a unique conical shape not found in any of the previously described bacteriophage portals, is the largest domain and it protrudes outwards at the middle section of the assembly. It contains the N-terminal end of the protein, which is located at the outer surface. The wing is built of six α-helices, three of them forming an up–down α-bundle, and a β-sandwich formed by two perpendicular β-sheets of seven and three anti-parallel β-strands, respectively. This fold is reminiscent of the SH3 domain, as already described for the phi29 portal 9 . In addition, there is a two-stranded anti-parallel β-sheet at the wing/crown cleft. 
The clip, which is found at the “bottom” of the particle and points toward the exterior of the viral capsid, contains three β-strands, which perform intra- and inter-subunit interactions, and a short α-helix. The stem connects the wing and the clip, each monomer comprising two tilted α-helices (α7 and α9) in an anti-parallel disposition, forming a double-layer 24-helix ring around the channel in the particle. A 39-residue helix, α10, perpendicular to the channel axis, connects the channel with the outer part of the wing domain. The tunnel loop, a singular feature also found in other portal particles, lies between α9 and α10 and is only partially observable in the gp8 dodecameric structure due to its intrinsic flexibility. Finally, the C-terminal part of the protein forms the crown domain, which in the mature virus is located inside the capsid shell and interacts with the core proteins. A deep cleft separates the wing and crown domains, which are connected only by a hinge at G434 between strand β13 of the wing and helix α11 of the crown. This feature confers some freedom of movement to the crown domain around that point, which is also supported by the fact that monomers from the dodecameric and tridecameric structures are very similar, except for the relative position of the crown domain with respect to the rest of the protein (root mean squared deviation (RMSD) 0.849 Å). The crown was predicted to contain long helical structures, resembling the barrel domain described for other viral connectors 12 , but it was found to be partially disordered in all our structures, with the last 42 residues not visible in the density maps. The upper part of the crown may become ordered only upon interaction with the core viral proteins. Regarding the shape of the portal central channel, there are two cavities, “upper” and “bottom”, with a conical and an inverted conical shape, respectively, separated by the protrusion of α10 toward the center of the channel. 
The electrostatic potential of the protein is markedly negative, both on the external and inner surfaces, except for the exterior of the clip and the tunnel loop, due to R368 (Fig. 1c ). α10 acts as a valve, opening and closing the channel A second cryo-EM data set yielded a higher resolution map than the initial one used for phasing. This map at 4.1 Å resolution shows a different conformation of the 12mer portal (gp8 open ; see below, Supplementary Table 1 and Supplementary Fig. 3 ). The overall structure is similar to that already described (RMSD 1.03 Å), with the exception of the conformation of the channel α10 helix, which is kinked in the middle and deviates “upwards,” instead of pointing perpendicular toward the channel axis (Fig. 1d ). The kink occurs at a region containing two adjacent glycine residues (G386 and G387), which provide the necessary flexibility to change the orientation of the N-terminal half of the helix. The movement represents a large swing of 90°. The density corresponding to the tunnel loop is partially visible and reveals that it is in an extended conformation, allowing the connection of the now more distant α9 and α10 helices. Although it was previously hypothesized that a kink on the tunnel loop helix may be related to its ability to adopt distinct conformations during DNA packaging and retention 11 , here we observe such a notable conformational change experimentally. This conformational change results in a substantial increase in the channel diameter, from 23 Å (α10 extended) to 53 Å (α10 kinked). Therefore, two clearly different conformations of the T7 connector were defined: one closed (gp8 closed ) and the other open (gp8 open ) (Fig. 1e ). In gp8 open , the kinked part of helix α10 occupies the “upper” cavity of the channel or α10 housing cavity (Fig. 1b ), thus defining a larger conical channel instead of the two inverted conical cavities observed in gp8 closed . 
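The RMSD values quoted in this section (e.g. 1.03 Å between the two portal conformations) measure the average per-atom displacement between superposed structures. A minimal sketch over already-aligned coordinates follows; the atoms are hypothetical, and real structure comparisons first optimize the superposition itself (e.g. with the Kabsch algorithm) before computing this quantity.

```python
import math


def rmsd(coords_a, coords_b):
    """RMSD (angstroms) between two already-superposed coordinate sets,
    each a list of (x, y, z) tuples of equal length."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate sets must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))


# Hypothetical three-atom example: a uniform 1 A shift along x gives RMSD 1 A.
a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 2.0, 0.0)]
b = [(1.0, 0.0, 0.0), (2.5, 0.0, 0.0), (1.0, 2.0, 0.0)]
```

Because the deviations are squared before averaging, a small global RMSD such as 1.03 Å can still coexist with large local displacements, which is exactly the situation described here for the swinging α10 helix.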
Although the first structure described in this work (gp8 closed ) would impede DNA passage, the second structure (gp8 open ) would allow it. The existence of two well-defined portal conformations allows this protein to act as a valve at the portal pore, regulating the passage of DNA into the capsid.

Structure of the T7 tail machine

The T7 tail machine is composed of four proteins 27 : the portal (gp8), the adaptor (gp11), the nozzle (gp12), and the fibers (gp17). We solved the structure of the fiber-less tail (1.5 MDa) by cryo-EM at 3.3 Å resolution (gp8-gp11-gp12; Fig. 2a , Supplementary Table 1 , and Supplementary Fig. 4 ). This complex shows a tubular conical shape, 293 Å long and 175 Å wide, organized into two 12-fold rings (gp8 and gp11) and a 6-fold nozzle (gp12). The structure presents two invaginations on the external surface. The first, placed between the portal and the adaptor, serves for capsid docking, whereas the second, between the adaptor and the nozzle, is the interaction surface of the fibers (Fig. 2 ).

Fig. 2 Structure of the T7 tail. a Ribbon representation of the cryo-EM T7 tail structure (gp8-gp11-gp12). Gp8 portal, gp11 adaptor, and gp12 nozzle proteins are shown in purple, green, and orange, respectively. b Longitudinal cut of the electrostatic potential surface

The central channel of the tail is closed at the hexameric nozzle by different gates that retain the DNA inside the capsid in the mature phage. This channel is mainly negatively charged, a feature that has been proposed to be essential to avoid DNA sticking during ejection 8 (Fig. 2b ). The portal protein present in the tail was traced using the free gp8 protein, solved as described above, as a template. The channel valve of the portal was found in its open conformation, which should allow the free passage of DNA into the ejection channel.
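As a back-of-the-envelope check, the two valve states can be compared with the width of the translocating genome. Note that the ~20 Å diameter of B-form dsDNA is a textbook value assumed here, not a figure from this paper:

```python
# Reported channel apertures for the two valve states, compared against the
# canonical ~20 Å diameter of B-form double-stranded DNA (assumed value).
B_DNA_DIAMETER = 20.0  # Å, approximate textbook figure

apertures = {
    "gp8_closed (alpha10 extended)": 23.0,  # Å, reported in the text
    "gp8_open (alpha10 kinked)": 53.0,      # Å, reported in the text
}

clearances = {state: d - B_DNA_DIAMETER for state, d in apertures.items()}
for state, c in clearances.items():
    print(f"{state}: ~{c:.0f} Å of clearance around the duplex")
```

The closed state leaves only ~3 Å of clearance, consistent with the aperture being "very similar to the diameter of the DNA" as noted in the Discussion, whereas the open state leaves ample room for translocation.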
Superimposition of the gp8 closed structure found in the free portal with the gp8 open portal present in the tail complex revealed, beyond the channel valve movement, further movements affecting various substructures of the protein (Fig. 3a , Supplementary Video 1 , and Supplementary Video 2 ). The largest movement occurs at the crown (6 Å) and at the hinge within the wing/crown cleft, correlated with the opening/closing of the portal valve, as the valve helix α10 in the gp8 open conformation would clash with the hinge in the gp8 closed conformation. There is also movement of the clip (5 Å), caused by the interaction with the gp11 adaptor in the assembled tail. This movement seems to be transferred “upwards” through the stem helix α9 (2 Å), possibly pushing the valve up to its open state.

Fig. 3 Atomic structures of T7 tail proteins. a Superposition of the gp8 open as in the tail complex (orchid purple) and gp8 closed (blue) portal monomers. Distances and angles of the moving regions are indicated. b Ribbon representation of the gp11 adaptor protein monomer. The different domains are colored and labeled. c Ribbon representation of the gp12 nozzle protein monomer. The different domains are colored and labeled

The gp11 adaptor protein assembles onto the portal clip region, forming a 12-fold conical ring. The structure of the monomer is composed of five α-helices and five β-strands, divided into three domains as follows (Fig. 3b ): (i) an α-helix bundle (α1–α4), which creates a wide central channel in the dodecamer with a markedly electronegative surface, 40 Å in diameter at its narrowest section and without constrictions (Fig. 2b ); (ii) a fiber dock, which constitutes one of the fiber interaction regions (see below for a second fiber dock in the tail nozzle); and (iii) a C-terminal embracing helix, which surrounds the portal protein stem.
Despite the lack of sequence homology, all adaptor proteins described to date present the same organization of four α-helices in an up–down bundle (Supplementary Fig. 5 ) 12 , 26 , 33 , 34 , 35 . In the case of T7, the α-helix bundle is stabilized by a disulfide bond between α3 and α4 (C133–C171). The fiber dock domain is less common and is not present in the HK97, SPP1, P22, or Sf6 bacteriophages, probably due to the different morphology of the tail and fiber interactions in these viruses. The fiber dock has a triangular shape and is formed by five anti-parallel β-strands disposed in a small jelly-roll β-barrel. The C-terminal α5 that forms the embracing helix points toward the portal protein; this helix is replaced by a flexible stretch in other bacteriophages (Supplementary Fig. 5 ) 26 . The ring assembly relies mainly on electrostatic interactions between monomers, which have a bipolar surface with one side mainly electronegative and the other electropositive (Supplementary Fig. 5 ) 26 . The gp12 nozzle protein forms a hexamer attached to the bottom of the adaptor protein. Each monomer is composed of 62 β-strands and 2 α-helices, one of which forms part of the nozzle tip together with a 3₁₀ helix (Fig. 3c ). The gp12 fold shows no structural similarity to any bacteriophage tail protein previously reported. It is organized around a large central β-propeller domain. Three other domains are also present: (i) the platform, which interacts with the adaptor protein; (ii) the fiber dock, which is the fiber interaction domain (the fiber fits in between the nozzle fiber dock and the adaptor fiber dock); and (iii) the nozzle tip domain at the most distal part (Fig. 3c ). The central β-propeller domain has a diameter of ~40 Å, arranged into seven blades, each with the characteristic four anti-parallel β-strands (Fig. 3c and Supplementary Fig. 6 ). The propeller is open at two points.
The “upper” opening point is stabilized by the so-called velcro closure in blade 1 36 , where the last β-strand of the blade is actually the first strand in the sequence of the propeller, thus gluing the N- and C-termini of the domain. The propeller is further stabilized in this region by a disulfide bond between residues C504 and C554, which belong to blades 7 and 1, respectively (Supplementary Fig. 6 ). The “bottom” opening is in fact a loop of blade 3 that extends out and leads to the nozzle tip domain. It is stabilized, among other interactions, by hydrophobic contacts at the base of the propeller (Y146, Y281, and W297) (Supplementary Fig. 6 ). Although there is no significant sequence homology, structure prediction suggests the possible existence of a β-propeller domain in the P22 tail protein gp10 (I-TASSER, data not shown). As this protein is one of the most conserved genes in all Podoviridae 37 , it is possible that β-propeller domains play a common role in DNA ejection. The nozzle tip domain is a β-barrel (with an α-helix instead of one of the barrel β-strands), followed by a two-turn 3₁₀ helix, which is the very tip of the tail and thus protrudes toward the bacterial outer membrane during infection. A similarity fold search of this domain using Dali 38 retrieved, among others, the gp11 protein of the T4 bacteriophage base plate, even though they do not have any sequence similarity. T4-gp11 is essential for DNA ejection and plays a key role during fiber attachment 39 . In addition to the nozzle tip β-barrel, there is a three-stranded β-sheet that acts as an intervening structure between the nozzle tip and the β-propeller. The fiber dock connects the end of the β-propeller to the platform domain and is situated at the outer surface of the assembly (Fig. 3c ). This domain is triangular and is composed of a small α-helix and six anti-parallel β-strands, forming a jelly-roll β-barrel.
Finally, the C-terminal domain forms the platform, which includes nine long anti-parallel β-strands forming another, larger, jelly-roll β-barrel. Of note, three of the strands of this barrel belong to the 45-residue N-terminus of the protein. This observation suggests a possible circular permutation event in which the C-terminus of the gene has moved to the beginning of the sequence. In the hexameric nozzle structure, it is apparent that the elongated protomers are highly twisted. The twist is left-handed and of about 45° when comparing the proximal and distal sections of the nozzle.

Characterization of tail portal and adaptor interaction

The presence of the portal ring is necessary to build the gp11 dodecameric ring; otherwise, gp11 behaves as a monomer 27 . Gp11 interacts at the “bottom” of the gp8 portal by strangling its clip domain between the embracing helix, the C-terminus of the preceding helix (which forms the internal channel surface of the adaptor), and the fiber dock, knitting an extensive network of electrostatic interactions and hydrogen bonding. In fact, the clip domains fit inside the inner channel of the adaptor, at its wider “upper” side. As the monomers are inclined in different directions in gp8 and gp11, a single gp11 monomer interacts with four gp8 monomers (Fig. 4a ), as also described for bacteriophage P22 12 . A gp11 monomer presents a larger interaction surface with gp8 (1771 Å 2 ) than with its adjacent gp11 monomer (1435 Å 2 ). This observation, together with the poor hydrophobic inter-monomer contacts in gp11, would explain why this protein readily forms an oligomeric ring on gp8 but remains a monomer in its absence. There are two main interaction regions between gp8 and gp11 (Fig. 4a ): one at the clip of the portal, where residues in the gp11 fiber dock (E173, E175, and D177) interact with the gp8-positive belt in the clip domain (Q307, R309, R310, and K313) (Fig.
4b ), and the other at the portal stem, where the gp11 embracing helix residue R196 reaches portal α7 (residues D275 and D278) (Fig. 4b ). A similar interaction at the portal “bottom” was described for the T4 portal-terminase complex 13 , suggesting that both the terminase and the adaptor proteins use a similar docking mode onto the portal. The second interaction point might be accessible only after full DNA packaging into the capsid, when DNA pressure extrudes the T7 portal from the capsid 40 (see below).

Fig. 4 Tail portal-adaptor interaction. a Ribbon representation of the gp8 portal and gp11 adaptor proteins in the tail complex. Left, binary gp8-gp11 complex; right, close-up of the interaction region. To facilitate the interpretation of the image, only one subunit of the gp11 oligomer is shown in the close-up view. The adaptor subunits are shown in dark green and light blue; the four subunits of gp8 interacting with the gp11 subunit highlighted in the right panel are shown in plum, magenta, purple, and pink, whereas the rest of the subunits are indicated in grey. b Surface charge distribution of the portal/adaptor interaction. To facilitate the interpretation, a single monomer of the gp11 protein is shown. Left, view from outside the complex, showing the portal electrostatic surface and a ribbon representation of gp11. Right, view from inside the channel, showing the adaptor electrostatic surface and a ribbon representation of the portal protomers

gp12-nozzle assembly and its role in securing DNA

The nozzle protein is assembled as a tubular hexamer at the virus tail, lowering the 12-fold symmetry of the portal and the adaptor to the 6-fold symmetry characteristic of the tail machinery 27 . The gp11 adaptor and gp12 nozzle present a large network (1252 Å 2 ) of electrostatic interactions between them.
The “bottom” part of the adaptor is mainly electronegative, whereas the “upper” part of gp12 is highly electropositive (Fig. 5a ). In order to switch the symmetry from 12 to 6, each gp12 platform domain has to interact with two gp11 monomers (Supplementary Fig. 6b ). The two gp11 monomers engage acidic residues from the loop between helices α1 and α2 (E26, D35, D39) for this interaction; the loop is found in two alternating conformations lying on distinct regions of the gp12 platform, so that each of the two contiguous gp11 monomers interacts with a different set of positive residues of gp12 (Supplementary Fig. 6b ).

Fig. 5 Structural characterization of the nozzle protein. a Electrostatic potential surfaces of the interacting region between the gp11 adaptor (top) and gp12 nozzle (bottom). b Ribbon representation of the gp12 protein structure in the tail complex in orange, with the four closing gates highlighted in dark blue. Diameters of the channel at each of the gates, measured from Cα to Cα, are as follows: gate 1, 18.3 Å; gate 2, 8.6 Å (from carbonyl O to carbonyl O); gate 3, 13.8 Å; and gate 4, 23 Å. c Detail of the gp12 closing gate 2, with the map density shown in mesh and the protein atomic model in orange ribbon. Left, lateral view; right, axial view

The oligomeric assembly strategy of gp12 differs greatly from that of gp11 and gp8, the latter two showing a bipolar distribution of the electrostatic charges on either side of the protomer. In contrast, the gp12 charge distribution is more heterogeneous and the inter-subunit interactions are held mainly at three distal points, namely the platform, the β-propeller, and the nozzle, leaving the rest of the structure relatively loose and free for potential movements. The strongest interaction point between monomers is located at the β-propeller domain, through the protruding loops between the blades.
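The 12-to-6 symmetry reduction implies a 2:1 stoichiometry at the adaptor–nozzle interface. A toy sketch of this pairing (the subunit indexing is hypothetical, chosen only for illustration):

```python
# Toy model of the C12 (gp11) -> C6 (gp12) symmetry mismatch: each nozzle
# protomer n engages two adjacent adaptor protomers, here indexed 2n and 2n+1.
pairs = {n: (2 * n, 2 * n + 1) for n in range(6)}
# Angular positions of the engaged gp11 subunits on the C12 ring (30° apart)
angles = {n: tuple(360.0 / 12 * a for a in p) for n, p in pairs.items()}

for n in range(6):
    print(f"gp12 subunit {n} engages gp11 subunits {pairs[n]} at {angles[n]} deg")
```

Each gp12 platform thus faces two gp11 loops 30° apart on the ring, consistent with the two alternating loop conformations described above.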
The β-propellers are placed with their planes radial-wise, defining a section of the particle that resembles a six-pointed star (Fig. 5a ). In the mature virus, the gp12 nozzle protein is the main molecule responsible for closing and securing the DNA inside the tail. This task is mediated by four closing gates with negatively charged residues found along the gp12 internal channel (Fig. 5b ). The first DNA barrier is located at the platform level, where the aperture is 18.3 Å. It is followed by the most constricted part of the channel, at the β-propeller level, with two narrowings of 8.6 Å and 13.8 Å, respectively. The 8.6 Å aperture gate is remarkable, as it is formed by the main-chain carbonyl group of D478, at the tip of a loop of blade 6 of the β-propeller (Fig. 5c ). This is a tight loop without any flexibility, well stabilized by several hydrophobic residues, where there is no room for local movement to increase the aperture. The opening of this gate must involve the displacement of the large β-propeller domains or of entire protomers. Finally, there is one further narrowing (gate 4) at the nozzle tip, with an aperture of 23 Å.

Discussion

A number of reports describe several conformations of the portal protein that might be related to its function. The portal complex of P22 is found in two conformations in proheads and mature heads, with different affinities for other viral components, such as the capsid, scaffold, or terminase 24 , 41 . In the case of phage SPP1, rearrangements of the portal protein subunits are essential for DNA translocation 42 and are related to the interaction with DNA in the inner channel of the portal assembly 23 . These changes, leading to a reorganization of the portal channel, suggest that, rather than being a passive DNA pore, the portal has an active role during packaging, serving as a sensor device in the determination of the amount of DNA to be packaged 23 , 42 , 43 .
The importance of subtle conformational changes in the context of DNA translocation has also been highlighted in modeling studies. In this regard, the distribution of regions differing in degree of stiffness, thus allowing specific compressions and DNA-dependent distortions, as well as the existence of quasi-equivalent contacts, have been proposed to play important roles in DNA packaging 43 , 44 , 45 . Recently, it was also described for bacteriophage P23–45 that the portal may adopt different conformations depending on the stage of virus assembly, with synchronized movements of different domains connecting the inner part of the capsid to the exterior 46 . In this study, we have described two distinct conformations, namely open and closed, of the T7 bacteriophage portal. Docking of the closed conformation found in portal crystals into proheads from two different viral systems (T7 and P23–45) shows that this conformation is compatible with the portal region of the viral proheads before DNA packaging (Supplementary Fig. 7 ). Comparison of the two portal conformations has highlighted the α10 helix-tunnel loop as a region of the protein that is likely to act as a channel valve during viral morphogenesis. The closure of the T7 portal protein, as observed here, does not allow the passage of the genome, and therefore the portal valve is likely to be in the open conformation during DNA packaging. Although we cannot rule out the loss of rotational symmetry of the particle during DNA translocation, or that some of the monomers differ from others in their α10 helix-tunnel loop conformation, the present structures do not provide any indication of distinct conformations within any given particle: the valve is either fully open or fully closed. Thus, the hypothesis of an undulating belt tightly embracing the DNA 11 as it runs along the channel cannot be inferred from the present study.
Once DNA translocation ends, the portal is extruded from the capsid, thus exposing the clip and stem domains (Fig. 6a ) 40 . This exposure in turn allows the portal to interact with the adaptor gp11, replacing the terminase. The accessibility of the newly exposed surfaces, together with the symmetry matching (C12) between the adaptor and the portal, might favor this interaction over that with the terminase. The terminase detachment signal may be caused by a conformational transition of the portal, mediated by a sensing signal induced by the DNA pressure 24 , thus allowing portal interaction with the adaptor and the assembly of the rest of the tail components 1 , 25 . The flexible crown domain, whose movement is observed in our structures, may be involved in the process of sensing DNA pressure 24 , 47 . The movement of the crown would then be transmitted to the channel valve, closing it. Both movements, crown and valve, are correlated in our structures. Thus, during the brief period when no protein is docked to the portal, the valve is closed, thereby preventing leakage of the packaged DNA. Temporary retention of the DNA inside the capsid in the absence of other proteins was reported for the phi29 bacteriophage 22 , an observation that supports our hypothesis and the role of the channel valve. In the case of the Epstein–Barr herpesvirus, the recently determined portal structure 48 shows an equivalent diaphragm-like structure in the channel that could play a similar valve role for DNA retention during packaging. An open question is whether the valve closes onto the DNA, which may occupy the whole portal channel up to the clip when translocation finishes. The valve would thus trap the DNA through the interaction of the arginine side chains (R368) of the tunnel loop with the DNA phosphates. The observation that the closed conformation of the channel valve leaves an aperture very similar to the diameter of the DNA supports this hypothesis. Fig.
6 Proposed model for T7 bacteriophage DNA securing inside the capsid. a Overlapping of the “free” portal into the prohead (left) and of the tail complex into the mature virus (right), from central sections through the reconstructions described in ref. 40 . b Scheme showing the T7 bacteriophage assembly pathway. The capsid is shown in pink, the portal (gp8) in purple, the core complex in light blue, the terminase in gray, the adaptor (gp11) in green, the nozzle (gp12) in orange, the fibers in dark blue, and the DNA in black. The portal channel valve can be either open or closed. When the portal is in the prohead, the terminase–portal interaction stabilizes the open conformation, allowing DNA packaging. When the terminase leaves the complex, the portal channel valve closes, thus preventing DNA leakage from the capsid. The interaction of the portal with the adaptor protein re-establishes the open conformation of the portal channel valve, permitting the DNA to slip along the tail channel up to the nozzle, ready for ejection. In the mature virus, the gates of the nozzle protein close, retaining the DNA in the tail channel. These gates remain closed until the reorganization (untwisting) of the nozzle, which is triggered by the interaction of the tip and the fibers with the host membrane. The gates then open and the viral genome is ejected

Although the free portal can be found in the two conformations (open and closed), the interaction with the adaptor protein clearly stabilizes the open conformation (probably through the clip-α9-α10 path), allowing the DNA momentarily retained inside the portal to slip further through the channel, pass the wide adaptor section, and reach the nozzle. In the mature virus, the genome is retained inside the ejection channel and stopped from leaking at the nozzle level by the four gates.
As the channel is tightly blocked, in particular at the stiff gate 2 made by the β-propellers, which shows the smallest aperture, the genome is fully secured inside the capsid, yielding a stable infective virus. Closing of the DNA ejection channel by β-propeller loops might also be present in other phages. In the case of P22, a highly conserved hexameric tail protein (gp10) 37 , which connects the adaptor and the tail needle 31 , also shows propeller-like domains in structure prediction studies. When a new host is reached, the interaction of the tail fibers with the bacterial outer membrane receptor triggers a conformational change in the nozzle, which results in the opening of the channel required for DNA release 28 , 29 . Our structure of the tail shows that gate 2 cannot be opened by a local (i.e., loop) movement. A large displacement of the protomers is necessary, involving the separation of the six β-propeller domains and the consequent expansion of the channel. In a previous low-resolution cryo-EM study 29 , we observed this large conformational movement, which involves the untwisting of the six elongated protomers. The present high-resolution structure further supports this observation. Altogether, our results allow us to propose a model for the roles of each tail protein during T7 bacteriophage DNA packaging, retention, and ejection (Fig. 6b ). From a methodological point of view, this work illustrates the power of the correlative combination of X-ray crystallography and cryo-EM techniques to solve challenging molecular structures.

Methods

Protein expression and purification

The T7 bacteriophage gp8 gene was inserted into the pET28a vector (Novagen), between the NcoI and NotI restriction sites. The protein was expressed in Escherichia coli BL21(DE3) grown at 37 °C, after induction with 0.4 mM isopropyl β- d -1-thiogalactopyranoside (IPTG) at an optical density (OD) of ~0.6, for 3 h at 37 °C or overnight at 16 °C.
Cells were resuspended in 20 mM Tris-HCl pH 8.0, 500 mM NaCl, 3 mM β-mercaptoethanol, 20 mM imidazole, 40 µg/ml DNase I, and Complete Protease Inhibitor Cocktail Tablets (Roche), and lysis was performed using a cell disruptor (Constant Systems, Ltd.). The sample was then clarified by centrifugation for 30 min at 30,000 × g and purified by a three-step protocol 13 . Briefly, the protein was loaded onto a HisTrap HP column equilibrated in buffer A (20 mM Tris-HCl pH 8.0, 500 mM NaCl, 3 mM β-mercaptoethanol, 20 mM imidazole) and eluted in buffer A with 350 mM imidazole; the protein was then loaded onto a Sephacryl S-400 column, followed by a Superose 6 column, equilibrated in buffer B (20 mM Tris-HCl pH 8.0, 500 mM NaCl, 5 mM dithiothreitol, 2 mM ethylenediaminetetraacetic acid). The sample was concentrated with a 30,000 Da molecular weight cutoff Vivaspin (GE Healthcare). In some cases, the protein was dialyzed in TMS buffer (50 mM Tris-HCl pH 7.8, 100 mM NaCl, 10 mM MgCl 2 ) before grid preparation. The gp8 , gp11 , and gp12 genes were cloned in tandem in the p-RSETB vector 27 . The tail complex was expressed in E. coli C43 after induction with 1 mM IPTG for 3 h at OD 600 ~0.4. Cells were resuspended in TMS buffer with Complete Protease Inhibitor Cocktail Tablets (Roche) and sonicated. The complex was purified in two steps: the proteins were first loaded onto a HisTrap HP column and eluted in TMS buffer with 200 mM imidazole, and then onto a Superose 6 column in TMS. The complex was concentrated using a 50 kDa molecular weight cutoff Amicon Ultra device (Millipore).

Cryo-EM sample preparation and imaging

For the gp8 protein, R2/2 Quantifoil grids were glow-discharged for 1 min; for the tail complex, the grids were cleaned with acetone and treated with 0.1% w/v poly- l -lysine (Sigma) in water for 1 min. Next, 3 µl of purified sample (at 3 mg/ml or 1.5 mg/ml for gp8, and 0.8 mg/ml for the tail complex) was pipetted onto the grids and incubated for 3 min at 22 °C at 95% humidity.
Grids were then blotted for 3.5 s, with blot force settings between −3 and −5, on a FEI Vitrobot Mark IV (FEI) and plunge-frozen in liquid ethane. A gp8 grid at 3 mg/ml was transferred to a FEI Talos Arctica (FEI) electron microscope operated at 200 kV at the Cryo-EM Centro Nacional de Biotecnología-Centro de Investigaciones Biológicas CSIC facility in Madrid (Spain). A total of 1065 movies fractionated into 26 frames were recorded in an automated fashion on a Falcon II (FEI) detector, using EPU (FEI) with a pixel size of 1.37 Å/pix, a frame exposure time of ~0.058 s, and a total accumulated dose of ~22.8 e − /Å 2 (~0.88 e − /Å 2 /frame). Gp8 grids at 1.5 mg/ml in TMS buffer were transferred to a FEI Titan Krios (FEI) electron microscope operated at 300 kV at the European Molecular Biology Laboratory in Heidelberg (Germany). Images were recorded in an automated fashion on a Gatan K2 Summit (Gatan) detector, with a pixel size of 1.04 Å/pix, using SerialEM 49 . A total of 4517 movies were collected, fractionated into 40 frames, with a frame exposure time of 0.5 s and a total accumulated dose of ~39.4 e − /Å 2 (0.985 e − /Å 2 /frame). Tail complex grids were transferred to a FEI Titan Krios (FEI) electron microscope operated at 300 kV at the Electron Bio-Imaging Centre (eBIC), Diamond Light Source, in Didcot (UK). Images were recorded in an automated fashion on a Gatan K2 Summit (Gatan) detector, with a pixel size of 1.048 Å/pix, using EPU (FEI). A total of 2744 movies were collected, fractionated into 40 frames, with a frame exposure time of 0.2 s and a total accumulated dose of ~33.6 e − /Å 2 (0.84 e − /Å 2 /frame).

Image processing and map calculation

Cryo-EM data processing was performed using the Scipion software framework 50 . Dose-fractionated image stacks were motion-corrected and dose-weighted using MotionCor2 51 .
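The accumulated doses reported here are consistent with frames × per-frame dose; a quick arithmetic check (values copied from the text; the Talos Arctica total is quoted as an approximation, so it differs in the second decimal):

```python
# Consistency check for the three acquisitions: total dose = frames x per-frame
# dose (all values in e-/Å^2, as reported in the text).
datasets = {
    "gp8 Talos Arctica": (26, 0.88),
    "gp8 Titan Krios":   (40, 0.985),
    "tail Titan Krios":  (40, 0.84),
}

totals = {name: frames * per_frame for name, (frames, per_frame) in datasets.items()}
for name, total in totals.items():
    print(f"{name}: {total:.1f} e-/Å^2 accumulated")
```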
Defocus was estimated using CTFFIND4 and Xmipp3 52 , 53 for the gp8 Talos Arctica data set, and with the Gctf program 54 for the gp8 and tail complex Titan Krios data sets; the contrast transfer function (CTF) was corrected during the reconstruction process by RELION. Particles were picked using Xmipp3 53 for gp8 and Gautomatch for the tail complex. Extracted particles were classified using RELION 2D and 3D classification 55 , 56 , and initial volumes were built using RANSAC 53 . In the case of the gp8 Titan data, visual inspection of 2D averages was used to select classes corresponding to frontal and partial frontal views of the dodecameric particles and to discard those corresponding to tridecamers. The particles used in the final reconstruction were selected after 3D classification of the selected frontal and partial frontal views plus the lateral views. C12 or C13 symmetry was applied for the gp8 reconstructions, whereas C6 symmetry was applied for the gp8-gp11-gp12 complex. Final volumes were obtained using RELION Auto-refine with 12,642 and 32,388 particles for the gp8 Talos Arctica and Titan Krios data sets, respectively, and with 92,382 particles for the tail complex. RELION post-processing was used for the gp8 Talos Arctica data set. In all cases, structure resolutions were estimated from RELION FSC curves with the 0.143 cutoff criterion 57 , 58 , and local resolutions were computed with MonoRes 59 . Final volumes were post-processed with LocalDeblur, using the MonoRes volume as input, in the case of the gp8 Titan Krios and tail data sets 60 .

X-ray data collection and crystallographic processing

Crystals of the gp8 portal protein were obtained by hanging-drop vapor diffusion under a number of conditions, but only a few of them diffracted. Gp8-13mer cryst crystals were grown from an 8.5 mg/ml protein sample in 15% Tacsimate, 0.1 M HEPES pH 7.0, 12% (w/v) PEG 3350 at 20 °C.
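The FSC 0.143 criterion means reporting the resolution at which the half-map Fourier shell correlation first drops below 0.143; a minimal sketch over a synthetic curve (the numbers here are illustrative, not the paper's FSC data):

```python
def resolution_at_cutoff(freqs, fsc, cutoff=0.143):
    """Resolution (1/frequency, in Å) at which the FSC curve first drops
    below `cutoff`, using linear interpolation between adjacent shells."""
    for i in range(1, len(fsc)):
        if fsc[i] < cutoff <= fsc[i - 1]:
            t = (fsc[i - 1] - cutoff) / (fsc[i - 1] - fsc[i])
            freq = freqs[i - 1] + t * (freqs[i] - freqs[i - 1])
            return 1.0 / freq
    return 1.0 / freqs[-1]  # curve never crosses the cutoff

# Illustrative (synthetic) half-map FSC curve: spatial frequency (1/Å) vs FSC
freqs = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
fsc = [1.00, 0.98, 0.90, 0.60, 0.20, 0.05]
print(f"resolution at FSC=0.143: {resolution_at_cutoff(freqs, fsc):.2f} Å")
```

RELION reports this crossing from the gold-standard FSC between independently refined half maps; the snippet only reproduces the cutoff logic.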
Gp8 closed crystals were grown from a 4.4 mg/ml protein sample in 0.2 M CaCl 2 , 0.1 M sodium acetate pH 4.6, and 30% (w/v) PEG 400 at 20 °C. All crystals were mounted in loops and flash-frozen in liquid nitrogen using a cryoprotective buffer. X-ray diffraction data were collected at the ID29 and ID14-1 beamlines of the European Synchrotron Radiation Facility in Grenoble (France). Data were collected at 1.0679 Å and 0.9340 Å and processed using XDS 61 . A partial model (36% of the structure) of the gp8-13mer was built ab initio, using a 5.8 Å resolution cryo-EM map, and later used to obtain the initial crystallographic phases for the gp8-13mer crystallographic data by molecular replacement and phase extension. A monomer from the structure was used to phase the gp8 closed crystallographic data and as a template to trace the cryo-EM models of gp8 open and of gp8 within the tail complex. Gp11 was traced on the cryo-EM tail map using a threading model of T7 gp11 27 and the TTPA crystal structure 33 from bacteriophage KP32 as guides. The gp12 tail protein was traced ab initio using PSIPRED 62 secondary structure prediction as a guide. All molecular replacement procedures were performed with PHASER 63 , and both cryo-EM and crystallographic models were traced in Coot 64 . During crystallographic model building, it was crucial to calculate density-modified maps, taking into account the presence of non-crystallographic symmetry 65 . PHENIX real-space refinement 66 and REFMAC5 67 were used to refine all the models, the latter within the CCP-EM suite for cryo-EM data 68 , 69 . All the models were validated using MolProbity 70 . Figures were prepared with Chimera 71 . In all electrostatic potential surfaces, blue represents a positive potential of 10 kcal/(mol·e), whereas red represents a negative potential of −10 kcal/(mol·e). Movies were prepared with the morphing option of Chimera 71 , interpolating the movement between two given conformations of the same protein.
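The ±10 kcal/(mol·e) color convention can be reproduced with a simple clamped linear ramp. This is a sketch of the mapping convention only, not the actual implementation used by Chimera:

```python
def potential_to_rgb(phi, vmax=10.0):
    """Map an electrostatic potential value (kcal/(mol·e)) onto a
    red-white-blue ramp clamped to [-vmax, +vmax]: blue = +vmax (positive),
    red = -vmax (negative), white = neutral."""
    t = max(-1.0, min(1.0, phi / vmax))  # clamp to [-1, 1]
    if t >= 0:                            # white -> blue for positive potential
        return (1.0 - t, 1.0 - t, 1.0)
    return (1.0, 1.0 + t, 1.0 + t)        # white -> red for negative potential

print(potential_to_rgb(10.0))   # saturated blue
print(potential_to_rgb(-10.0))  # saturated red
print(potential_to_rgb(0.0))    # white
```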
Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The electron microscopy maps were deposited in the Electron Microscopy Data Bank (EMDB) with accession codes EMD-4667 , EMD-4669 , and EMD-4706 . Atomic coordinates and crystallographic structure factors were deposited in the Protein Data Bank (PDB) under accession codes 6QWP , 6QX5 , 6QXM , and 6R21 . All relevant data are available from the authors upon request.
The Epstein-Barr virus is one of the most widespread human viruses. Part of the herpesvirus family, it causes glandular fever (infectious mononucleosis), cancers and autoimmune diseases. At present, there is no treatment for infections caused by this virus. In work recently published in Nature Communications, scientists from the Institute for Research in Biomedicine (IRB Barcelona) and the Molecular Biology Institute of Barcelona (IBMB-CSIC) in Spain used cryo-electron microscopy (cryo-EM) to reveal the structure of a key protein, known as a portal, in the Epstein-Barr virus. Similarities between herpesviruses and tailed bacteriophages (viruses that infect bacteria) suggest that these two types of organism may be related. In a second paper published in the same journal, the team solved the structure of the portal protein in bacteriophage T7, using a combination of cryo-EM and X-ray crystallography. These results allowed them to infer how the Epstein-Barr virus portal works and may help in the development of a treatment for this virus. In 2018, we brought you the news that high-resolution cryo-EM at eBIC had uncovered new information about a critical feature of the Herpes Simplex Virus. Cryo-EM has now worked its magic on the related Epstein-Barr virus, paving the way towards defeating this currently untreatable virus. The herpesvirus family is enormous and includes eight human pathogens. The Epstein-Barr virus infects B-cells (a type of white blood cell) and also the epithelial cells that make up the skin and line the inside of the throat, blood vessels and organs. It causes glandular fever (infectious mononucleosis) and can cause several kinds of cancer and autoimmune diseases. All herpesviruses infect in a similar way. Once the virus has entered a cell and reached the nucleus, it releases its DNA. This DNA can lie dormant for many years until specific conditions trigger replication.
When the virus replicates, the DNA is introduced into a new viral shell (capsid), forming a new virus capable of attacking other cells. The virus uses a protein called a portal to package its DNA into the viral capsid and to release it into the host cell during infection. As the portal plays a critical role in replication and infection, it is an attractive target for the development of new anti-viral drugs. The portal: an open and shut case? The similarities between the capsid structure and viral DNA packaging mechanism of herpesviruses and tailed bacteriophages suggest that they may be related. Although researchers have been able to determine the structure of portal proteins from bacteriophages successfully, the study of herpesvirus portals has been more challenging. Scientists from the Institute for Research in Biomedicine (IRB Barcelona) and the Molecular Biology Institute of Barcelona (IBMB-CSIC) have now used cryo-EM at eBIC to reveal the structure of the portal protein in the Epstein-Barr virus at a resolution of 3.5 Å. In a second study, the same team used a combination of cryo-EM and X-ray crystallography to characterise the structure of the portal protein in bacteriophage T7. Their work illustrates the power of using these techniques in conjunction to solve challenging molecular structures. The bacteriophage also uses its portal to package its DNA inside a pro-capsid. The tail components then assemble on the portal to make an infectious virus. The ejection conduit remains tightly sealed until infection, when the channel opens to deliver the DNA to the host cell. All of the portals analysed to date for the Caudovirales family of bacteriophages share common structural features. In search of antivirals Miquel Coll is head of the Structural Biology of Protein & Nucleic Acid Complexes and Molecular Machines Lab at IRB Barcelona and a professor at IBMB-CSIC.
He says: "Understanding the structure of the portal protein could aid the design of inhibitors for the treatment of herpesvirus infections such as Epstein-Barr. As this protein is only found in herpesviruses, these inhibitors would be virus-specific and may be less toxic for humans." Cristina Machón and Montserrat Fàbrega, postdoctoral fellows at IRB Barcelona and IBMB-CSIC, are first authors on both papers. They say that "solving the structure of the portal protein of bacteriophage T7 has allowed us to infer how the portal from Epstein-Barr virus works." The drugs currently used to treat herpesvirus infections target the viral DNA polymerase. They are not very efficient, cause serious side effects, and prolonged treatment leads to viral resistance. There is no specific treatment for the Epstein-Barr virus. Knowledge of the atomic structures of portal proteins will be extremely valuable, allowing the structure-driven design of compounds targeting their function: highly specific anti-virals that should cause fewer side effects.
10.1038/s41467-019-11705-9
Earth
Almost one-in-three people globally will still be mainly using polluting cooking fuels in 2030
Oliver Stoner et al, "Household cooking fuel estimates at global and country level for 1990 to 2030," Nature Communications (2021). DOI: 10.1038/s41467-021-26036-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-26036-x
https://phys.org/news/2021-10-one-in-three-people-globally-polluting-cooking.html
Abstract Household air pollution generated from the use of polluting cooking fuels and technologies is a major source of disease and environmental degradation in low- and middle-income countries. Using a novel modelling approach, we provide detailed global, regional and country estimates of the percentages and populations mainly using 6 fuel categories (electricity, gaseous fuels, kerosene, biomass, charcoal, coal) and overall polluting/clean fuel use – from 1990 to 2020 and with urban/rural disaggregation. Here we show that 53% of the global population mainly used polluting cooking fuels in 1990, dropping to 36% in 2020. In urban areas, gaseous fuels currently dominate, with a growing reliance on electricity; in rural populations, high levels of biomass use persist alongside increasing use of gaseous fuels. Future projections of observed trends suggest 31% will still mainly use polluting fuels in 2030, including over 1 billion people in Sub-Saharan Africa by 2025. Introduction For 3 billion people 1 living in low-income and middle-income countries (LMICs), the simple act of cooking is a major health and safety risk. The inefficient combustion of solid fuels (wood, coal, charcoal, dung, and crop waste) and kerosene in simple stoves and devices produces high levels of household air pollution (HAP). Chronic exposure to HAP increases the risk of noncommunicable diseases including ischemic heart disease, stroke, chronic obstructive pulmonary disease, lung cancer, and pneumonia 2 . Overall, HAP exposure accounts for some 3.8 million premature deaths annually 3 . Use of open fires or poorly balanced pots is also a major cause of burns and scalds in LMICs, while kerosene and charcoal use in the home is a major source of poisonings from either ingestion or carbon monoxide exposure 2 .
Households that rely on polluting energy systems frequently have to travel great distances to gather fuel—sometimes traveling hours each week—putting them at increased risk of musculoskeletal injury and violence 4 . Fuel collection is often tasked to women and children, perpetuating the negative socioeconomic and gender inequities of energy poverty by taking away time that could be spent on other activities like schooling, income-generation, and socializing 4 . Polluting cooking practices are also an important cause of environmental degradation and climate change: the black carbon from cooking, heating and lighting is responsible for 25% of anthropogenic global black carbon emissions 5 , and around 30% of wood fuels harvested globally are unsustainable 6 . In recognition of these significant burdens, the global community has prioritized achieving universal access to clean cooking, enshrined in the 2030 Agenda for Sustainable Development 7 as one of three targets for Sustainable Development Goal (SDG) 7, to “ensure access to affordable, reliable, sustainable, and modern energy”. As part of its mandate to monitor and inform policy towards this goal, the World Health Organization (WHO) publishes estimates of exposure to HAP 8 and related disease burdens 3 . Historically, such estimates were calculated using the estimated population mainly using solid fuels 9 for cooking. However, in 2014, the WHO published the first-ever normative guidelines on the fuels and technologies that can be considered “clean” for health 2 , which highlight the importance of stove and fuel performance in combination, while also recommending against or discouraging the use of certain fuels—notably unprocessed coal and kerosene, a liquid fuel previously considered clean that emits high levels of harmful pollution.
Since then, tracking of “solid fuels” has been replaced with “polluting fuels and technologies”—where polluting fuels consist of unprocessed biomass (wood, crop residues, and dung), charcoal, coal, and kerosene (Fig. 1 ), and polluting technologies refers to those stoves with emission rates higher than the recommended rates included in WHO Guidelines. Meanwhile, estimates of the proportion of the population mainly using clean fuels and technologies—where clean fuels consist of gaseous fuels (liquefied petroleum gas or LPG, natural gas, biogas), electricity, alcohol, and solar energy (Fig. 1 )—inform monitoring of progress towards universal access to clean cooking 1 . Acknowledging the very limited survey data on the technologies used for cooking, and the limited availability of truly clean-burning (for health) biomass stoves in LMICs, this analysis focuses only on the fuels used rather than stove technologies. Fig. 1: Cooking fuel categorization. Classification of cooking fuels within the scope of the global household energy model as clean or polluting. Full size image While the aggregate indicators “polluting fuel use” and “clean fuel use” are effective for summarizing and communicating the global extent of polluting cooking, and progress towards global goals, fuel-specific estimates are needed to optimally inform policies and decision-making on how to achieve the greatest reductions in HAP exposure as quickly as possible. These data in combination with local expert knowledge on challenges of affordability, availability, infrastructure, and cultural preferences are critical to maximizing the health benefits from the clean cooking transition.
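The clean/polluting split described above amounts to a fixed mapping from fuel category to group. A minimal sketch in Python, with illustrative names and an invented fuel mix; the category sets follow Fig. 1, and the minor "other" fuels are counted in the clean aggregate, as the paper does:

```python
# Fig. 1 categorization as fixed sets. The small "other" category
# (e.g., solar, ethanol) is folded into the clean aggregate, matching
# the paper's treatment. The example mix below is invented.
CLEAN = {"electricity", "gas", "alcohol", "solar", "other"}
POLLUTING = {"biomass", "charcoal", "coal", "kerosene"}

def aggregate(shares):
    """Sum fuel-specific population shares into overall clean and
    polluting totals."""
    clean = sum(v for fuel, v in shares.items() if fuel in CLEAN)
    polluting = sum(v for fuel, v in shares.items() if fuel in POLLUTING)
    return clean, polluting

# A hypothetical fuel mix (fractions of the population, summing to 1).
mix = {"gas": 0.49, "electricity": 0.08, "biomass": 0.33,
       "charcoal": 0.04, "coal": 0.02, "kerosene": 0.02, "other": 0.02}
clean, polluting = aggregate(mix)  # 0.59 clean, 0.41 polluting
```

Because the categories are mutually exclusive, the two aggregates always sum to the share of the population with a reported main fuel.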
Fuel-specific estimates are also desirable to refine estimates of HAP exposure and health burdens at regional, country, and sub-national levels, fully taking into account the varying harm and types of pollution associated with different fuels (notably, carbon monoxide is currently absent from burden of disease calculations 10 ). Using a new model based on individual/specific fuel categories 11 (detailed in the “Methods” section), we report estimates of main cooking fuel use at country, regional (SDG and WHO regions), and global levels, for each year from 1990 to 2020, with urban/rural disaggregation. We provide estimates of aggregate clean and polluting fuel use, and report for the first time estimates for six specific fuel categories: electricity, gaseous fuels, kerosene, unprocessed biomass, charcoal, and coal. For brevity, gaseous fuels and unprocessed biomass are from here onwards called “gas” and “biomass”, respectively. We also report future projections of all estimates up to 2030 representing a possible scenario, where trends seen in recent decades continue. We provide all estimates as Supplementary Data for download: Supplementary Data 1 contains estimates at country level; Supplementary Data 2 contains estimates at SDG region and global levels; and Supplementary Data 3 contains estimates at WHO region level. In this article, we will often refer to percentages or populations mainly using different fuels for cooking. This is because the vast majority of existing household surveys, the primary input data for the model, do not capture use of fuels other than the one used most often by the household. Stove-stacking, where a household uses more than one fuel and stove type in parallel with their main fuel, e.g., use of LPG as the main fuel alongside use of a traditional biomass stove 12 , is common around the world 13 .
Eventually, enough surveys capturing this information will be available to enable comprehensive global estimates (i.e., by country and year) which quantify stove-stacking. Until then, we are limited to quantifying main fuel use and we must recognize that the absolute number of people who use polluting fuels for cooking (and are therefore exposed to high levels of household air pollution) is certainly higher than just the population using them as their main fuel for cooking. Results Progress towards universal clean fuel use The percentage of the global population mainly using polluting fuels for cooking has declined steadily over the last three decades, as illustrated in the right panel of Fig. 2 , from 53% [45–60] in 1990 to 36% [30–43] in 2020. If observed trends continue, this percentage is expected to decline further to 31% in 2030. However, the percentage of the population mainly using polluting cooking fuels does not tell the whole story, as rising populations have contributed to an absolute number of people mainly using polluting fuels, which has deviated little from 3 billion people since 1990 (2.8 billion [2.4–3.1] in 1990, 3.0 billion [2.8–3.3] in 2000, 3.0 billion [2.7–3.3] in 2010, and 2.8 billion [2.3–3.3] in 2020). This number is projected to drop only to 2.7 billion people by 2030. Fig. 2: Global use of clean and polluting fuels as the main fuel for cooking. Estimated (posterior median) global populations mainly using clean and polluting fuels for cooking (shaded area), shown alongside the estimated (posterior median) percentage of the global population mainly cooking with polluting fuels (solid line), with 95% posterior uncertainty intervals (dotted lines). Full size image Strictly at a global scale, the percentage of people in rural areas mainly using polluting fuels for cooking (central panel of Fig. 
2 ) decreased only slightly between 1990 and 2010, from 75% [60–83] to 71% [66–76], but progress has since accelerated so that the estimated percentage cooking mainly with polluting fuels in 2020 is 61% [52–69]. This is projected to decrease further to around 50% in 2030. These reductions have been matched by substantial decreases in the absolute rural population mainly using polluting fuels, from a high of 2.5 billion [2.2–2.6] in 2003, to 2.1 billion [1.8–2.4] in 2020 and then a projected 1.7 billion in 2030. Conversely, following a decrease from 1990 to 2020, the percentage of the global urban population mainly using polluting fuels appears to have plateaued at 17% [13–25] in 2020—projected to be 18% in 2030—while the absolute urban population mainly using polluting fuels is even projected to increase from 0.7 billion [0.5–1.1] in 2020 to 0.9 billion in 2030. The stagnation in the global population mainly using polluting and clean fuels disguises important regional trends. In 1990, more than three quarters of people in the Central Asia and Southern Asia region and more than half of people in the Eastern Asia and South-eastern Asia region mainly used polluting fuels for cooking (Fig. 3 ). Both of these regions have made significant progress over the last three decades in transitioning towards universal use of clean fuels as the main fuel for cooking. However, these successes are overshadowed by alarmingly little progress in the Sub-Saharan Africa region, where use of polluting fuels as the main fuel for cooking has only dropped from 90% [87–92] in 1990 to 84% [82–86] in 2020. If observed trends continue, this is projected to only drop to 81% [76–85] in 2030, meaning four in five Sub-Saharan African people will continue to suffer the health and socioeconomic burdens of polluting cooking (this figure would likely be higher if it accounted for stove stacking). Fig. 3: Regional use of polluting fuels as the main fuel for cooking.
Estimated (posterior median) percentage of the global population mainly cooking with polluting fuels in each SDG region, with 95% uncertainty intervals (shaded). Full size image Once again, to truly understand the human cost of polluting cooking, it is more telling to consider the absolute number of people mainly using polluting fuels (Fig. 4 ). The number of people mainly cooking with polluting fuels is rising at an alarming rate in Sub-Saharan Africa and is projected to exceed 1 billion people by as soon as 2025. Fig. 4: Regional populations mainly using polluting fuels for cooking. Estimated (posterior median) population mainly cooking with polluting fuels in each SDG region. Full size image In the year 2000, out of those mainly cooking with polluting fuels, 3 in 4 (75%) lived in either Central Asia and Southern Asia or Eastern Asia and South-eastern Asia, and only 1 in 5 (19%) resided in Sub-Saharan Africa, as illustrated in Fig. 5 . In 2020, around 1 in 3 (34%) lived in Sub-Saharan Africa and this is projected to approach 1 in 2 (44%) by 2030. Fig. 5: Regional breakdown of the global population mainly using polluting fuels for cooking. Estimated (posterior median) regional populations mainly using polluting fuels as a proportion of the estimated (posterior median) overall global population mainly using polluting fuels. Full size image The changing fuel mix in low-income and middle-income countries Analysis of specific fuel use at regional, country, and sub-national levels can help to better estimate the impacts of current policies for household energy use as well as inform the future development of policies and programs. Here, we discuss some of the most notable trends across LMICs. Among LMICs (Fig. 6 ), use of gaseous fuels as the main cooking fuel increased consistently from 31% [23–41] in 1990 to 49% [41–56] in 2020, overtaking unprocessed biomass fuels as the dominant main cooking fuel type in the last decade.
Use of electricity as the main cooking fuel also rose, from 4% [3–7] in 1990 to 8% [4–14] in 2020, with a considerably larger increase in urban areas where infrastructure tends to be better established. Fig. 6: Cooking fuel use in LMICs. Estimated (posterior median) percentage of the population in LMICs mainly using each fuel type, with 95% uncertainty intervals. Full size image Between 1990 and 2010, increases in the use of clean fuels as the main cooking fuel appear to be principally explained by considerable decreases in the use of coal and kerosene as the main fuel. Use of coal as the main fuel in rural areas has dropped from 12% [3–25] in 1990 to 5% [3–8] in 2010 then to 2% [1–6] in 2020. Use of kerosene as the main fuel has also decreased: in urban areas it dropped from 10% [8–12] in 1990 to 4% [3–5] in 2010 then to 2% [1–3] in 2020, while in rural areas it dropped from 3% [2–5] in 1990 to 1% [0–2] in 2020. However, from around 2010 onwards use of biomass as the main fuel has also started to decrease consistently, primarily in rural areas where use of unprocessed biomass as the main fuel has dropped from 68% [63–73] in 2010 to 60% [51–68] in 2020. Although globally use of kerosene as the main fuel has dwindled, it persists in urban areas of LMICs in both Oceania (15% [7–35] in 2018) and in Sub-Saharan Africa (6% [4–9] in 2020). Globally the proportion mainly using charcoal is low (4% [3–4] in 2020), but in urban areas of Sub-Saharan Africa (Fig. 7 ) it has overtaken biomass as the most popular main fuel (30% [25–35] in 2020). If observed trends continue into the next decade, in urban areas of LMICs use of gaseous fuels as the main fuel is projected to start falling as more people switch to electricity as their main fuel, and eventually level off overall. Fig. 7: Cooking fuel use in Sub-Saharan Africa.
Estimated (posterior median) percentage of the population in Sub-Saharan Africa mainly using each fuel type (lines), with 95% uncertainty intervals (shaded areas). Plots for other regions are included in the Supplementary Information. Full size image Case studies: regional and country analyses of fuel use Here, we demonstrate how our estimates can be used for detailed analysis at the regional, national, and sub-national level, using Sub-Saharan Africa and Ghana as case studies. In 2020, main fuel use in urban areas of Sub-Saharan Africa (Fig. 7 ) is highly pluralistic, consisting of charcoal (30% [25–35]), biomass (28% [24–34]), gaseous fuels (20% [17–23]), and electricity (13% [11–15]). In rural areas, however, use of biomass (86% [82–89]) and charcoal (8% [6–11]) as the main fuel constitutes a near duopoly, with only 6% using any of the other fuels as the main fuel. If observed trends continue into the next decade, use of kerosene as the main fuel is projected to diminish to around 2% of the Sub-Saharan African population by 2030, with only a few countries maintaining high levels: 36% in Equatorial Guinea, 44% in Djibouti, and 76% in Sao Tome and Principe. In fact, in Sao Tome and Principe use of kerosene as the main fuel is projected to increase. Meanwhile, modest decreases in the use of biomass as the main fuel are likely to be largely offset by increases in the use of charcoal as the main fuel. Concerningly, very little progress is projected to be made in the use of gaseous fuels or electricity as the main cooking fuel either in urban or rural parts of Sub-Saharan Africa. Zooming in to the national and sub-national level, Fig. 8 shows modeled estimates for main fuel use in Ghana alongside observed values from available household survey data—these are plotted to illustrate how the model captures non-linear fuel use trends, survey variability, and associated uncertainty. Fig. 8: Cooking fuel use in Ghana.
Estimated (posterior median) percentage of the urban population (left), the rural population (center) and overall population (right) of Ghana mainly using each fuel type, with central estimates as lines. Points show available survey data. The 95% uncertainty intervals shown as shaded areas combine model uncertainty and survey variability: where data are plentiful, the uncertainty is small and the intervals capture the vast majority of survey points; where survey data are limited or unavailable, in particular when projecting into the future, the uncertainty grows and our uncertainty intervals are wider. Plots for other LMICs are included in the Supplementary Information. Full size image In Ghana, the plurality of people mainly used biomass fuels in 2020 (38% [25–52]), with a further 30% [19–43] mainly relying on charcoal (Fig. 8 ). Use of biomass as the main fuel remains high in rural areas, despite dropping from 90% [80–97] in 1990 to 68% [51–82] in 2020. Although main use of charcoal was steadily rising in rural areas between 1990 (8% [2–18]) and 2010 (17% [10–26]), there is some evidence that this has stalled. In urban areas, meanwhile, main use of gaseous fuels has risen consistently from 5% [2–10] in 1990 to 44% [28–61] in 2020. This is likely the result of concerted government efforts (starting around 1990) to promote the use of LPG as a substitute for the widely used charcoal and firewood 14 . Increased use of gas as the main cooking fuel has come at the expense of biomass, which dropped by about 14 percentage points between 1990 and 2010, and charcoal, which dropped by about 15 percentage points between 2010 and 2020. Indeed, there is some evidence (65% probability) that in 2020 more people mainly used gaseous fuels than any other fuel in urban areas of Ghana. If observed trends continue, main use of gaseous fuels is projected to rise to 46% by 2030, meaning about 1 in 2 people in Ghana will still rely mainly on polluting fuels for cooking.
Discussion Previous estimates of clean versus polluting/solid fuel use for cooking have played a vital role in informing global efforts to address the global energy injustice of household air pollution. However, by combining increasingly detailed survey data with advanced statistical modeling approaches, we have produced new estimates based on specific fuels. These estimates offer a more detailed assessment of progress towards global goals and help to maximize the utility of data capturing household energy use and its impacts on health for policymaking. In particular, a greater understanding of which fuels people are using can help pre-empt barriers to future adoption of clean cooking (e.g., affordability constraints or cultural preferences). Here, we used a novel Bayesian hierarchical modeling approach to comprehensively and reliably estimate the use of six fuel types (as the main cooking fuel)—as well as overall clean and polluting fuel use—under realistic and plausible constraints, from 1990 to 2020. We also presented future projections of existing trends up to 2030, representing a “business-as-usual” scenario to motivate new policy and providing a baseline against which the effects of new interventions can be assessed. Our analysis shows that, although there has been progress towards clean household energy, the global community is far off track from reaching universal access to clean cooking by 2030. The global proportion mainly using polluting fuels dropped by an estimated 17 percentage points between 1990 and 2020 (from 53% to 36%), although the absolute number of people using polluting fuels has deviated little from 3 billion over the last three decades. Global progress in urban areas is now static, although use of clean fuels as the main cooking fuel is increasing in rural areas (particularly gas and electricity). Indeed, our business-as-usual scenario projects that 2.7 billion people—just under 1 in 3—will continue to mainly rely on polluting cooking fuels in 2030.
A deeper regional analysis has highlighted the emergence of Sub-Saharan Africa as having the largest population mainly using polluting fuels for cooking after 2020, which is likely to exceed 1 billion people by 2025 under a business-as-usual scenario; the need for greater focus and resources to implement policies and programs promoting the adoption of clean cooking in Sub-Saharan Africa cannot be overstated. Analysis at the level of specific fuels reveals further insights, such as the near-elimination of kerosene and coal for cooking globally, and the emergence of charcoal as the most popular fuel in urban Sub-Saharan Africa. While the availability of a complete set of estimates by specific fuel represents a significant step forward in the monitoring and understanding of polluting fuel use for cooking, these estimates do not take into account the technology used for cooking nor supplementary cooking fuels and technologies—due to a lack of data from nationally-representative surveys. Moving forward, access to technological solutions like low-emission advanced combustion biomass cookstoves should be monitored in national surveys to facilitate inclusion in global analyses. These new surveys should follow the example of the Core Questions on Household Energy Use, jointly developed by the WHO and the World Bank’s Energy Sector Management Assistance Program (ESMAP) to track SDG Target 7.1 15 . Our estimates also do not currently account for stove-stacking, where households use more fuels than just the main cooking fuel. This is an important issue, noting for instance that a household using clean fuels 51% of the time will still suffer significant negative health and social impacts from using polluting fuels 49% of the time, despite being counted as “mainly using clean fuels”.
Quantifying the health impacts of stove-stacking will rely on: enhanced and harmonized data collection capturing the fuels and technologies used in the home for all major end-uses including cooking, heating, and lighting; and robust epidemiological evidence quantifying health risk from specific fuels and technologies used. Enhanced monitoring efforts paired with future modeling that accounts for stove-stacking will improve understanding of exposure to total household air pollution, thus better informing policy and programmatic decision-making, as well as the global monitoring of health and environmental impacts. Methods Household survey data and selection criteria Data used in this analysis are drawn from the WHO’s Household Energy Database 16 , a regularly updated compilation of nationally-representative household survey data for WHO Member States from various sources, detailed in Supplementary Table 1 . Surveys in the database were downloaded manually and collated using Microsoft Excel (version 16.50) and occasionally Stata/SE (version 15.1). The version of the database used for this analysis (30th January 2020) comprises 1353 surveys collected from a total of 170 countries (including high income countries) between 1960 and 2018. For this analysis we exclude surveys from before 1990, and only include data from surveys providing individual fuel breakdowns and with less than 15% of the population in total categorized as “missing”, “not cooking in the household”, or mainly cooking with “other fuels”. There was no differentiation in the model between surveys that reported only household-weighted or population-weighted fuel use estimates. Where surveys reported both household-weighted and population-weighted estimates, only population-weighted estimates were used, in order to best estimate the population reliant on different cooking fuels. Using these selection criteria, 1136 surveys—collected from 153 countries—were used for modeling.
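The inclusion rules just described (post-1990 surveys that report individual fuel breakdowns, with under 15% of responses unusable) can be sketched as a simple filter. The survey records and field names below are hypothetical, not the WHO database schema:

```python
def eligible(survey):
    """Apply the inclusion criteria described in the text: surveys from
    1990 onward, reporting individual fuel breakdowns, with less than
    15% of the population missing / not cooking / mainly using 'other
    fuels'. Field names are illustrative only."""
    unusable = (survey.get("missing", 0)
                + survey.get("not_cooking", 0)
                + survey.get("other_fuels", 0))
    return (survey["year"] >= 1990
            and survey["has_fuel_breakdown"]
            and unusable < 0.15)

# Three invented survey records: too old, acceptable, too much unusable.
surveys = [
    {"year": 1987, "has_fuel_breakdown": True, "missing": 0.02},
    {"year": 2005, "has_fuel_breakdown": True, "missing": 0.05,
     "not_cooking": 0.03, "other_fuels": 0.02},   # 10% unusable: kept
    {"year": 2012, "has_fuel_breakdown": True, "missing": 0.10,
     "not_cooking": 0.04, "other_fuels": 0.02},   # 16% unusable: dropped
]
kept = [s for s in surveys if eligible(s)]
```

Running the same filter over the full database is what reduces the 1353 available surveys to the 1136 used for modeling.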
Supplementary Table 1 shows both the number of surveys in the database and the number used for modeling from each data source. Meanwhile, Supplementary Table 2 shows the number of survey data points excluded for failing to meet inclusion criteria. Surveys included in the database are inconsistent in the questions posed to households about cooking (typical questions by survey source are included in Supplementary Table 1 ). Most survey questions focus on the main or primary type of cooking fuel or energy rather than the cooking device, and thus the database version included in this study does not contain comprehensive data on solid fuel stove type (e.g., forced draft, brand information). Almost all surveys assess only the primary, or main, cooking fuel or energy source, which constrains the analysis to the primary fuel and technology used for cooking, although it is well documented that households often “stove-stack”, or use multiple stoves and/or fuels 17 , 18 , 19 . Most surveys report the percentage of respondents mainly using each fuel separately for urban and rural areas. The definitions of urban and rural may vary by country, and we adopt these reported values directly rather than applying any standard definition of urban and rural. The WHO Household Energy Database contains data on the proportion of households mainly using a wide variety of cooking fuels, including alcohol fuels (e.g., ethanol), biogas, charcoal, coal, crop residues, dung, electricity, kerosene, liquid petroleum gas (LPG), natural gas, solar energy, and wood. However, surveys are not always consistent in the fuel options they present to respondents. In particular, some surveys combine fuels into a single option (notably natural gas and LPG are often combined into the category “gas”). The result of this is that the time series of survey data for certain individual fuels can be unstable or unreliable in some countries.
Where appropriate in terms of similarity of health impacts, and relevance to policymakers, these issues can be remedied by combining affected fuels into a single category for modeling purposes. Here, we combine wood, crop residues, and dung into the category “biomass”, representing the combined use of unprocessed/raw biomass fuels, and we combine LPG, natural gas and biogas into the category “gas”—refer to Fig. 1 for a visual representation of these categories. Although solar and ethanol are considered clean fuels, they have been included under the category “other fuels”, due to the sparse number of data points available for these fuels (105 total data points for solar energy ranging between 0 and 0.8%; seven total data points for ethanol ranging between 0 and 0.14%). We therefore estimate the population mainly using six fuel types: biomass, charcoal, coal, kerosene, gas, and electricity. A final category, “other fuels”, represents the aggregate use of minor clean fuel types, e.g., solar and ethanol. Estimates for overall “polluting” and overall “clean” fuel use are then derived by aggregating estimates of relevant fuel types. “Other fuels” were not modeled individually but are included in the aggregate “clean” category. The global household energy model Previous statistical models for estimating fuel use have focused on a single variable, i.e., solid fuel use or polluting fuel use 9 , 20 . Instead, we sought to model how a strongly related set of variables (the proportion of the population using each individual fuel type) changes over time, under the key constraint that as the use of one fuel increases the sum of the others must decrease, so that the total never exceeds 100%.
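Mechanically, a constraint of this kind can be enforced with a softmax (multinomial-logit) link that maps unconstrained latent trends to proportions that are positive and sum exactly to one, so that a rise in one fuel's share necessarily shrinks the others. This is only a generic illustration of the compositional constraint, with invented scores; the full GHEM additionally handles survey variability, spline trends, and uncertainty quantification:

```python
import math

def softmax(scores):
    """Map unconstrained per-fuel scores to proportions that are all
    positive and sum to 1: the compositional constraint described in
    the text."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical latent trend values for six fuels in one country-year.
scores = [1.2, 0.4, -1.0, -2.0, 0.1, -0.5]
shares = softmax(scores)
```

Increasing any one score raises that fuel's share and lowers all the others, which is exactly the trade-off the text describes.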
No standard statistical procedure is available to achieve this while also properly quantifying the uncertainty associated with estimates for each fuel, which merited the development of the bespoke Global Household Energy Model 11 (GHEM), a state-of-the-art Bayesian hierarchical approach 21 to jointly estimating the use of individual fuels for cooking. Trends in the proportions using each fuel type are modeled together for both urban and rural areas of each country using smooth functions of time (thin-plate splines) as the only covariate. Estimates produced by the model are realistic in the sense that, for each country, urban, rural, and overall fuel use is linked by estimates of the survey sample urban proportion (including for years without surveys), also based on smooth functions of time. The model outputs Bayesian “posterior” probability distributions for fuel use in a given year and country, which can be used to answer questions like “What is the probability that the use of coal exceeds 10% in urban areas of Mongolia?”. For reporting purposes, summaries of these distributions can be taken to provide both point estimates (e.g., means or medians, the latter being what we present here and in the Supplementary Information/Data) and measures of uncertainty (e.g., 95% prediction intervals (PIs)—which mean there is a 95% probability that fuel use lies within the given range). Here, we use the term “uncertainty interval” to describe central 95% posterior credible/prediction intervals. GHEM is implemented using custom code (fully provided in Supplementary Software 1 ) in the R programming language (version 4.0.0) and the NIMBLE 22 software package (version 0.10.1) for Bayesian statistical modeling with Markov chain Monte Carlo (MCMC). 
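The posterior reporting just described (medians, central 95% intervals, and probability statements such as the Mongolia coal example) reduces to simple operations over MCMC draws. A sketch with invented draws, not real model output, using crude order-statistic quantiles:

```python
def summarise(samples, level=0.95):
    """Posterior median and central credible interval from a list of
    posterior draws, via simple order statistics (a rough sketch of
    the reporting step, not a production quantile estimator)."""
    s = sorted(samples)
    n = len(s)
    lo = s[int((1 - level) / 2 * (n - 1))]
    hi = s[int((1 + level) / 2 * (n - 1))]
    median = (s[(n - 1) // 2] + s[n // 2]) / 2
    return lo, median, hi

def probability_exceeds(samples, threshold):
    """Answer questions like 'what is the probability that coal use
    exceeds 10%?' as the fraction of draws above the threshold."""
    return sum(1 for x in samples if x > threshold) / len(samples)

# 200 hypothetical posterior draws for one fuel share.
draws = [i / 1000 for i in range(1, 201)]   # 0.001 .. 0.200
lo, med, hi = summarise(draws)
p = probability_exceeds(draws, 0.10)        # fraction of draws above 10%
```

With these uniform dummy draws, half the mass lies above 0.10, so the exceedance probability comes out at 0.5; on real model output the draws would come from the MCMC sampler instead.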
We also used the following R packages for our analysis: abind (1.4-5); coda (0.19-3); doParallel (1.0.15); ggfan (0.1.3); ggplot2 (3.3.0); grid (4.0.0); gridExtra (2.3); mgcv (1.8-33); openxlsx (4.1.5); RColorBrewer (1.1-2); readxl (1.3.1); reshape2 (1.4.4); rgdal (1.5-16); scales (1.1.0); and tidyverse (1.3.0). The version of GHEM used for this analysis differs from the previously published version 11 in that no regional structures were assumed a priori. Non-informative prior distributions were assumed for all model parameters 11 . We ran four MCMC chains from distinct randomly generated sets of initial values, using different random number generator seeds for each chain. We ran the chains for 80,000 iterations, discarding the first 40,000 from each chain as “burn-in” and then thinning by a factor of 40 to reduce system memory usage. The result is a total of 4000 posterior samples for each model parameter, which are used to calculate posterior medians and central 95% posterior credible/prediction intervals. The probability distributions assumed for input survey data do not allow for inputs where the sum of the percentage mainly using all mutually exclusive fuel categories exceeds 100% (110 surveys, with a median total excess of 0.01%), which can occur due to rounding at different stages of data collection. For these surveys, fuel use values were uniformly scaled (divided by the sum of mutually exclusive categories), to have a total of 100%. Countries classified as high-income according to the World Bank country classification 23 (60 countries) are assumed to have fully transitioned to clean household energy and are reported as >95% access to clean fuels and technologies 1 . In addition, no estimates are provided for LMICs where no surveys were available or suitable for modeling post-1990 (Bulgaria, Cuba, Lebanon, and Libya). 
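The uniform rescaling applied to surveys whose mutually exclusive categories sum to slightly more than 100% can be sketched as follows (the survey values are illustrative; the function name is our own, not from the published code):

```python
def rescale_to_100(fuel_pcts):
    """Uniformly scale mutually exclusive fuel categories to sum to 100%.

    Each value is divided by the current total, mirroring the paper's
    description of dividing by the sum of mutually exclusive categories.
    """
    total = sum(fuel_pcts.values())
    return {fuel: 100.0 * pct / total for fuel, pct in fuel_pcts.items()}

# Example survey with a small rounding excess (total = 100.02%).
survey = {"biomass": 60.02, "charcoal": 20.0, "gas": 15.0, "electricity": 5.0}
scaled = rescale_to_100(survey)
```

The thinning arithmetic in the text is also easy to verify: 4 chains x (80,000 - 40,000 burn-in) / 40 thinning = 4 x 1,000 = 4,000 retained posterior samples.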
Modeled estimates for the use of overall clean, overall polluting and specific fuels are therefore provided for a total of 130 countries—128 LMICs plus two countries with no World Bank income classification (Cook Islands and Niue). Population data from the United Nations Population Division (2019 version) were used to derive the population-weighted regional and global aggregates. We present aggregate estimates for the eight SDG regions, as well as for the six WHO regions. LMICs without suitable survey data were excluded from all regional calculations and high-income countries were excluded from regional calculations for specific fuels—this means our regional estimates for specific fuels (e.g., gas) refer only to LMICs in those regions. Values of 100% clean fuel use were used for high income countries when calculating regional aggregates of clean and polluting fuel use. Future projections We also project observed trends in fuel use into the future using GHEM. These future projections were developed by extrapolating observed trends, representing a “business-as-usual” scenario assuming no new policies or interventions. The degree of uncertainty associated with such projections depends on a number of factors which vary by country, including the number of surveys conducted near present day and how changeable the trends are estimated to be over the available data period (1990–2018)—for example, projections for a country where trends are linear may display less uncertainty than a country with sudden changes in fuel use (e.g., Indonesia). The model has been validated 11 for making fuel use predictions up to 5 years beyond the end year of the data. Hence for years close to the end of the data period (e.g., 2019, 2020, 2021), point estimates and 95% prediction intervals can be interpreted as predictions of what may happen based on trends in the data. 
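The population-weighted aggregation of country estimates into regional figures can be illustrated with a short sketch. The countries and values are hypothetical; in the analysis the weights come from the UN Population Division data described above:

```python
# Hypothetical country-level clean-fuel-use estimates and populations.
countries = [
    {"name": "Country A", "clean_pct": 20.0, "population": 50_000_000},
    {"name": "Country B", "clean_pct": 80.0, "population": 10_000_000},
    {"name": "Country C", "clean_pct": 50.0, "population": 40_000_000},
]

# Regional estimate = population-weighted average of country estimates.
total_pop = sum(c["population"] for c in countries)
regional_clean_pct = sum(
    c["clean_pct"] * c["population"] for c in countries
) / total_pop

print(regional_clean_pct)  # 38.0
```

Note how the most populous country dominates the regional figure, which is why excluding countries without suitable survey data (as described above) matters for interpreting regional aggregates.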
Further into the future, uncertainty tends to grow beyond practical levels but point estimates remain useful for policy purposes with a specific interpretation: what may happen if observed trends continue and no new policies or interventions are introduced. Health impacts Our estimates of the populations mainly using polluting fuels for cooking are used by the WHO to estimate the global burden of disease from household air pollution 3 . Future WHO burden of disease estimates are anticipated to be calculated based on estimated populations mainly using specific fuels and technologies for cooking. Other institutions have also developed burden of disease estimates for household air pollution based on cooking fuels, all with varying results but ultimately telling the same message: millions of premature deaths annually and hundreds of millions of years of healthy life lost due to exposure to household air pollution 24 , 25 , 26 . Disclaimer The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability To reproduce the results in this study, the relevant version of the WHO Household Energy database from the 30th of January 2020 is available from the corresponding author or directly from the WHO (householdenergy@who.int) on reasonable request, on the condition that the correspondence states what the data set will be used for. The data generated in this study are provided as Supplementary Data. Future updates to household cooking fuel estimates will be posted to the WHO Global Health Observatory data repository ( ). Code availability Custom R code (tested using R version 4.0.0) to reproduce the results in this study is provided for download as Supplementary Software 1 . 
Running the code may require up to 64 gigabytes of system memory. Please note that the analysis relies on Markov chain Monte Carlo and Monte Carlo simulation methods, which are both stochastic in nature. This means that figures and quoted statistics may differ slightly each time the code is executed.
In 2030, almost 1 in 3 people around the world will still be mainly using polluting cooking fuels and technologies, a major source of disease and environmental damage, new research warned. This rises to more than 4 in 5 in sub-Saharan Africa, where the number of people mainly using polluting fuels is growing at an alarming rate. A new study, carried out by U.K. researchers and the World Health Organization (WHO), has estimated that just under 3 billion people worldwide—including more than 1 billion in sub-Saharan Africa—will still mainly be using polluting fuels such as wood and charcoal at the end of the decade. These dirty fuels are a source of major health risks as they produce high levels of household air pollution—chronic exposure to which increases the risk of heart disease, pneumonia, lung cancer and stroke, among others. While the overall percentage of the global population mainly using polluting cooking fuels has been steadily decreasing since 1990, this trend is already showing signs of stagnation. Six in ten people in rural areas are still reliant on biomass fuels such as wood and charcoal. Reports by the WHO and others have attributed millions of deaths per year to household air pollution from these fuels—comparable to the death toll from outdoor air pollution. At the same time, fuel collection often falls to women and children, reducing opportunities for education or income generation. Polluting fuels are also an important cause of environmental degradation and climate change, with black carbon from residential biomass cooking estimated to account for 25 percent of anthropogenic global black carbon emissions each year. The researchers insist the pivotal new study shows that, although progress has been made, the quest to deliver universal access to clean cooking by 2030 is "far off track." 
They believe that global leaders and policy makers need to make significant advances in the near future to help combat the health and environmental risks of household air pollution. The study is published in Nature Communications on October 4, 2021. The lead author of the study, Dr. Oliver Stoner, who carried out the research at the University of Exeter but is now at the University of Glasgow, said: "Analyzing global trends suggests incremental progress in the direction of clean cooking fuels, but the simple reality is that there can be no global success while the number of people using polluting fuels in sub-Saharan Africa grows by 10s of millions every year." Heather Adair-Rohani, technical lead on Health and Energy in the Department of Environment, Climate Change and Health at the WHO headquarters in Geneva, and a senior author on the study, stressed the importance of tackling the root causes of household air pollution: "Accelerating access to clean cooking solutions must be a developmental priority. Ensuring the sustained adoption of clean cooking solutions can prevent disease and improve the livelihoods of the poorest populations as well as protect our climate." The crucial need to provide access to clean cooking globally was enshrined in the 2030 Agenda for Sustainable Development, adopted by all United Nations member states, as one of three targets for Sustainable Development Goal (SDG) 7, to "ensure access to affordable, reliable, sustainable and modern energy." As part of its mandate to monitor and inform policy towards this goal, WHO publishes estimates of exposure to household air pollution (HAP) and related disease burdens, which have traditionally examined use of polluting fuels as a group, without distinguishing between the different fuels used. For the new study, the researchers used sophisticated modeling combined with increasingly detailed household survey data to give a more accurate portrayal of the extent to which polluting cooking fuels are still used. 
The research provides comprehensive and reliable estimates for the use of six types of fuel—electricity, gaseous fuels, kerosene, biomass, charcoal, coal—as well as overall clean and polluting fuel use from 1990 to 2020, and subsequent predictions up until 2030. Together with the article, all estimates are published open access, to enable a new wave of research and policy aimed at tackling household air pollution. Among the research findings are: The absolute number of people using polluting fuels has deviated little from 3 billion over the last three decades. Projections show that 2.7 billion people—just under 1 in 3—will continue to mainly rely on polluting cooking fuels in 2030. Sub-Saharan Africa is now the largest regional population mainly using polluting fuels for cooking, expected to rise above 1 billion people in the next five years under a business-as-usual scenario. Charcoal has become the most popular fuel in urban sub-Saharan Africa. Dr. Stoner added: "While our analysis already paints a bleak picture, we don't yet know the full extent to which the COVID-19 pandemic has threatened or even undone recent progress." "Household cooking fuel estimates at global and country level for 1990 to 2030," is published in Nature Communications on October 4, 2021.
10.1038/s41467-021-26036-x
Medicine
How Roquin controls the activity of immune cells
Gesine Behrens et al, Disrupting Roquin-1 interaction with Regnase-1 induces autoimmunity and enhances antitumor responses, Nature Immunology (2021). DOI: 10.1038/s41590-021-01064-3 Journal information: Nature Immunology
http://dx.doi.org/10.1038/s41590-021-01064-3
https://medicalxpress.com/news/2021-11-roquin-immune-cells.html
Abstract Roquin and Regnase-1 proteins bind and post-transcriptionally regulate proinflammatory target messenger RNAs to maintain immune homeostasis. Either the sanroque mutation in Roquin-1 or loss of Regnase-1 causes systemic lupus erythematosus-like phenotypes. Analyzing mice with T cells that lack expression of Roquin-1, its paralog Roquin-2 and Regnase-1 proteins, we detect overlapping or unique phenotypes by comparing individual and combined inactivation. These comprised spontaneous activation, metabolic reprogramming and persistence of T cells leading to autoimmunity. Here, we define an interaction surface in Roquin-1 for binding to Regnase-1 that included the sanroque residue. Mutations in Roquin-1 impairing this interaction and cooperative regulation of targets induced T follicular helper cells, germinal center B cells and autoantibody formation. These mutations also improved the functionality of tumor-specific T cells by promoting their accumulation in the tumor and reducing expression of exhaustion markers. Our data reveal the physical interaction of Roquin-1 with Regnase-1 as a hub to control self-reactivity and effector functions in immune cell therapies. Main Post-transcriptional control of mRNA stability or translation through RNA-binding proteins (RBPs) represents an important level of gene regulation with crucial impact on immune cell fate decisions. This role becomes evident from combined genetic inactivation of alleles encoding for the Roquin-1 and Roquin-2 proteins with redundant functions in mice or in a human patient who developed a severe hyperinflammatory syndrome due to a homozygous nonsense mutation in the RC3H1 gene encoding ROQUIN-1 (refs. 1 , 2 , 3 ). Moreover, a single amino acid exchange (M199R), called sanroque mutation, in the murine Roquin-1 protein causes lupus-like autoimmunity. 
Regnase-1-deficient mice exhibit a comparable autoimmune phenotype with activated CD4 + and CD8 + T cells, accumulation of plasma cells, hypergammaglobulinemia and autoantibody production 4 , 5 . In response to T cell activation, Roquin-1, its paralog Roquin-2 and Regnase-1 are similarly regulated by MALT1-dependent proteolytic cleavage 5 , 6 . All three RBPs share a number of mRNA targets 3 , 5 , 6 , 7 , suggesting potential cooperation 6 , 8 . Roquin-1/2 proteins repress the expression of Regnase-1 (refs. 6 , 8 ). Mapping of binding sites of overexpressed Regnase-1 crosslinked to cellular mRNAs revealed the sequence determinants of Roquin-recognized stem loops of the constitutive decay element (CDE) 7 , 9 . Despite an extensive overlap in phenotypes and regulation of these RBPs, they have different functions. Roquin-1 interacts with components of the mRNA deadenylation and decapping machinery 9 , 10 , 11 , whereas Regnase-1 endonuclease cleaves target mRNAs 4 , 7 , 12 . Because Roquin-1 and Regnase-1 were found enriched in P bodies and at the endoplasmic reticulum (ER), respectively and differed in their requirements for regulation of reporter mRNAs, it was proposed that Roquin-1/2 and Regnase-1 proteins function in a compartmentalized manner independently of each other 7 , 13 . Owing to the prominent humoral autoimmunity occurring in mice with the sanroque mutation or Regnase-1 inactivation 4 , 14 and recapitulation of hallmark phenotypes by T cell-specific deletion of Roquin-1/2 or Regnase-1 (refs. 3 , 5 ), previous studies mostly focused on CD4 + helper T cells. Nevertheless, the ubiquitous expression of Roquin-1/2 and the prevalence of individual Regnase-1/2/3/4 paralogs suggest an importance of both RBP families in many types of cells and fate decisions 15 , 16 , 17 . 
More recently, inactivation of Regnase-1 in tumor-specific CD8 + T cells or chimeric antigen receptor (CAR)-T cells resulted in increased antitumor responses 18 , 19 , whereas an involvement of Roquin paralogs in CD8 + effector T cell responses has not yet been studied. In the context of defining tumor antigen-specific T cell antigen receptors (TCRs) or CARs, current efforts also try to modulate tumor-specific T cells. The aim is to bolster activation, prevent regulatory T (T reg ) cell-induced suppression, reprogram metabolism or break tumor-imposed exhaustion and make adoptive cell therapies (ACTs) efficient for different blood cancers as well as solid tumors. Here we explore how the interaction of Roquin-1 with Regnase-1 affects peripheral tolerance and how this program can be used in ACT. We show that both proteins bind to each other in a ternary complex on RNA. This interaction was important for the regulation of shared targets and controlled CD4 + and CD8 + T cell quiescence, metabolic programs, T cell activation, differentiation and effector functions. Weakening the physical interaction of Roquin-1 with Regnase-1 by introducing mutations into the mouse germline caused humoral autoimmunity but led to enhanced responses of cytotoxic T cells directed toward tumor-expressed antigens in the tumor setting. Results Roquin-1/2 and Regnase-1 maintain quiescence of T cells To address the functional relationship of Roquin-1/2 and Regnase-1 proteins we analyzed mice with T cell-specific deletion of the genes encoding Regnase-1 (ref. 20 ) ( Zc3h12a fl/fl ; Cd4- Cre, termed KO T ), Roquin-1 and Roquin-2 (refs. 3 , 21 ) ( Rc3h1 fl/fl ; Rc3h2 fl/fl ; Cd4- Cre, termed DKO T ) or a combination of all three genes ( Zc3h12a fl/fl ; Rc3h1 fl/fl ; Rc3h2 fl/fl ; Cd4- Cre, termed TKO T ). CD4 + and CD8 + T cells from all mutant mice showed a spontaneous reduction of naive T cells (CD62L + CD44 lo ) and an increase in effector memory T cells (CD62L – CD44 hi ) (Fig. 1a,b ). 
Accumulation of effector CD4 + and CD8 + T cells was not due to a reduction in T reg cells, which instead increased in frequencies in all knockout mice and also in numbers in KO T and TKO T mice (Fig. 1c,d ). Reconstituting lethally irradiated CD45.1/2 congenic mice with mixtures of wild-type (CD45.1) and TKO T (CD45.2) bone marrow (Extended Data Fig. 1a ) revealed that the increase in peripheral T reg cells (Fig. 1c,d ) was not cell intrinsic, as we found comparable frequencies of wild-type and TKO T T reg cells in mixed-bone-marrow chimeric mice that were increased compared to wild-type/wild-type chimeric mice (Extended Data Fig. 1b ). The increased T reg cell frequencies in TKO T but also KO T and DKO T mice presumably occurred secondary to tissue inflammation 3 , 5 , 6 , 22 , consistent with the observed infiltration of leukocytes into the lung of mutant mice (Extended Data Fig. 1c ). We then asked whether the observed activation of conventional CD4 + and CD8 + T cells also occurred in the presence of wild-type T reg cells. We therefore generated mice in which deletion of Roquin-1/2- and Regnase-1-encoding genes, individually or in combination, can be induced by tamoxifen treatment using the Cre-ERt2 transgene 23 . We adoptively transferred CD3 + T cells (CD45.2) from iKO ( Zc3h12a fl/fl ;Cre-ERt2), iDKO ( Rc3h1/2 fl/fl ;Cre-ERt2) or iTKO ( Rc3h1/2 fl/fl ; Zc3h12a fl/fl ;Cre-ERt2) mice into congenic (CD45.1) wild-type hosts that were then treated with tamoxifen to acutely delete Roquin-1/2- and Regnase-1-encoding alleles (Fig. 1e ). CD45.2 + T cells on day 8 after transfer showed increased frequencies of CD4 + and CD8 + T cells for all knockouts (Fig. 1f ). We determined a breakdown of quiescence in knockout CD4 + and CD8 + T cells by increased effector-memory phenotypes (Extended Data Fig. 1d ) and enhanced proliferation of these cells indicated by Ki67 staining (Fig. 1g and Extended Data Fig. 1e ). 
These effects were observed in all knockouts but were typically more pronounced upon combined inactivation of Roquin-1/2 and Regnase-1 in iTKO T cells. We employed extracellular flux (Seahorse) analyses to determine metabolic reprogramming, a hallmark of T cell activation and functional adaptation, addressing whether inactivation of the different RBPs alters metabolic pathways. During restimulation of CD4 + T cells after in vitro deletion by 4′-OH-tamoxifen treatment we found increased glycolytic activities in all three knockouts compared to wild-type T cells, even under glucose-deprived conditions at baseline. Extracellular acidification rates (ECARs) were even more elevated upon glucose addition and glycolytic capacities were higher in all three knockouts, with the strongest effects in iTKO T cells (Fig. 1h and Extended Data Fig. 2a–c ). Mitochondrial respiration was also affected by Roquin-1/2 and Regnase-1 deficiencies and, at baseline, oxygen consumption rates (OCRs) were elevated in all knockouts, with the highest increase in iTKO T cells. OCR changes were more pronounced than ECAR changes, suggesting that knockout T cells meet their energetic demands mainly through OXPHOS (Fig. 1h and Extended Data Fig. 2d,e ). Accordingly, maximal respiration rates and respiratory spare capacities were increased in all knockout genotypes but were highest in iTKO T cells (Extended Data Fig. 2d,e ). With regard to mitochondrial respiration, Regnase-1 deficiency contributed more strongly to the deregulation of metabolism than Roquin-1/2 deficiency. Similar metabolic reprogramming was also observed for knockout CD8 + T cell cultures; however, iTKO T CD8 + T cells showed variable effects in glycolytic tests, especially in mitochondrial stress tests (Extended Data Fig. 2f,g and Supplementary Table 1 ). 
These data show that Roquin-1/2 and Regnase-1 inactivation leads to a general metabolic reprogramming with increased energy generation from mitochondrial respiration and enhanced glycolytic capacity. Fig. 1: Roquin-1/2 and Regnase-1 maintain quiescence of T cells. a – d , Flow cytometry analysis of CD4 + ( a ), CD8 + ( b ) subpopulations (WT, n = 9; DKO T and KO T , n = 6; TKO T , n = 5 mice analyzed in at least three independent experiments) and T reg cells ( c , d ) (WT, n = 15; DKO T , n = 9; KO T , n = 6; TKO T , n = 10 mice analyzed in at least three independent experiments) from spleens of 6–8-week-old WT, DKO T , KO T and TKO T mice. CM, central memory; EM, effector memory; WT, wild type. e , CD45.2 + CD3 + T cells from WT, iDKO, iKO and iTKO mice were adoptively transferred into WT CD45.1 + mice. Mice were treated with tamoxifen by oral gavage to induce deletion of floxed alleles. f , g , Frequency ( f ) and proliferation ( g ) of CD45.2 + , CD4 + and CD45.2 + , CD8 + T cells were analyzed by flow cytometry on day 8 after transfer ( n = 6 biological replicates). h , i , CD4 + T cells from WT, iDKO, iKO and iTKO mice were treated with 4ʹ-OH-tamoxifen in vitro to induce deletion of floxed alleles. T cells were kept under type 1 helper T cell conditions and expanded with IL-2-containing medium for 2 d. IL-2 was withdrawn overnight followed by restimulation with anti-CD3/28 before glycolytic ( h ) and mitochondrial stress testing ( i ) ( n = 3 independent experiments). 2-DG, 2-deoxyglucose; Rot, rotenone; AA, antimycin. Data are presented as mean ± s.e.m., analyzed by one-way analysis of variance (ANOVA) with Bonferroni ( d ) or Dunnett’s ( f , g ) post hoc test. 
Roquin-1/2 and Regnase-1 control humoral autoimmunity Cd4 -Cre-mediated deletion of Roquin-1/2 encoding alleles has been associated with follicular helper T (T FH ) cell (PD-1 hi Cxcr5 hi , Bcl6 + ) accumulation 3 , which we also detected in KO T and even more strongly in TKO T mice (Fig. 2a ). In addition, all mutant mice showed an accumulation of germinal center (GC) B cells (GL7 + CD95 + ) (Fig. 2b ). We combined floxed alleles with an inducible Cd4 -Cre-ERt2 knock in allele 24 , 25 and adoptively transferred naive CD4 + T cells (CD45.2 + ) from wild-type, iKO, iDKO and iTKO mice into congenic (CD45.1 + ) wild-type mice that were then treated with tamoxifen (Fig. 2c and Extended Data Fig. 2h ). Knockout CD4 + T cells spontaneously differentiated into T FH cells in vivo 6 d after acute gene deletion (Fig. 2d,e ). Different from non-inducible deletion, the inducible inactivation of Regnase-1 (iKO T ) showed strongest T FH cell accumulation compared to iTKO and iDKO, suggesting an advantage of this genotype in the adoptive transfer. Indeed, at a late time point, 7 weeks after transfer, we observed significantly increased numbers of iKO T cells compared to wild-type and iDKO T cells, which also occurred, albeit to a lesser extent, in iTKO T cells (Fig. 2f ). Considering intermediate frequencies of dividing iKO CD4 + T cells compared to iDKO and iTKO genotypes early after transfer (Fig. 1f ), this finding suggested that Regnase-1 deficiency promotes survival of T cells. Associated with their increased persistence, iKO T cells were capable of inducing autoimmunity in wild-type host mice within 7 weeks (Fig. 2g,h ). Transfer of iKO T cells into congenic hosts resulted in accumulation of GC B cells and plasma cells (Fig. 2g and Extended Data Fig. 2i,j ) and induced anti-nuclear antibodies (ANAs) in the serum of recipient mice (Fig. 2h ). 
This phenotype was consistent with the appearance of autoantibodies in the serum of 6–8-week-old KO T and TKO T (but not DKO T ) mice (Fig. 2i and Supplementary Table 1 ). Together, these data demonstrate that the autoimmunity associated with the absence of Roquin-1/2 and Regnase-1 genes is caused by deviation of helper T cell functions, because autoimmunity can be transferred with CD4 + T cells, develops in the presence of wild-type T reg cells and originates from a normal T cell receptor repertoire. Fig. 2: Roquin-1/2 and Regnase-1 control T FH differentiation and humoral autoimmunity. a , b , Flow cytometry analysis of T FH ( a ) and GC B cell ( b ) subpopulations from spleens of 6–8-week-old WT, DKO T , KO T and TKO T mice (WT, n = 14; DKO T , n = 8; KO T , n = 4; TKO T , n = 12 analyzed mice in at least three independent experiments). c , Naive CD45.2 + CD4 + T cells from WT, iDKO, iKO and iTKO mice were adoptively transferred into congenic WT CD45.1 mice. Mice were treated with tamoxifen by oral gavage to induce deletion of floxed alleles. d , e , Adoptively transferred T cells were analyzed by flow cytometry for markers of T FH cell differentiation on day 8 after transfer ( n = 6 biological replicates). f – h , Frequencies of adoptively transferred WT, iDKO, iKO and iTKO CD45.2 + CD4 + T cells ( f ), frequencies of recipient GC B cells ( g ) as well as levels of ANAs in the sera of recipient mice ( h ) were determined on day 49 after transfer (WT, n = 6; iDKO, n = 5; iKO, n = 4; iTKO, n = 6). i , ANAs in the serum of 6–8-week-old WT, DKO T , KO T and TKO T mice (WT, n = 7; DKO T , KO T and TKO T , n = 5 analyzed mice in three independent experiments). All data are presented as mean ± s.e.m. Statistical significance was calculated by one-way ANOVA with Bonferroni post hoc test ( a , b ) or Dunnett’s post hoc test ( e – i ). 
Roquin-1/2 and Regnase-1 control CD8 + T cell functions CRISPR–Cas9-mediated inactivation of Regnase-1 yielded improved antitumor responses of adoptively transferred CD8 + or CAR-T cells 18 , 19 . We therefore analyzed CD8 + T cell phenotypes after Cd4 -Cre-mediated conditional deletion of the different RBPs (Fig. 3 and Extended Data Fig. 3 ). The majority of Roquin-1/2-deficient DKO T CD8 + T cells adopted a short-lived effector cell (SLEC) phenotype with upregulated KLRG1 (Fig. 3a,b ) and downregulated TCF-1 expression (Fig. 3c,d ). Fewer Regnase-1-deficient (KO T ) CD8 + T cells showed increased KLRG1 expression, with a majority of cells maintaining high TCF-1 expression (Fig. 3a–d ). This finding is consistent with a previous report on retained TCF-1 expression after CRISPR–Cas9-mediated inactivation of Regnase-1 in CAR-T cells 19 . Notably, the TKO T genotype was similar to either the Regnase-1 or the Roquin-1/2 deficiency, as it moderately increased KLRG1 and strongly reduced TCF-1 expression (Fig. 3a–d ). Analyzing additional markers of activation, stemness and exhaustion 26 , 27 (Extended Data Fig. 3a ), we found increased expression of ICOS, CTLA-4 and CD38 in all knockouts. Regnase-1 deficiency increased CXCR5, whereas Roquin-1/2 deficiency induced PD-1 and Tim3 expression (Extended Data Fig. 3a ). All knockout T cells showed elevated expression of the transcription factor BATF (Extended Data Fig. 3b ), as reported before for inactivation of Regnase-1 (ref. 18 ). CD8 + T cells from all genotypes showed the capacity to produce tumor necrosis factor (TNF) upon ex vivo stimulation, but only knockout T cells simultaneously produced TNF and interferon (IFN)-γ (Fig. 3e and Extended Data Fig. 3c ). Only CD8 + T cells with Regnase-1 deficiency (KO T or TKO T ) acquired an enhanced ability to produce interleukin (IL)-2 (Fig. 3f and Extended Data Fig. 
3d ), whereas all knockout T cells had increased granzyme B expression compared to wild-type counterparts (Fig. 3g and Extended Data Fig. 3e ). To quantify cytotoxicity in vitro, we redirected polyclonal CD8 + T cells toward P815 mastocytoma tumor cells in the presence of anti-CD3 (Fig. 3h ). All knockout T cells showed enhanced killing in a chromium-release assay, but we observed strongest effects for Roquin-1/2 (DKO T ) compared to Regnase-1 deficiencies (KO T ) and intermediate effects were seen for TKO T CD8 + T cells. We then addressed whether inactivation of Roquin-1/2 or Regnase-1 leads to increased CD8 + effector responses in a B16-OVA melanoma model by adoptively transferring CD8 + TCR-transgenic OT-I T cells into hosts that had received tumor cells 3 d before (Fig. 3i ). Of note, hosts that received PBS or wild-type OT-I T cells showed exponential tumor growth between days 7–21, whereas either Regnase-1 or Roquin-1/2-deficient OT-I T cells suppressed tumor formation resulting in delayed occurrence of measurable tumors and reduced affection rates of recipient mice (Fig. 3j ). Together these data show that Roquin-1/2 and Regnase-1 proteins also control shared cellular programs in cytotoxic T cells, but both RBPs have different contributions to the individual phenotypes (Supplementary Table 1 ). Fig. 3: Inactivation of Roquin-1/2 or Regnase-1 in CD8 + T cells enhances cytotoxicity. a – d , Flow cytometry analysis of splenic WT, DKO T , KO T and TKO T CD8 + T cells for KLRG1 and CD62L expression ( a ), frequencies of SLEC CD8 + T cells (WT, n = 11; iDKO, n = 10; iKO, n = 6; iTKO, n = 5 individual mice in three independent experiments) ( b ) and percentage of cells with TCF-1 downregulation (WT, n = 9; iDKO, n = 10; iKO, n = 6; iTKO, n = 6 individual mice in three independent experiments) ( c , d ). 
e , f , Quantification of intracellular cytokine staining of IFN-γ, TNF (WT, n = 9; iDKO, n = 9; iKO, n = 11; iTKO, n = 5 individual mice in three independent experiments) ( e ) and IL-2 (WT, n = 6; iDKO, n = 7; iKO, n = 8; iTKO, n = 5 individual mice in three independent experiments) ( f ) in CD8 + T cells after PMA/ionomycin stimulation for 4 h. NS, not significant. g , Quantification of granzyme B-positive CD8 + T cells (WT, n = 7; iDKO, n = 11; iKO, n = 8; iTKO, n = 9 individual mice in four independent experiments) in spleens from WT, DKO T , KO T and TKO T mice. h , Chromium-release assay of P815 cells cultivated in effector to target ratios as indicated 4 h after adding splenic CD8 + T cells isolated from WT, DKO T , KO T and TKO T mice ( n = 2 individual mice in one experiment). i , Schematic representation of the experimental set-up of the B16-OVA tumor model. j , B16-OVA tumor growth without transfer (PBS control n = 5) or after transfer of either WT ( n = 5), DKO T or KO T OT-I T cells ( n = 6 individual mice in one experiment). Data are presented as mean ± s.e.m. Statistical significance was calculated by one-way ANOVA with Dunnett’s post hoc test. Roquin-1 and Regnase-1 exhibit functional interaction To address post-transcriptional interaction, we tested the contribution of Roquin-1/2 and Regnase-1 to the regulation of ICOS, a well-described target of these RBPs 3 , 5 , 10 , 28 . We performed tamoxifen gavage of mice to acutely delete floxed alleles by Cd4 -Cre-ERt2 in vivo. Isolated naive CD4 + T cells were stimulated in vitro for 2 d with anti-CD3/CD28 under type 1 helper T (T H 1) cell conditions, as ICOS expression differs among helper T cell subsets 10 , 29 . Appropriate deletion was confirmed in immunoblots (Fig. 4a ). In wild-type T cells, Roquin-1, the much lower-expressed Roquin-2 and Regnase-1 proteins increased during days 1–2 and, consistent with TCR-induced MALT1 activity, accumulated as cleavage products. 
Upon removal of TCR stimulation (days 3–5), the full-length Roquin-1/2 and Regnase-1 proteins increased and the cleavage product of Regnase-1 disappeared. By contrast, cleaved Roquin-1 persisted, suggesting either a longer half-life or constitutive cleavage of this protein (Fig. 4a). Consistent with the Regnase-1-encoding Zc3h12a mRNA being a target of Roquin-1/2 (ref. 6), the Regnase-1 protein became strongly induced in iDKO T cells, whereas Roquin-1/2 expression was unchanged upon Regnase-1 inactivation (Fig. 4a), in contrast to results obtained in a human T cell line 22. ICOS protein expression on wild-type TH1 cells increased during stimulation (days 1–2) and returned to basal expression after stimulation (days 3–5). In iTKO compared to wild-type T cells, ICOS protein expression was strongly de-repressed at all time points, starting at the naive stage (day 0) (Fig. 4b). Notably, iTKO cells were almost unable to decrease ICOS protein expression after removal of the TCR stimulus (days 3–5) (Fig. 4b). Inactivation of either Roquin-1/2 (iDKO) or Regnase-1 (iKO) increased ICOS protein expression during (days 1–2) and after stimulation (days 3–5) to levels intermediate between wild-type and iTKO TH1 cells. Different from the iTKO genotype, iDKO and iKO TH1 cells were partially able to decrease ICOS protein after stimulation (Fig. 4b). We then tested whether Roquin-1 and Regnase-1 can compensate for the absence of each other and whether redundancy exists among Regnase paralogs (Fig. 4c). To address these questions, we reconstituted Regnase-1 (iKO)- or Roquin-1/2 (iDKO)-deficient CD4+ T cells with doxycycline-inducible, green fluorescent protein (GFP)-tagged Roquin-1 or Regnase-1, Regnase-2, Regnase-3 and Regnase-4 proteins (Extended Data Fig. 4a). In these assays we quantified endogenous ICOS and Regnase-1 expression on day 4 of T cell activation.
Notably, exogenous GFP–roquin-1 was readily able to decrease Regnase-1 or ICOS expression in iDKO T cells, but was only partially able to downregulate ICOS expression in Regnase-1-deficient (iKO) T cells (Fig. 4c). Endogenous Regnase-1 protein levels reflect the strong regulation of Zc3h12a mRNA by Roquin and Regnase-1 (Fig. 4a and refs. 6,30). To distinguish endogenous from overexpressed Regnase-1 protein, we altered the epitope within GFP–regnase-1 that is recognized by the monoclonal antibody used for detection (Extended Data Fig. 4b,c). While becoming invisible to the antibody, GFP–regnase-1 invis protein remained fully able to downregulate targets such as CTLA-4 (Extended Data Fig. 4d). The analysis of iKO T cells reconstituted with ectopic Regnase-1 invis, Regnase-2, Regnase-3 or Regnase-4 revealed that all four Regnase paralogs were able to downregulate ICOS expression in the absence of endogenous Regnase-1, with only Regnase-4 being slightly less efficient (Fig. 4c and Extended Data Fig. 4e). Ectopic expression of the same Regnase paralogs in Roquin-1/2-deficient T cells showed that none of Regnase-1 invis, Regnase-2, Regnase-3 or Regnase-4 was able to efficiently downregulate endogenous ICOS or Regnase-1 protein expression (Fig. 4c and Extended Data Fig. 4f). Therefore, full regulation of these shared targets required both Roquin-1/2 and Regnase-1 proteins.

Fig. 4: Functional definition of Roquin-1 interaction with Regnase-1. a, b, WT, iDKO, iKO and iTKO CD4+ T cells were deleted in vivo by tamoxifen gavage, activated in vitro with anti-CD3/CD28 under TH1 cell conditions (days 0–2) and cultivated in medium containing IL-2 (days 3–5). For each day, expression of Roquin-1/Roquin-2, Regnase-1 and GAPDH (a) or ICOS (b) was analyzed by immunoblot or flow cytometry, respectively. Asterisks mark MALT1 cleavage fragments of Roquin-1 and Regnase-1 (a). MFI, mean fluorescence intensity.
c, d, CD4+ T cells of the indicated genotypes were retrovirally transduced with GFP, GFP–roquin-1, GFP–regnase-1 invis, GFP–regnase-2, GFP–regnase-3 or GFP–regnase-4 (c), or with GFP, GFP–roquin-1 aa1-510 and ROQ mutants introduced into GFP–roquin-1 aa1-510 (d). Histograms of ICOS and Regnase-1 expression in GFP+ cells with indication of the respective geometric MFI (gMFI) values (c), or fold suppression of Regnase-1 expression in GFP+ cells transduced with the indicated construct relative to cells transduced with GFP only, calculated using gMFI (d). e, Structure of the Roquin-1 ROQ domain with a bound RNA stem loop marked in green, amino acid M199 marked in blue, amino acids essential for the Roquin-1–Regnase-1 functional interaction marked in orange and tested nonessential amino acids in yellow. f, g, WT (f) or iDKO (g) CD4+ T cells were retrovirally transduced with GFP, GFP–roquin-1 or the indicated GFP–roquin-1 mutants. Histograms of ICOS, Regnase-1 and Ox40 expression in GFP+ cells with indication of the respective gMFI. Data are representative of n = 3 independent experiments (a–c, f, g) and are presented as mean ± s.e.m. of n = 3 independent experiments (d).

Molecular determinants of Roquin cooperation with Regnase-1

To better understand the functional interaction in the regulation of mRNA targets by Roquin and Regnase-1, we searched for a minimal region in Roquin-1 that was able to regulate shared targets. Notably, in iDKO T cells the amino-terminal MALT1 cleavage product of Roquin-1 (Roquin-1 aa1-510) showed partial or almost full activity to downregulate ICOS or Regnase-1 expression, respectively, but was unable to repress Ox40 (Tnfrsf4), another well-known target 3,5,31 (Extended Data Fig. 5a–c). This truncated version of Roquin-1 exerted a slight dominant-negative effect on Regnase-1 and Ox40 protein expression when we induced its expression in wild-type T cells (Extended Data Fig. 5d).
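The fold-suppression readout of Fig. 4d reduces to simple arithmetic on gMFI values. A minimal sketch, assuming fold suppression is the gMFI of the GFP-only control divided by the gMFI of cells transduced with the repressor construct (the numbers below are illustrative, not measured values):

```python
def fold_suppression(gmfi_gfp_only: float, gmfi_construct: float) -> float:
    """Fold suppression of a target (e.g. Regnase-1) in GFP+ cells,
    expressed relative to the GFP-only control (assumed ratio of gMFIs)."""
    if gmfi_construct <= 0:
        raise ValueError("gMFI must be positive")
    return gmfi_gfp_only / gmfi_construct

# A construct that halves target expression relative to the GFP-only
# control yields 2-fold suppression; an inactive mutant stays near 1.
print(fold_suppression(1200.0, 600.0))   # → 2.0
print(fold_suppression(1200.0, 1150.0))  # ≈ 1.04
```

On this scale, ROQ domain mutants that abolish repression would be expected to approach a value of 1.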
Deletion analyses indicated that the core RNA-binding domain of Roquin-1 containing HEPN N/ROQ/HEPN C was sufficient to suppress Regnase-1 expression in iDKO T cells (Extended Data Fig. 5e). We then utilized the interdependent regulation of endogenous Regnase-1 as a readout to screen a set of Roquin-1 point mutants. We exchanged residues on the surface of the HEPN N/ROQ/HEPN C domains of Roquin-1 for mutations that alter physical properties but were predicted not to interfere with folding (Extended Data Fig. 6a) and found residues M199, E201, E202, L209, E212, D213, L217, F225 and D322 in the ROQ domain to be essential for Roquin-1 aa1-510-mediated repression of Regnase-1 (Fig. 4d). Projecting these residues onto the ROQ domain structure 32,33,34 revealed a site of potential interaction between Roquin-1 and Regnase-1 that is distinct from the RNA interaction surface (Fig. 4e). Notably, amino acid M199 of Roquin-1, which is mutated in sanroque mice, is part of this patch of residues in the ROQ domain (Fig. 4e), and overexpression of this mutant in wild-type T cells did not exhibit a dominant-negative effect on Regnase-1 or ICOS expression (Fig. 4f and Extended Data Fig. 6b). Instead, the M199R, L209Y and E212K Roquin-1 mutants in the full-length protein were impaired in downregulating ICOS and Regnase-1 in iDKO T cells at the protein and mRNA level (Fig. 4g and Extended Data Fig. 6c,d). However, all mutants were fully able to repress Ox40 (Fig. 4g and Extended Data Fig. 6c,d) and, similar to wild-type GFP–roquin-1 protein, localized to P bodies, identified by the P-body marker RNA helicase Rck 10 (Extended Data Fig. 6e). The sanroque mutation (M199R) as well as the newly identified L209Y and E212K mutations therefore create hypomorphic Roquin-1 variants with impaired co-regulation activity, together with Regnase-1, on the Icos and Regnase-1-encoding target mRNAs.
These mutations did not affect Ox40 regulation, consistent with regulation of this target mRNA being fully dependent on the carboxy terminus of Roquin-1 and its interaction with the CCR4-NOT complex 2. These data demonstrate that shared mRNA targets can be subject to full (Zc3h12a) or partial (Icos) cooperation as well as independent (Tnfrsf4) modes of regulation by Roquin-1 and Regnase-1.

Roquin and Regnase-1 form a ternary complex on RNA

To determine whether post-transcriptional co-regulation can be explained by direct interaction of Roquin-1 with Regnase-1, we performed co-immunoprecipitation and Förster resonance energy transfer (FRET) experiments. Roquin-1 could be co-immunoprecipitated with Regnase-1 from lysates of wild-type but not iKO CD4+ T cells (Extended Data Fig. 7a). To obtain spatial information on this interaction in living cells, we used fluorescence lifetime imaging (FLIM) in HeLa cells co-transfected with GFP–regnase-1 and mCherry–roquin-1. In these experiments, energy transfer from GFP to mCherry reduces the fluorescence lifetime of GFP, which can be used to quantify donor–acceptor interactions. Both fluorescent proteins localized diffusely in the cytoplasm and colocalized with the P-body marker Rck tagged with blue fluorescent protein (BFP) (Fig. 5a). We found enrichment of GFP–regnase-1 at the ER as reported earlier 7, but only in cells that were not co-transfected with Roquin-1 (Extended Data Fig. 7b). Moreover, sucrose gradient centrifugation of T cell extracts showed the majority of endogenous Regnase-1 co-migrating with Roquin-1 in fractions without monosomes or polysomes (Extended Data Fig. 7c). Notably, GFP–regnase-1 interacted with mCherry–roquin-1, as evidenced by the reduced fluorescence lifetime of GFP in the presence of mCherry–roquin-1 (Fig. 5a,c) but not in its absence (Fig. 5b,c). This energy transfer occurred mainly in P bodies but also dispersed in the cytoplasm.
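In FLIM-FRET, the efficiency follows directly from the shortening of the donor lifetime, E = 1 − τDA/τD. A minimal sketch; the GFP lifetime below is a hypothetical value, and only the 6.6% efficiency is taken from the paper's Fig. 5c:

```python
def fret_efficiency(tau_donor: float, tau_donor_acceptor: float) -> float:
    """FRET efficiency from fluorescence lifetimes: E = 1 - tau_DA / tau_D,
    where tau_D is the donor (GFP) lifetime alone and tau_DA its lifetime
    in the presence of the acceptor (mCherry)."""
    return 1.0 - tau_donor_acceptor / tau_donor

tau_d = 2.5                      # ns, hypothetical donor-only GFP lifetime
tau_da = tau_d * (1.0 - 0.066)   # lifetime shortened by 6.6% (cf. Fig. 5c)
print(round(fret_efficiency(tau_d, tau_da), 3))  # → 0.066
```

The measurement is ratiometric: the absolute donor lifetime cancels out, which is why FLIM-FRET is robust to fluorophore concentration.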
Expression of mCherry–roquin-1 aa1-510 also reduced the lifetime of GFP–regnase-1 fluorescence, albeit to a lesser extent (Fig. 5c), confirming that the amino terminus of Roquin-1 is sufficient for the interaction. As all four Regnase paralogs were able to downregulate ICOS expression in Regnase-1-deficient iKO T cells (Fig. 4c), we tested the PIN domain of Regnase-1, which is highly conserved among Regnase paralogs 35. The truncated GFP–regnase-1 aa112-297 protein localized to the cytoplasm and nucleus and, similar to truncated Roquin-1 aa1-510, was no longer enriched in P bodies (Extended Data Fig. 7d,e). The truncated GFP–regnase-1 aa112-297 also exhibited a reduction in fluorescence lifetime, mainly in the cytoplasm, where it colocalized with mCherry–roquin-1 aa1-510 (Extended Data Fig. 7d,e).

Fig. 5: Molecular determinants of Roquin-1 interactions with Regnase-1. a–c, HeLa cells were co-transfected with BFP–Rck and GFP–regnase-1 (GFP–reg-1) together with mCherry–roquin-1 (mCherry–roq-1), mCherry–roquin-1 aa1-510 or mCherry (a, c), or with GFP–regnase-1 alone (b). Protein localization (a, b), FRET efficiency (GFP–regnase-1 in combination with mCherry, 0.67%; mCherry–roq-1, 6.6%; or mCherry–roq-1 aa1-510, 3.91%) and lifetime of GFP fluorescence (c) were analyzed by fluorescence lifetime microscopy. d, HEK293T cells were transfected with HaloTag–regnase-1 in combination with the indicated NanoLuc–roquin-1 aa1-510 expression plasmids and the NanoBret ratio (mBRET) was calculated after measuring NanoLuc and HaloTag signals via NanoBret assay. e, SPR signals after addition of GST–regnase-1 aa1-452;D141N to Biacore-chip-immobilized Roquin-1 aa2-440. RU, resonance units. f, Competitive in vitro GST-pulldown experiment using GST–regnase-1 D141N and wild-type SUMO–roquin-1 aa2-440 in combination with the indicated (untagged) Roquin-1 aa2-440 double mutants.
Quantification of the eluted Roquin-1 aa2-440 mutant relative to SUMO–roquin-1 aa2-440 wild-type protein from the SDS–PAGE depicted in Extended Data Fig. 7f. g, EMSA using a Zc3h12a 3ʹ-UTR RNA fragment (nt194–212) and Roquin-1 aa2-440 (320 nM) in combination with increasing levels of GST–regnase-1 aa1-452;D141N. Presumably due to inclusion of unlabeled competitor RNA, recognition of the Zc3h12a mRNA stem loop by Regnase-1 could not be detected. Representative data of n = 3 independent experiments (a, b, e, g). Data are presented as mean ± s.e.m. of n = 3 independent experiments (c, d) or mean ± s.d. of n = 3 independent experiments (f). Statistical significance was calculated using one-way ANOVA with Bonferroni post hoc test (c, d) or Student's t-tests (f).

To finely map the structural details of this interaction, we introduced the identified mutations (Fig. 4d,g) into NanoLuc–roquin-1 aa1-510 fusion proteins and coexpressed them with HaloTag–regnase-1 to perform NanoBRET assays (Fig. 5d). Co-transfected cells were analyzed for energy transfer from nano-luciferase to the HaloTag ligand. We observed a BRET signal for wild-type Regnase-1 and Roquin-1 proteins similar to that of the positive controls (p53 and MDM2). Of note, single ROQ domain mutants of Roquin-1 (M199R, L209Y and E212K) as well as double mutants (M199R/L209Y or M199R/E212K) effectively reduced the BRET signal. Mutations that interfere with Roquin-1 binding to RNA (K220A/K239A/R260A) (ref. 32) did not reduce the interaction with Regnase-1. To confirm direct protein–protein interaction, we used surface plasmon resonance (SPR) with purified recombinant proteins. By immobilizing Roquin-1 aa2-440 on the surface of a Biacore chip and adding Regnase-1 aa1-452 protein in solution, we demonstrated formation of a stable binary Roquin-1–Regnase-1 complex of nanomolar affinity (K D = 417 nM) (Fig. 5e).
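For a simple 1:1 binding model, an affinity like this translates into fractional occupancy at equilibrium via f = [L]/([L] + KD). A sketch using the KD of 417 nM from Fig. 5e; the concentrations are arbitrary illustrations, not values from the paper:

```python
def fraction_bound(conc_nM: float, kd_nM: float) -> float:
    """Equilibrium fractional occupancy for 1:1 binding, f = [L]/([L] + Kd);
    assumes the soluble partner is in excess over the immobilized one."""
    return conc_nM / (conc_nM + kd_nM)

KD = 417.0  # nM, Roquin-1 aa2-440 binding Regnase-1 aa1-452 (SPR, Fig. 5e)
for conc in (100.0, 417.0, 2000.0):
    print(f"{conc:6.0f} nM -> {fraction_bound(conc, KD):.2f} bound")
# At [L] = Kd the complex is half-saturated (0.50).
```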
Next, we used a pulldown assay to quantify how the identified mutations in Roquin-1 affect direct binding of recombinant proteins. A larger, SUMO-tagged wild-type Roquin-1 protein was mixed with a shorter (untagged) version of Roquin-1 harboring different mutations. We quantified the ability of both proteins to compete for interaction with immobilized, GST-fused full-length Regnase-1 protein (Fig. 5f). The ratio of Regnase-1-bound untagged Roquin-1 mutants to SUMO-tagged wild-type Roquin-1 protein in these pulldown experiments revealed three- to fourfold weaker interactions for the M199R/L209Y or M199R/E212K Roquin-1 mutants (Fig. 5f and Extended Data Fig. 7f). Of note, the single M199R, L209Y or E212K mutants only resulted in a twofold reduction of binding (Extended Data Fig. 7g,h). We next analyzed the interaction of Roquin-1 and Regnase-1 in RNA electrophoretic mobility shift assays (EMSAs). We used the amino terminus of Roquin-1 (aa2-440), which is sufficient for RNA binding, and the RNase-dead version Regnase-1 aa1-452;D141N to avoid RNA degradation 4, and asked whether these proteins can form a ternary complex with the 3′-UTR stem loop of the Zc3h12a mRNA (nt194–212) (ref. 30) (Fig. 5g). Indeed, the stem loop of the Zc3h12a mRNA was specifically bound by Roquin-1 aa2-440, and increasing Regnase-1 concentrations in these binding reactions decreased the Roquin-specific band and induced a supershift (Fig. 5g). Collectively, these data establish Roquin-dependent recognition of the mRNA stem loop and additional interactions of the RNA-bound Roquin-1 protein with Regnase-1.

Roquin-1 interaction with Regnase-1 prevents autoimmunity

We then addressed the functional consequences of interfering with the Roquin-1–Regnase-1 interaction in vivo. We introduced into the mouse germline two different Rc3h1 mutations, encoding Roquin-1 L209Y or E212K, which exhibit weaker or stronger inhibition of the interaction and of cooperative regulation with Regnase-1.
Homozygous mice expressing Rc3h1 mutations encoding E212K, L209Y or M199R showed increased frequencies of activated CD44+ CD4+ T cells (Fig. 6a,b), increased T cell proliferation (Fig. 6c,d), a TH1 cell bias with increased IFN-γ production by ex vivo-stimulated CD4+ T cells (Fig. 6e,f and refs. 22,36) and an accumulation of effector memory CD8+ T cells compared to wild-type mice, which was strongest for the E212K mutant (Fig. 6g,h). E212K mutant mice showed compromised viability, so that only two homozygous animals could be analyzed to date. Together, these data suggest that mice expressing Roquin-1 L209Y/L209Y or Roquin-1 E212K/E212K phenocopy the sanroque phenotype.

Fig. 6: Inhibition of Roquin-1 interaction with Regnase-1 breaks T cell quiescence. a–h, CD4+ and CD8+ T cells from mice with L209Y and M199R mutations (10–12 weeks old) or from mice with E212K mutations (8 weeks old) in Roquin-1 were characterized. CD4+ effector/memory populations (a, b), CD4+ T cell proliferation (c, d), the ability of CD4+ T cells to produce IFN-γ and IL-17 after PMA/ionomycin stimulation ex vivo (e, f) and CD8+ effector/memory populations (g, h) were compared. i–k, Frequencies of CD4+ effector/memory T cell populations (i), TFH cells (j) and GC B cells (k) from spleens of 9–14-week-old WT, Rc3h1 M199R/fl;Cd4-Cre, Rc3h1 M199R/fl;Vav-Cre and Rc3h1 M199R/M199R (sanroque) mice were determined by flow cytometry. Contour plots are representative of n = 4 biological replicates analyzed in at least two independent experiments (a, c, e, g) or n = 2 biological replicates in two independent experiments (b, d, e, h). Data are presented as mean ± s.e.m. of WT, Rc3h1 M199R/M199R, n = 6; Rc3h1 M199R/fl, Cd4-Cre, Rc3h1 M199R/fl, Vav-Cre, n = 5 (i) or WT, Rc3h1 M199R/M199R, n = 8; Rc3h1 M199R/fl, Cd4-Cre, n = 6; Rc3h1 M199R/fl, Vav-Cre, n = 7 (j, k) analyzed mice in at least three independent experiments.
Statistical significance was calculated by one-way ANOVA with Bonferroni post hoc test (j, k).

We then addressed whether the sanroque phenotype 14 develops only due to altered T cell functions. As heterozygosity of the sanroque allele has no phenotype in young mice 37, we combined one Rc3h1 san and one Rc3h1 fl allele with Cd4-Cre or Vav-Cre. This allowed us to compare phenotypes of the sanroque allele originating from T lymphocytes versus hematopoietic cells (Fig. 6i–k and Extended Data Fig. 8a–c). Vav-Cre-mediated deletion of the floxed Rc3h1 allele was much more potent at inducing T cell activation (Fig. 6i and Extended Data Fig. 8a) and GC B cell accumulation than Cd4-Cre (Fig. 6k and Extended Data Fig. 8c), whereas TFH cell differentiation was similarly increased for both Cre lines (Fig. 6j and Extended Data Fig. 8b). This result indicated T cell-extrinsic contributions of the sanroque allele to the observed activation of T cells and accumulation of GC B cells. Similar to the sanroque allele, heterozygosity of alleles encoding L209Y/+ or E212K/+ did not induce obvious phenotypes in young mice (Fig. 7). To formally test whether L209Y and E212K induce the same functional impairment in Roquin-1 as the M199R-encoding allele, we generated mice carrying two different heterozygous mutations and compared their phenotypes to mice with homozygous M199R-encoding or wild-type alleles. CD4+ T cells from mice carrying mutations encoding Roquin-1 M199R/L209Y and Roquin-1 M199R/E212K showed a pronounced increase in the frequencies of effector memory (Fig. 7a,b) and TFH cells (Fig. 7c,d) compared to wild-type or heterozygous mutant mice with one wild-type allele (Fig. 7a–d). The heterozygous combination with one sanroque allele also caused spontaneous formation of germinal centers in the majority of B cell follicles of the spleen (Fig. 7e) and induced GC B cells in the absence of immunization (Fig. 7f,g).
Analyzing autoantibodies in the sera of mice at 8–12 weeks of age (Fig. 7h), we found that homozygous sanroque mice expressing Roquin-1 M199R/M199R, different from heterozygous Roquin-1 M199R/+ mice, showed a wide range of elevated titers of ANAs, which were partially matched by mice expressing the compound Roquin-1 M199R/E212K as well as Roquin-1 L209Y/L209Y and Roquin-1 E212K/E212K. In fact, mice expressing Roquin-1 M199R/E212K showed significantly increased titers of ANAs when compared to mice expressing Roquin-1 M199R/+. Collectively, our data demonstrate that the Roquin-1 L209Y and E212K mutants induce phenotypes similar to sanroque mice. Notably, the heterozygous combination of E212K- and M199R-encoding alleles revealed compound effects and autoimmunity equivalent to homozygous sanroque mice expressing the Roquin-1 M199R protein.

Fig. 7: Combined heterozygosity of Roquin-1 E212K/M199R shows a synthetic phenotype. a–d, f–h, T and B cells from 10–12-week-old mice with the M199R mutation in Roquin-1 on one allele and L209Y, E212K or M199R mutations on the other allele were characterized. Frequencies of CD4+ T cell populations (a, b), TFH cells (c, d), immunofluorescence of GL7 (blue) and IgD (yellow) in spleen sections (e), frequencies of GC B cells (f, g) and levels of ANAs in sera determined by ELISA (h) are shown. Contour plots or immunofluorescence microscopy images are representative of at least three individual mice analyzed in at least two independent experiments (a, c, e, f). WT, n = 26; M199R/M199R, n = 6; M199R/L209Y, n = 3; M199R/E212K, n = 3; L209Y/L209Y, n = 14; E212K/E212K, n = 2; M199R/+, n = 3; L209Y/+, n = 2; E212K/+, n = 1 (b, d, g, h). Data are shown as mean ± s.e.m. Statistical significance was determined using one-tailed Student's t-tests. Representative images of n = 3 analyzed mice per genotype (e).
Roquin-1 interaction with Regnase-1 inhibits antitumor immunity

Based on our previous findings demonstrating enhanced cytotoxic activities of Regnase-1- or Roquin-1/2-deficient CD8+ T cells, we asked whether the mixed heterozygous Roquin-1 M199R/E212K or sanroque mouse mutants that showed elevated autoantibody titers would also exhibit enhanced antitumor responses. First, we established that OT-I CD8+ T cells from heterozygous sanroque mice (Roquin-1 M199R/+) were not effective in conferring protection from tumor growth in the B16-OVA model, as they were comparable to CD8+ T cells from OT-I wild-type mice (Fig. 8a). In the OT-I context, we observed only a moderately increased effector memory phenotype in T cells expressing Roquin-1 M199R/E212K or Roquin-1 M199R/M199R proteins (Fig. 8b). Nevertheless, mice receiving OT-I T cells with either of the two mutations were largely protected from tumor growth as compared to control-treated tumor-bearing mice (Fig. 8c). Similar to previous observations on Regnase-1 inactivation 18, the frequency of transferred OT-I TCR-transgenic relative to endogenous CD8+ T cells increased more in the tumor than in the spleen for T cells with mutations encoding Roquin-1 M199R/E212K or Roquin-1 M199R/M199R (Fig. 8d), and these mutant OT-I T cells showed similar upregulation of CD44 but moderately increased KLRG1 expression compared to wild-type OT-I T cells (Fig. 8e). Of note, analyzing PD-1 and TOX as markers of exhaustion 38 revealed decreased expression on T cells of mice expressing Roquin-1 M199R/E212K or Roquin-1 M199R/M199R (Fig. 8f–i). Moreover, CD101, which marks terminally exhausted CD8+ T cells in chronic infections 26, was strongly expressed on wild-type OT-I T cells in the tumor, but almost absent from OT-I T cells expressing Roquin-1 M199R/E212K or Roquin-1 M199R/M199R proteins.
Together, these data show that inhibition of the Roquin-1 interaction with Regnase-1 promotes the effector function of tumor-specific T cells by increasing their abundance and attenuating their functional inactivation in the tumor.

Fig. 8: Disrupting Roquin-1 interaction with Regnase-1 improves effector function of tumor-specific T cells. a, c, Analysis of tumor growth of B16-OVA after transfer of OT-I T cells as described in Fig. 3i and with the indicated genotypes (n = 5 individual mice in one experiment, respectively). b, Flow cytometry analysis of the activation markers CD44 and CD62L on OT-I T cells before transfer. d, Frequencies of OT-I T cells relative to all CD8+ cells are displayed for tumor and spleen at day 21 after tumor cell transfer. e–i, Flow cytometry analysis of KLRG-1, CD44, CD101, PD-1 and TOX on OT-I T cells in the tumor on day 21 after tumor cell transfer. Data are representative of WT, n = 4; M199R/M199R, n = 5; M199R/E212K, n = 5 individual mice analyzed in one experiment (d); of WT, n = 3; M199R/M199R, n = 3; M199R/E212K, n = 2 or 4 individual mice analyzed in one experiment (e); of WT, n = 7; M199R/M199R, n = 7; M199R/E212K, n = 2 individual mice in three independent experiments (g); and of WT, n = 5 and M199R/M199R, n = 4 individual mice in two independent experiments (h). Data are presented as mean ± s.e.m. Statistical significance was calculated by one-way ANOVA with Bonferroni post hoc test (d, e, g) or unpaired one-tailed Student's t-test (i).

Discussion

We describe the direct physical interaction of Roquin-1 and Regnase-1 proteins on RNA. Our findings reveal cooperative regulation of Icos and Zc3h12a (Regnase-1) mRNAs within the shared target set 7,9,39,40,41 and explain overlapping phenotypes of Roquin-1/2 and Regnase-1 mutant mice 3,5,14.
Conditional inactivation of Roquin-1/2 or Regnase-1 in T cells, as well as germline mutations introducing M199R (sanroque), L209Y or E212K substitutions in the ROQ domain of Roquin-1, which similarly interfere with Roquin-1 binding to Regnase-1, caused autoimmunity. This role in peripheral tolerance becomes evident from the spontaneous activation of T cells and accumulation of TFH cells and GC B cells in all mouse lines, as well as autoantibody formation in some mouse lines. Analyzing similar changes induced through acute deletion of RBP-encoding alleles in T cells, we find that the Roquin-1/2 and Regnase-1 proteins are continuously required in naive T cells to maintain quiescence. These proteins silence cell-intrinsic programs of activation and proliferation associated with metabolic reprogramming to increased glycolysis and enhanced oxidative phosphorylation. Although both Roquin-1/2 and Regnase-1 proteins have already been found to be negative regulators of the mTOR pathway, protein biosynthesis and purine metabolism 42,43, it is currently unclear which target(s) drive the observed metabolic changes and trigger the spontaneous activation of T cells. Following activation, CD4+ T cells committed to the TFH cell subset and CD8+ T cells acquired polyfunctionality and enhanced cytotoxic activity. In our phenotypic comparisons, the contributions of Roquin-1/2 and Regnase-1 were often comparable, and combined inactivation typically showed increased effects. The related phenotypes indicate that these RBPs cooperate in the same pathways. Cooperative post-transcriptional regulation can be explained by binding in a complex to the same mRNA, as established here; by independent binding of the different RBPs to the same mRNA 44; by independent binding to different molecules of the same mRNA species, as suggested previously 7,13,22; or by independent binding to different mRNA species that then cooperate in the same pathway.
Understanding how cooperation is encoded in the transcripts of Icos and Zc3h12a but not Tnfrsf4, thereby allowing the formation of differential messenger ribonucleoproteins, will require extensive structural, biochemical and functional analyses. Of note, our data also revealed selective contributions, especially for CD8+ T cells. Regnase-1 deficiency was associated with increased persistence of CD4+ and CD8+ T cells 5,18 and enabled CD8+ T cells to produce copious amounts of IL-2. By contrast, Roquin-1/2 deficiency induced KLRG1 expression in CD8+ T cells, caused downregulation of TCF-1 and strongly increased in vitro cytotoxicity. Nevertheless, inactivation of either Roquin-1/2 or Regnase-1 improved antitumor responses, and a comparable improvement resulted from interfering with the Roquin-1–Regnase-1 interaction. Here, we present the interaction of Roquin-1 with Regnase-1 as a molecular mechanism underlying the prevention of autoimmunity and propose that this interaction can become a promising target for improving therapeutic approaches with adoptively transferred antigen-specific T cells.

Methods

Mice

All mice used in this study were on a C57BL/6 background. All animals were housed in a specific-pathogen-free barrier facility under a 12 h/12 h dark/light regime at 20–24 °C and 45–65% humidity in accordance with the Helmholtz Zentrum München and the Ludwig-Maximilians-Universität München institutional, state and federal guidelines. All experimental procedures involving male or female mice were performed in accordance with regulations and were approved by the local government (Regierung von Oberbayern reference nos. 55.2-2532-Vet_02-19-122, 55.2-2532.Vet_02-17-159, 55.2-2532.Vet_02-18-10 and 55.2-2532.Vet_02-19-68). Rc3h1/2 fl/fl mice carry floxed alleles of the Roquin-1-encoding gene Rc3h1 (ref. 21) and the Roquin-2-encoding gene Rc3h2 3. Zc3h12a fl/fl mice carry floxed alleles of the Regnase-1-encoding gene Zc3h12a 20.
Transgenic Rc3h1/2 fl/fl mice were crossed to Zc3h12a fl/fl mice to reach the final genotype Rc3h1/2 fl/fl; Zc3h12a fl/fl, and these were crossed with either Cd4-Cre 45, Cre-ERt2 (ref. 23) or Cd4-Cre-ERt2 (ref. 24) transgenic mice. For tumor experiments, an OVA-specific transgenic TCR was introduced by crossing mice to the OT-I line 46. Rc3h1/2 fl/fl; Cd4-Cre-ERt2; rtTA-M2 or Zc3h12a fl/fl; Cd4-Cre-ERt2; rtTA-M2 mice were generated by crossing mice carrying the respective loxP sites and Cd4-Cre-ERt2 with Gt(ROSA)26Sortm1(rtTA*M2)Jae mice 47. The Rc3h1 M199R (sanroque) mice (EM:02168) were obtained from the European Mouse Mutant Archive consortium, and mice expressing CD45.1 (Ptprc a Pepc b/BoyJ), Vav-iCre mice (Jax no. 00861) and Gt(ROSA)26Sortm1(rtTA*M2)Jae mice were obtained from the Jackson Laboratory. Data collection and analysis were not performed blind to the conditions of the experiments. Experimental groups were assigned according to genotype.

Generation of Rc3h1 mutant mouse lines via CRISPR–Cas9-based gene editing

The Rc3h1_E212K mouse line was generated by Polygene Transgenetics via CRISPR–Cas9 gene editing of an ES cell line and blastocyst injection; the Rc3h1_L209Y mouse line was generated via CRISPR–Cas9-based gene editing by electroporation of one-cell embryos. The specific guide RNA (Rc3h1_L209Y_gRNA: 5ʹ-CAATGCAGAACCATCTTCTA-3ʹ) was used in the form of in vitro-transcribed single guide RNA (EnGen sgRNA Synthesis kit, NEB, E3322). Before electroporation, the specific sgRNA (200 ng µl−1) and single-strand oligonucleotide (ssODN_Rc3h1_L209Y: 5ʹ-TTGTACCATTTTTCCTAGCGATGCAGGAGGAAGCTCTGAAGCTGGTCTTGTATGCTTTAGAAGATGGTTCTGCATTGTCTCGGAAAGTGTTGGTTCTCTTCGTGGTGCAAAGACTGGAGC-3ʹ; 300 ng µl−1) were diluted in Opti-MEM buffer (Thermo Fisher Scientific) together with recombinant Cas9 protein (200 ng µl−1, IDT) and incubated for 10 min at 20 °C and 10 min at 37 °C to form the active ribonucleoprotein complex.
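A routine sanity check for such RNP designs is that the protospacer maps within the donor oligo (on either strand), so that the repair template spans the Cas9 cut site. This sketch is not part of the authors' protocol; it simply runs that check on the Rc3h1_L209Y sequences given above:

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (uppercase ACGT)."""
    return seq.translate(COMP)[::-1]

def guide_in_donor(guide: str, donor: str) -> bool:
    """True if the protospacer occurs in the donor on either strand."""
    return guide in donor or revcomp(guide) in donor

GRNA = "CAATGCAGAACCATCTTCTA"  # Rc3h1_L209Y_gRNA
SSODN = ("TTGTACCATTTTTCCTAGCGATGCAGGAGGAAGCTCTGAAGCTGGTCTTGTATGCT"
         "TTAGAAGATGGTTCTGCATTGTCTCGGAAAGTGTTGGTTCTCTTCGTGGTGCAAAG"
         "ACTGGAGC")
print(guide_in_donor(GRNA, SSODN))  # → True (match on the opposite strand)
```

Here the guide matches the ssODN only via its reverse complement, i.e. the donor is written for the strand opposite to the protospacer.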
One-cell embryos were obtained by mating C57BL/6N males (Charles River) with C57BL/6N females super-ovulated with 5 U pregnant mare's serum gonadotropin and 5 U human chorionic gonadotropin, and were electroporated using a NEPA21 electroporator and a CUY501P1-1.5 electrode (Nepa Gene Co). Zygotes were transferred into pseudo-pregnant CD1 female mice to obtain live pups. Gene-editing events were analyzed on genomic DNA isolated from ear biopsies of founder mice and F1 progeny using the Wizard Genomic DNA Purification kit (Promega, A1120) following the manufacturer's instructions.

Isolation, in vitro cultivation and transduction of primary CD4+ T cells

Isolation, in vitro deletion of floxed alleles, in vitro cultivation and transduction of primary CD4+ T cells were performed as described 44. In reconstitution experiments, to induce expression of pRetro-Xtight-GFP constructs in rtTA-expressing T cells, transduced cells were cultured in the presence of doxycycline (1 µg ml−1) for 16 h before flow cytometry analysis or for 6 h before sorting of GFP+ cells for quantitative PCR with reverse transcription (RT–qPCR).

In vivo deletion of floxed alleles

Male and female Cd4-Cre-ERt2 mice (age 8–16 weeks) with floxed alleles encoding Roquin and Regnase-1 were fed by oral gavage with 5 mg tamoxifen (Sigma) in corn oil per dose on three consecutive days, with two doses of tamoxifen on the last day (20 mg total tamoxifen dose per mouse). Mice were killed 3 d after the last gavage and total CD4+ T cells were isolated using the EasySep Mouse T Cell Isolation kit (STEMCELL) according to the manufacturer's instructions.

Generation of mixed-bone-marrow chimeric mice

Bone marrow cells were isolated from femurs and tibias, frozen in FCS containing 10% dimethyl sulfoxide and stored at −80 °C until injected into mice. Male and female CD45.1/2 heterozygous recipient mice (age 8–11 weeks) were lethally irradiated with 2 × 5.5 Gy using an XStrahl CIX2 X-ray device.
Then, 4 × 10 6 bone marrow cells were injected intravenously (i.v.) into CD45.1/2 heterozygous recipient mice. Mice were treated for 2 weeks with antibiotics in the drinking water (0.04% Baytril) and, 9 weeks after irradiation, reconstituted cells were analyzed by flow cytometry. Adoptive T cell transfer Total CD3 + T cells, naive CD4 + T cells or CD8 + T cells were isolated from donor mice using the EasySep Mouse CD3 + Cell Isolation kit, Mouse Naive CD4 + T Cell Isolation kit or Mouse CD8 + Cell Isolation kit (STEMCELL), respectively, according to manufacturers’ instructions. Then, 1.5 × 10 6 cells were injected i.v. into 10-week-old male and female CD45.1 + WT recipient mice. On days 1 and 2 after injection, all recipient mice were fed 5 mg tamoxifen by oral gavage twice per day (20 mg total tamoxifen per mouse). Mice were killed on day 8 or day 49 after receiving the first tamoxifen dose. Ex vivo T cell stimulation Total splenocytes were stimulated with 20 nM PMA and 1 µM ionomycin for a total of 4 h. After 1 h of stimulation, 10 µg ml −1 brefeldin A was added. The reaction was stopped by washing the cells with cold PBS twice before proceeding with antibody staining for flow cytometry. 51 Cr-release assay Cell-mediated cytotoxicity was determined by a redirected lysis assay. P815 mastocytoma target cells were labeled with 50 μCi 51 Cr for 1 h at 37 °C. After washing, 2,000 target cells per 96-well were incubated with effector cells (CD8 + T cells isolated from lymph nodes and spleen) at different effector:target (E:T) ratios in the presence of anti-CD3 (clone (cl.) 145-2C11H, 1 μg ml −1 , inhouse production) for 4 h at 37 °C. Subsequently, the amount of radioactivity in the supernatant was measured using a scintillation counter (TopCount NXT). Melanoma tumor model Male and female C57BL/6 wild-type mice were subcutaneously injected with 2 × 10 5 B16-OVA tumor cells. After 3 d, congenically marked OT-I T cells (1 × 10 6 i.v.)
were adoptively transferred directly after isolation. Tumors were measured manually with a caliper three times per week and tumor size was calculated according to the formula (length × width 2 ) / 2. Mice were humanely killed if the maximal permitted tumor size of 1,400 mm 3 was reached. Otherwise spleens and tumors were collected not later than 21 d after tumor engraftment, cut into small pieces and passed through a 100-μm diameter filter. Flow cytometry and data analysis with FlowJo Single-cell suspensions were stained with fixable blue viability dye (Thermo Fisher Scientific) for 20 min at 4 °C. For the detection of surface proteins cells were stained with the appropriate antibodies in FACS buffer (2% FCS, 1 mM EDTA in PBS) for 20 min at 4 °C. For intracellular staining of cytokines, Roquin, Regnase-1 or CTLA-4 cells were fixed with 2% formaldehyde at 20 °C for 15 min, washed with saponin permeabilization buffer (0.5% saponin and 1% BSA in PBS) and stained with the appropriate antibodies in saponin buffer for 40 min at 4 °C. For commercially available intracellular antibodies, cells were fixed with Foxp3 Fixation/Perm buffer (eBioscience) according to the manufacturer’s protocol for 30 min at 4 °C, permeabilized with Foxp3 permeabilization buffer (eBioscience) and stained with antibodies diluted in Foxp3 permeabilization buffer for 40 min at 4 °C. Cell populations were acquired on BD LSR Fortessa, BD FACSCanto II or Cytoflex (Beckmann Coulter) flow cytometry devices or sorted using BD FACS Aria III. Data were processed using FlowJo software (v.10.6.0, BD Bioscience). An exemplified gating strategy is shown in Extended Data Fig. 9 . The following antibodies were used: anti-CD4 (cl. GK1.5, 1:400 dilution), anti-CD8a (cl. 53-6.7, 1:400 dilution), anti-CD38 (cl. 90, 1:200 dilution), anti-CD44 (cl. IM7, 1:200 dilution), anti-CD62L (cl. MEL-14, 1:200 dilution), anti-CD45.1 (cl. A20, 1:200 dilution), anti-CD45.2 (cl. 104, 1:200 dilution), anti-CD45R (B220; cl. 
RA3-6B2, 1:200 dilution), anti-CD101 (cl. Moushi101, 1:200 dilution), anti-PD-1 (cl. J43, 1:200 dilution), anti-GL7 (cl. GL7, 1:200 dilution), anti-granzyme B (cl. NGZB, 1:150 dilution), anti-IFN-γ (cl. XMG1.2, 1:200 dilution), anti-ICOS (cl. C398.4A, 1:200 dilution), anti-IL-2 (cl. JES6-5H4, 1:100 dilution), anti-KLRG1 (cl. 2F1, 1:100 dilution), anti-Ox40 (cl. OX-86, 1:200 dilution), anti-CTLA-4 (cl. UC10-4B9, 1:200 dilution), anti-Foxp3 (cl. FJK-16S, 1:100 dilution), anti-Ki67 (cl. SolA15, 1:200 dilution), anti-Tim3 (cl. RMT3-23, 1:200 dilution) and anti-TNF-α (cl. MP6-XT22, 1:100 dilution) all from eBioscience; anti-CD95 (cl. JO2, 1:200 dilution), anti-BATF (cl. S39-1060, 1:40 dilution) and anti-Bcl6 (K112-91, 1:50 dilution) all from BD Bioscience; anti-IL-17A (cl. TC11-18H10.1, 1:100 dilution), anti-CXCR5 (cl. L138D7, 1:50 dilution), anti-CD19 (cl. 6D5, 1:300 dilution), anti-IgD (cl. 11-26c.2a, 1:200 dilution), goat anti-rat antibody (cl. Poly4054, 1:200 dilution) all from BioLegend; and goat anti-rabbit antibody (Invitrogen, 1:200 dilution), anti-CD138 (cl. 281-2, BD Pharmingen, 1:200 dilution), anti-Roquin-1/2 (cl. 3F12, inhouse production, 1:10 dilution), anti-Regnase-1 (cl. 15D11, inhouse production, 1:10 dilution), anti-TCF-1 (cl. S33-966, BD Bioscience, 1:80 dilution), anti-TOX (cl. REA473, Miltenyi Biotech, 1:50 dilution) and anti-Rck (Bethyl). AMNIS image stream measurements Cells were stained as described above and measured using the AMNIS image stream (Millipore). Immunofluorescence microscopy Spleens were frozen in OCT compound (Tissue Tek), cryosections (7 µm) were prepared and fixed in acetone. Slides were stained with anti-IgD-PE (cl. 11-26c.2a, BioLegend) and anti-GL7-Alexa647 (cl. GL7, BioLegend). Images were acquired on an Olympus BX41 fluorescence microscope and processed with Fiji (v.1.0). 
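Tumor burden in the melanoma model above was computed from caliper measurements with the formula (length × width²) / 2 and a 1,400 mm³ humane endpoint; that arithmetic can be stated as a trivial helper:

```python
# Caliper-based tumor volume, (length x width^2) / 2, with the 1,400 mm^3
# humane endpoint used in the melanoma experiments above.

MAX_PERMITTED_MM3 = 1400.0

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation for subcutaneous tumors."""
    return length_mm * width_mm ** 2 / 2

def endpoint_reached(length_mm: float, width_mm: float) -> bool:
    return tumor_volume_mm3(length_mm, width_mm) >= MAX_PERMITTED_MM3

print(tumor_volume_mm3(10, 8))   # 10 x 8^2 / 2 = 320.0 mm^3
print(endpoint_reached(20, 12))  # 1,440 mm^3 -> True
```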
Seahorse measurements In vitro-deleted and activated CD4 + and CD8 + T cells were starved of IL-2 overnight and restimulated with anti-CD3e and anti-CD28 antibodies (0.5 µg ml −1 , BioLegend) for 6–7 h before the Seahorse measurement. Cells were washed with PBS, resuspended in Seahorse assay medium (XF RPMI, Agilent) containing 2 mM l -glutamine (Thermo Fisher Scientific), without or with 5 mM glucose (Sigma-Aldrich) for glycolytic or mitochondrial stress tests, respectively, and seeded on poly- l -lysine (50 µg ml −1 , Sigma-Aldrich) and goat anti-hamster (50 µg ml −1 ) pre-coated plates at a density of 2–2.2 × 10 5 cells per well. Cells were degassed using a Cytation1 reader (BioTek) at 37 °C for 1 h. ECARs and OCRs were measured on a 96-well XFe Extracellular Flux Analyzer (Agilent). For each treatment, three cycles of mixing and measurement (3 min each) were performed. For normalization, nuclei stained with 8 µM Hoechst (Thermo Fisher Scientific) were counted in a Cytation1 reader. To assess basal glycolytic activity during the glycolytic stress test, the ECAR response to an acute 5-mM glucose (Sigma-Aldrich) injection was measured, followed by a 1.5 µM oligomycin (Sigma-Aldrich) injection to inhibit mitochondrial respiration and induce maximal glycolytic capacity. Nonglycolytic acidification was assessed after injecting 50 mM 2-DG (Sigma-Aldrich). In the mitochondrial stress test, 1.5 µM oligomycin was injected to inhibit mitochondrial proton flux. Maximal mitochondrial respiration was induced by injection of 1 µM FCCP (Sigma-Aldrich) and terminated by 0.5 µM rotenone (Sigma-Aldrich) and antimycin A (Sigma-Aldrich) injections. Sucrose gradient fractionation Sucrose gradient fractionation was performed as described 48 , with slight modifications.
Briefly, 5 × 10 7 CD4 + T cells were washed in ice-cold PBS containing cycloheximide (0.1 mg ml −1 ), resuspended in extraction buffer (20 mM Tris-HCl (pH 7.4), 140 mM KCl, 0.5 mM dithiothreitol (DTT), 5 mM MgCl 2 , 0.5% Nonidet-P40, 0.1 mg ml −1 cycloheximide and protease inhibitor (PI)), incubated for 15 min on ice and centrifuged for 10 min at 12,000 g . Extracts were layered onto a 4.7-ml sucrose gradient (18–50% sucrose ( w / v ) in 20 mM Tris-HCl, pH 7.4, 140 mM KCl, 0.5 mM DTT, 5 mM MgCl 2 and 0.1 mg ml −1 cycloheximide) and centrifuged at 4 °C in a SW55Ti rotor (Beckman) at 35,000 r.p.m. for 90 min. Gradients were fractionated into ten 0.5-ml fractions and absorbance profiles at 254 nm were recorded using the Piston gradient fractionator (Biocomp). For further protein analysis, polysome gradient fractions were subjected to TCA precipitation. Culture of cell lines HeLa, HEK293T, P815 and B16-OVA cells were cultured in DMEM (Invitrogen) supplemented with 10% ( v / v ) FCS (Gibco), Pen-Strep (100 U ml −1 , each, Thermo Fisher Scientific), 10 mM HEPES, pH 7.2–7.5 (Invitrogen) at 37 °C and 10% CO 2 . HEK293T, HeLa, B16-F10 (no. CRL-6475) and P815 (no. TIB-64) cell lines were purchased from ATCC. The B16-F10 cell line was retrovirally transduced with MigR1-OVA-GFP (provided by D. Zehn, TU Munich). Calcium phosphate transfection for generation of retroviral particles HEK293T cells pre-treated with 25 μM chloroquine were co-transfected with 5 μg of the packaging vector pCL-Eco (Addgene; 12371) and 50 μg of the respective pRetro-Xtight plasmids using calcium phosphate as a transfection reagent. After 6 h, cells were washed and cultured in fresh medium for an additional 48 h. Viral particles were filtered (0.45 μm) and mixed with polybrene (10 µg ml −1 ) before T cell transduction.
Expression plasmids For murine T cell reconstitution experiments, the GFP-coding sequence (CDS), the corresponding murine complete CDS or corresponding shortened versions, as indicated, of Roquin-1, Regnase-1, Regnase-2, Regnase-3 or Regnase-4 C-terminally fused to GFP were inserted into the pRetro-Xtight expression plasmid (Clontech) under the control of a Tet-responsive promoter. For FLIM/FRET experiments, murine Roquin-1 CDS fused C-terminally to the mCherry CDS or the Rck CDS C-terminally fused to eBFP2 was inserted into the pdest12.2 backbone (Invitrogen). For NanoBret assays, the complete CDS of Regnase-1 or Roquin-1 aa1-510 was inserted downstream of the HaloTag CDS in the pFN21A HaloTag CMV Vector (Promega) or NanoLuc CDS in the pFN31K Nluc CMV-neo Flexi Vector (Promega), respectively. Mutations in the CDS of Regnase-1 or Roquin-1 were inserted via the QuikChange site-directed mutagenesis procedure (Stratagene). Primer sequences are available on request. Cell lysis, SDS–PAGE and immunoblotting Cell lysis and SDS–PAGE were performed as described 42 . For immunoblotting, proteins were transferred to a PVDF membrane and analyzed using primary antibodies followed by horseradish peroxidase (HRP)-conjugated secondary antibodies (Cell Signaling). For protein detection, Amersham ECL Prime Western Blotting Detection Reagent and X-ray films were used. The following primary antibodies were used: Roquin-1/2 (cl. 3F12, monoclonal, inhouse production), Regnase-1 (cl. 15D11, inhouse production), Roquin-1 (A300-515A, polyclonal, Bethyl), GAPDH (cl. 6C5, Merck Millipore) and Rpl7a (Abcam, ab70753). RNA isolation and RT–qPCR RNA isolation was performed by column-based RNA isolation utilizing the NucleoSpin RNA isolation kit (Macherey-Nagel) according to the manufacturer’s protocol. RNA was transcribed into complementary DNA using the Quantitect RT kit (Roche) according to the manufacturer’s protocol.
To quantify gene expression, the UPL Probe Library System by Roche and the Roche Light Cycler 480 were utilized (Supplementary Table 2 ). Co-immunoprecipitation To analyze the interaction of Roquin and Regnase-1 proteins in T cells, co-immunoprecipitation was performed. T cells were lysed in lysis buffer (20 mM Tris-HCl (pH 7.5), 150 mM NaCl, 0.25% ( v / v ) Nonidet-P40, 1.5 mM MgCl 2 , 1 mM DTT supplemented with 1× cOmplete, EDTA-free Protease Inhibitor Cocktail (Roche), 1× Halt Phosphatase Inhibitor Cocktail (Thermo Fisher) and 0.2 U μl −1 RNase inhibitor (RNasin, Promega)) on ice for 15 min and cleared by centrifugation for 15 min at 12,000 g and 4 °C. Then, 10 µl Protein A Dynabeads (Invitrogen) were coupled under constant rotation to 10 μg Regnase-1 antibody (R&D Systems, no. 604421) in lysis buffer at 4 °C overnight, followed by 1 h at 20 °C. After washing the antibody-coupled beads with phosphate-citrate buffer (24.4 mM citric acid, 65 mM sodium hydrogen phosphate (pH 5), supplemented with 1 mM DTT and PI), washed beads were incubated with 15 mg protein lysate in 900 μl lysis buffer for 4 h at 4 °C while continuously rotating. Then, beads were washed three times with lysis buffer (+ DTT and PI) and resuspended in 40 μl 1× Laemmli buffer. The samples were boiled at 95 °C for 5 min and co-immunoprecipitation was analyzed by SDS–PAGE and immunoblotting with monoclonal antibodies recognizing Regnase-1 (cl. 15D11) and Roquin (cl. 3F12). NanoBret assay NanoBret assays were performed according to the protocol of the NanoBRET Nano-Glo Detection System (Promega). Between 4 and 6 h after seeding of 1 × 10 6 HEK293T cells into six-well plates, cells were transfected with 2 µg of HaloTag and 0.02 µg of NanoLuc expression plasmids using FuGENE reagent (Promega) according to the manufacturer’s instructions.
After 20 h, cells were trypsinized, resuspended in Opti‐MEM I Reduced Serum Medium (no phenol red, with 4% FCS, Thermo Fisher Scientific), re‐plated into 96‐well plates and incubated with HaloTag NanoBRET 618 Ligand (100 nM final concentration) or dimethylsulfoxide as a no ligand negative control for 22 h. After addition of NanoBRET Nano‐Glo Substrate the donor emission (460 nm) and acceptor emission (618 nm) was measured using the GloMax Discover System (Promega). The raw donor and acceptor values were calculated and milliBRET units (mBU) were determined as mBU = acceptor emission (618 nm) / donor emission (460 nm) × 1,000 and afterwards corrected for background signal as mBRET = mean mBU experimental sample – mean mBU no ligand control. FRET/FLIM experiments For confocal microscopy 1 d before analysis HeLa cells were transfected via calcium phosphate precipitation and cells were seeded 6 h before microscopic analysis on eight-well µ-Slides (glass bottom, Ibidi) in Leibovitz’s L-15 medium (no phenol red, Thermo Fisher Scientific). For staining of the ER the cells were washed with HBSS (calcium, magnesium and no phenol red, Thermo Fisher Scientific) and incubated for 25 min at 37 °C in prewarmed staining solution (300 µl HBSS containing 0.8 µM ER-Tracker Red dye, Thermo Fisher Scientific). Confocal and FLIM images were performed with a TCS SP8X FALCON confocal head (Leica Microsystems) mounted on an inverted microscope (DMi8; Leica Microsystems). For confocal imaging, a 405-nm diode and a white-light laser were used as excitations sources (488 nm for GFP and 594 nm for mCherry). Single photons were collected through a 93×/1.3 NA glycerin-immersion objective and detected on Hybrid Detectors (HyD) (Leica Microsystems) with a 414–468 nm, 500–550 nm and 610–722 nm spectral detection window for BFP, GFP and mCherry detection, respectively. 
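The milliBRET arithmetic defined in the NanoBret assay above is easy to make explicit. A minimal sketch (the emission readings below are hypothetical):

```python
# Background-corrected NanoBRET signal as defined above:
#   mBU   = acceptor emission (618 nm) / donor emission (460 nm) x 1,000
#   mBRET = mean mBU(sample) - mean mBU(no-ligand control)

def mbu(acceptor_618: float, donor_460: float) -> float:
    return acceptor_618 / donor_460 * 1000

def mbret(sample_wells, no_ligand_wells) -> float:
    """Wells are (acceptor, donor) raw emission pairs."""
    def mean_mbu(wells):
        return sum(mbu(a, d) for a, d in wells) / len(wells)
    return mean_mbu(sample_wells) - mean_mbu(no_ligand_wells)

sample = [(150.0, 10000.0), (160.0, 10500.0)]      # with HaloTag ligand
background = [(80.0, 10000.0), (90.0, 11000.0)]    # no-ligand control
print(round(mbret(sample, background), 2))
```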
The image size was set to 512 × 512 pixels and a 2.5-fold zoom factor was applied, giving a pixel size of 0.098 μm and an image size of 50 × 50 μm. For FLIM, the white-light laser delivered a 20-MHz repetition rate at 488 nm. Arrival time of single photons was measured with the included FALCON module and 60 frames were acquired at 1.17 Hz for each time-correlated single-photon counting recording, corresponding to a scanning speed of 600 Hz. FLIM image analyses were performed in the LAS X software. A threshold was applied to the lifetime images before analysis to eliminate background noise. Different regions of interest were fitted with a two-exponential decay model. The mean amplitude-weighted fluorescence lifetime of the area was extracted and reported. The FRET efficiency ( E FRET ) was calculated according to the following formula: $$E_{\mathrm{FRET}} = 1 - \tau_{\mathrm{DA}}/\tau_{\mathrm{D}}$$ where τ DA is the lifetime of the donor–acceptor sample and τ D is the lifetime of the donor alone. Lifetime images shown were produced using the phasor approach 49 . ELISA for detection of anti-nuclear antibodies Blood taken from hearts of male and female (age 6–12 weeks) mice was directly centrifuged at 10,000 g for 10 min at 4 °C to collect serum. ANA ELISAs were performed using the mouse ANA total IgG ELISA kit (Alpha Diagnostic) according to the manufacturer’s instructions. Optical density at 450 nm was measured on an ELISA reader (Versa Max Microplate reader, Molecular Devices) and concentrations were calculated by using standard serum as a reference. Prediction of interacting residues on the Roquin surface The structures of the ROQ domain bound to RNA (PDB 4QI2 ) and ROQ-HEPN domain (PDB 4TXA ) were superposed using PyMOL and a model of the ROQ-HEPN domain bound to the RNA stem loop was generated.
Based on this model, residues on the Roquin surface that are typically involved in protein–protein interactions but not covered by RNA were mutated on the basis of stereochemical and/or electrostatic interference. Expression and purification of GST–regnase-1 D141N Full-length GST–regnase-1 D141N was expressed from pGEX-6P-3 in Escherichia coli Rosetta 2 (DE3) pLysS. Cells were cultured at 37 °C in 2YT containing 50 µg ml −1 ampicillin, 35 µg ml −1 chloramphenicol and 30 µM ZnCl 2 . At an optical density (OD 600 ) of 0.8, temperature was reduced to 18 °C and expression was induced by adding isopropyl β-D-1-thiogalactopyranoside (IPTG) to a concentration of 0.7 mM. After 16 h of incubation, cells were collected (5,200 g , 4 °C, 30 min). Due to the fast degradation of the protein, the whole purification was conducted at 4 °C in one day. Cells were resuspended in lysis buffer (500 mM NaCl, 50 mM HEPES (pH 7.5), 2 mM MgCl 2 , 30 µM ZnCl 2 , 2 mM DTT, 1× cOmplete Protease Inhibitor Cocktail (Roche)) and sonicated on ice. After clarification of the lysate (30,000 g , 4 °C, 30 min), supernatant was applied on a pre-equilibrated GSTrap column (GE Healthcare). After washing steps with high-salt buffer (1 M NaCl, 50 mM HEPES (pH 7.5), 2 mM MgCl 2 , 30 µM ZnCl 2 , 2 mM DTT) and re-equilibration in lysis buffer, bound proteins were eluted with lysis buffer containing 30 mM reduced glutathione. Eluted fractions were pooled and immediately loaded onto a heparin column (GE Healthcare) equilibrated in 100 mM NaCl, 20 mM HEPES (pH 7.5), 2 mM MgCl 2 , 30 µM ZnCl 2 and 2 mM DTT and eluted using a linear gradient to 60% high-salt buffer in ten column volumes. Pooled fractions were further purified using a Superdex200 column (GE Healthcare) in gel filtration buffer (150 mM NaCl, 20 mM HEPES (pH 7.5), 2 mM MgCl 2 , 30 µM ZnCl 2 , 2 mM DTT). Protein-containing fractions were pooled and concentrated. 
Expression and purification of GST–regnase-1 aa1-452; D141N His 6 –GST–regnase-1 aa1-452;D141N with an additional C-terminal His 6 tag was expressed from pOPINJ in E. coli Rosetta (DE3) cells. Cells were grown in LB medium with 34 µg ml −1 chloramphenicol and 100 µg ml −1 ampicillin at 37 °C. Expression of the protein was induced at OD 600 0.6 by adding 0.5 mM IPTG and overnight growing conditions changed to 18 °C. Next, cells were collected (6,238 g , 15 min, 4 °C) and resuspended in lysis buffer (500 mM NaCl, 2 mM DTT, 2 mM MgCl 2 , 30 µM ZnCl 2 , PIs and 50 mM HEPES, pH 8) and sonicated on ice. The supernatant of the centrifugation (48,384 g , 30 min, 4 °C) was applied to GSTrap column. The column was washed with high-salt buffer (1 M NaCl and 50 mM HEPES, pH 8) and the protein eluted using elution buffer (500 mM NaCl, 30 mM glutathione, 2 mM DTT, 2 mM MgCl 2 , 30 µM ZnCl 2 , PIs and 50 mM HEPES, pH 8). After changing the buffer to a low-salt buffer (50 mM NaCl, 35 µM ZnCl 2 , 2 mM DTT, 2 mM MgCl 2 and HEPES, pH 8) the protein was loaded onto a heparin column. The bound protein was eluted with a gradient (20 column volumes 0−100%) using the same buffer including 1 M NaCl. Fractions were pooled, concentrated and further purified using a Superdex 75 10/300 GL column (Amersham Pharmacia Biosciences) in 150 mM NaCl, 35 µM ZnCl 2 , 2 mM DTT and HEPES, pH 7.5 buffer. Expression and purification of Roquin-1 aa2-440 , SUMO–roquin-1 aa2-440 and its mutants His 6 –SUMO–roquin-1 aa2-440 was expressed from pOPINS3C in E. coli BL21 Star (DE3). Roquin-1 aa2-440 and its mutants (Roquin-1 aa2-440;M199R , Roquin-1 aa2-440;L209Y , Roquin-1 aa2-440;E212K , Roquin-1 aa2-440;M199R/L209Y and Roquin-1 aa2-440;M199R/E212K ) were expressed as His 6 -tagged proteins from pETM11 or pOPINF in E. coli BL21 (DE3) or BL21 Star (DE3). Cells were grown at 37 °C in LB medium with 50 μg ml −1 kanamycin (pETM11) or with 100 µg ml −1 ampicillin (pOPINS3C or pOPINF). 
At an OD 600 of 0.4, cultures were induced by adding 0.5 mM IPTG and overnight growing conditions changed to 20 °C. Cells were collected (6,238 g , 15 min, 4 °C), resuspended in lysis buffer (300 mM NaCl, 15 mM imidazole, 1 mg ml −1 lysozyme, 2 mM DTT, PIs and 50 mM Tris, pH 8) and sonicated on ice. After centrifugation (48,384 g , 30 min, 4 °C), the supernatant was applied to a HisTrap column (GE Healthcare). Bound protein was eluted, concentrated and further purified using a Superdex 75 10/300 GL column (Amersham Pharmacia Biosciences) in 150 mM NaCl and 50 mM HEPES, pH 7.5 buffer. Electrophoretic mobility shift assay The RNA fragment of Regnase-1 3ʹ-UTR (nt194-212, IBA GmbH) was radioactively labeled using T4 polynucleotide kinase (Thermo Fisher Scientific) and [γ 32 P] ATP (Hartmann Analytic) at 37 °C for 30 min. The reaction was stopped at 75 °C for 10 min. Sepharose spin columns (NucAway; Invitrogen) were used to separate RNA from free nucleotides. Radioactively labeled RNA (6 nM), proteins (GST–regnase-1 aa1-452;D141N , Roquin-1 aa2-440 ) and tRNA competitor (30 μg ml −1 ) were incubated in HEPES/NaCl/MgCl 2 buffer (10 mM HEPES (pH 7.5), 150 mM NaCl and 2 mM MgCl 2 ) and 4% glycerol in a final volume of 20 μl for 30 min at 20 °C. Samples were resolved by native TBE–PAGE (4% polyacrylamide and 1× TBE buffer) or by gradient NativePAGE 4–16% Bis-Tris (Invitrogen) gels. Gels were analyzed using Fuji imaging plates exposed in the FLA-5100, after 10 min incubation in fixing solution (30% ( v / v ) methanol and 10% ( v / v ) acetic acid) and vacuum drying. Competition pulldown assays of GST–regnase-1 D141N and Roquin variants Competition pulldowns were performed using GSTrap beads (GE Healthcare) pre-equilibrated in binding buffer (150 mM NaCl, 30 mM HEPES (pH 7.5), 2 mM MgCl 2 , 30 µM ZnCl 2 and 2 mM DTT). Then, 0.5 nmol human full-length GST–regnase-1 D141N was mixed with 150 µl bead slurry and incubated for 10 min on ice. 
Subsequently, premixes of 1.2 nmol SUMO–roquin-1 aa2-440 wild-type and 1.2 nmol Roquin-1 aa2-440 mutant variants were added and incubated for 60 min at 4 °C. After four wash steps (950 µl binding buffer), bound proteins were eluted with 60 µl elution buffer (150 mM NaCl, 30 mM HEPES (pH 7.5), 2 mM MgCl 2 , 30 µM ZnCl 2 , 2 mM DTT and 30 mM reduced glutathione). Then, 5% input of individual proteins, wash fractions and elutions were analyzed on 12.5% SDS–PAGE, stained with Coomassie blue and imaged using a ChemiDoc XRS+ (BioRad). Band intensities of Roquin-1 aa2-440 mutant variants and SUMO–roquin-1 aa2-440 wild-type of three independent experiments were quantified and averaged using Image Lab software and their ratio was plotted. Surface plasmon resonance Binding between Roquin-1 aa2-440 and GST–regnase-1 aa1-452;D141N was analyzed using BIACORE 3000 instrument (Biacore). Roquin-1 aa2-440 was coupled to the CM5 sensor chip (Biacore) at a concentration of 35 μg ml −1 in 10 mM sodium-phosphate buffer (pH 5.7). GST–regnase-1 aa1-452;D141N was injected onto the sensor chip using the concentrations 0.032, 0.063, 0.125, 0.25, 0.5 and 1 μM at 30 μl min −1 flow rate in running buffer (150 mM NaCl, 35 µM ZnCl 2 , 0.05% Tween20, 2 mM DTT and 10 mM HEPES, pH 7.5) at 20 °C. Acquired binding curves were double-referenced against the signal in the buffer run and a ligand-free reference channel. The equilibrium dissociation constant ( K D ) was calculated from steady-state measurements using the BIAevaluation program (Biacore). Statistical analysis and experimental design Statistical analysis was performed with Prism 5.0b (GraphPad) or Origin. P values were calculated with Student’s t- test or one-way or two-way ANOVA, as indicated. Statistical significance was indicated. Error bars represent mean of all data points ± s.e.m. or mean ± s.d., as indicated. 
No statistical methods were used to predetermine sample sizes, but our sample sizes are similar to those reported in previous publications 3 , 6 , 18 , 19 . Data distribution was assumed to be normal, but this was not formally tested. Experiments did not involve randomization of animals/samples or conditions or blinding of investigators. No animals or data points were excluded from analyses. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All data are provided in the article and its supplementary files or are available from the corresponding author upon reasonable request. For analysis of the Roquin structure, the publicly available structural datasets of the Roquin-1 ROQ domain bound to RNA (PDB 4QI2 ) and the Roquin-1 ROQ-HEPN domain (PDB 4TXA ) were used. Source data are provided with this paper.
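The steady-state surface plasmon resonance analysis described in the methods reduces to fitting a 1:1 binding isotherm, R_eq = Rmax·C/(K_D + C), to the equilibrium responses. A rough sketch under that assumption (synthetic responses and a simple grid search; the study used the BIAevaluation software, and only the injected concentrations are taken from the text):

```python
# Steady-state SPR: fit a 1:1 binding isotherm R_eq = Rmax * C / (KD + C)
# by a coarse grid search. Concentrations (uM) follow the injection series
# given in the methods; the responses here are synthetic.

def r_eq(c_um, rmax, kd_um):
    return rmax * c_um / (kd_um + c_um)

def fit_kd(concs, responses):
    """Return (Rmax, KD) minimizing the sum of squared errors on a grid."""
    best = (float("inf"), None, None)
    for rmax in range(50, 201):                      # RU
        for kd in (k / 100 for k in range(1, 201)):  # 0.01-2.00 uM
            sse = sum((r_eq(c, rmax, kd) - r) ** 2
                      for c, r in zip(concs, responses))
            if sse < best[0]:
                best = (sse, rmax, kd)
    return best[1], best[2]

concs = [0.032, 0.063, 0.125, 0.25, 0.5, 1.0]      # uM, from the text
responses = [r_eq(c, 100.0, 0.25) for c in concs]  # synthetic, KD = 0.25 uM
rmax_hat, kd_hat = fit_kd(concs, responses)
print(rmax_hat, kd_hat)  # recovers the synthetic Rmax and KD
```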
Ludwig Maximilians University Munich (LMU) immunologists have discovered how mutations in Roquin-1 trigger autoimmunity, but can also improve the body's fight against cancer cells. With autoimmune diseases such as lupus erythematosus, severe inflammation occurs in different areas of the organism. The immune system mistakenly identifies the body's own structures as foreign and attacks them. Such disorders have various triggers, and only a handful of known mutations in individual genes lead to autoimmunity. These include the gene that codes for Roquin-1. The so-called sanroque mutation induces a lupus-like syndrome in mice. "Such mutations teach us how our body protects itself against autoaggressive reactions of the immune system," explains Professor Vigo Heissmeyer, who is a researcher at the Institute for Immunology at LMU and the Molecular Immune Regulation Research Unit at Helmholtz Zentrum München. By means of functional investigations and mouse models, he and his team have now shown how the exchange of a single amino acid—such as the sanroque mutation in Roquin-1—leads to stronger autoimmunity. "We think we've found a target structure that controls autoimmunity and which could even be suitable for enhancing anti-tumor responses," says Heissmeyer, outlining the key results of the experiments from his team. Roquin controls immunological processes Together with colleagues from the Helmholtz Zentrum München and LMU, he had previously elucidated molecular functions of Roquin-1. The protein plays a key role in the adaptive immune response by controlling the activation and differentiation of T cells via the regulation of gene expression. Interestingly, it had been suggested that the Regnase-1 protein works in the same way. "What we didn't understand before was why the exchange of an amino acid in the sanroque mutation of Roquin-1 leads to a very similar form of autoimmunity as the loss of the gene encoding Regnase-1," says Heissmeyer. 
The research group has now been able to demonstrate that Roquin-1 binds directly to the Regnase-1 protein so as to efficiently control the expression of certain genes. Surprisingly, the amino acids involved in this binding were discovered to be in close spatial proximity to the amino acid that is altered in the sanroque mutant, suggesting that they form part of an extended binding site. In the gene encoding Roquin-1 in mice, the researchers successfully used CRISPR-Cas technology to replace individual amino acids that are involved in the binding to Regnase-1 with other specific amino acids. During protein biosynthesis, this produced Roquin-1 proteins that interacted much more weakly with Regnase-1. These novel mutations led to autoimmunity in the rodents. "Our data shows that the physical interaction of Roquin-1 with Regnase-1 is of key importance when it comes to controlling the activity of immune cells," summarizes the LMU scientist. Enhancing immune responses as a therapeutic strategy Although the observed autoimmunity damages the organism and leads to illnesses, there could be benefits for cancer patients in an enhanced activation of immune cells that fight tumors. "Mechanisms in T cells that our immune system has developed to prevent autoimmunity are actually used by the tumor to silence T cells," explains Heissmeyer. Accordingly, mice with the Roquin-1 gene mutations described above produced T cells that attacked malignant cells with greater vigor after transfer into tumor-bearing mice. This makes Roquin-1 an interesting target structure for oncology. Future research projects could seek to develop an inhibitor that reduces interactions between Roquin-1 and Regnase-1 and thereby activates immune cells. "We expect that this will give a strong boost to the T cell response against tumors for a limited period of time," says Heissmeyer.
10.1038/s41590-021-01064-3
Biology
Climate change to push species over abrupt tipping points, finds study
Alex Pigot, Abrupt expansion of climate change risks for species globally, Nature Ecology & Evolution (2023). DOI: 10.1038/s41559-023-02070-4. www.nature.com/articles/s41559-023-02070-4 Journal information: Nature Ecology & Evolution
https://dx.doi.org/10.1038/s41559-023-02070-4
https://phys.org/news/2023-05-climate-species-abrupt.html
Abstract Climate change is already exposing species to dangerous temperatures driving widespread population and geographical contractions. However, little is known about how these risks of thermal exposure will expand across species’ existing geographical ranges over time as climate change continues. Here, using geographical data for approximately 36,000 marine and terrestrial species and climate projections to 2100, we show that the area of each species’ geographical range at risk of thermal exposure will expand abruptly. On average, more than 50% of the increase in exposure projected for a species will occur in a single decade. This abruptness is partly due to the rapid pace of future projected warming but also because the greater area available at the warm end of thermal gradients constrains species to disproportionately occupy sites close to their upper thermal limit. These geographical constraints on the structure of species ranges operate both on land and in the ocean and mean that, even in the absence of amplifying ecological feedbacks, thermally sensitive species may be inherently vulnerable to sudden warming-driven collapse. With higher levels of warming, the number of species passing these thermal thresholds, and at risk of abrupt and widespread thermal exposure, increases, doubling from less than 15% to more than 30% between 1.5 °C and 2.5 °C of global warming. These results indicate that climate threats to thousands of species are expected to expand abruptly in the coming decades, thereby highlighting the urgency of mitigation and adaptation actions. Main Species are increasingly being exposed to dangerous temperatures, driving mass die-offs, and population declines and contractions at the warm edges of their geographical range 1 , 2 , 3 , 4 , 5 , 6 , 7 . 
As global warming continues, the area over which species are adversely impacted by thermal exposure will expand, increasing the risks of local and global extinctions 8 , 9 and disrupting the functioning and stability of the ecosystems these species form and on which society depends 10 . Critical to understanding and managing these climate risks is how the spatial footprint of thermal exposure will expand across a species’ geographical range over time. Because climate change will unfold over decades to centuries, the expansion in the area over which a species is at risk of thermal exposure may also be protracted 11 . A gradual spread of thermal risks would provide more time for species to adapt via dispersal 12 or evolution 13 , and more opportunity to implement conservation interventions and adaptation policies once the adverse effects of thermal exposure are first detected. While the gradual spread of risk could pose a potential challenge for existing vulnerability assessments, which typically consider population and range declines over much shorter time horizons (for example, a single decade 14 , 15 ), a greater concern is the possibility that future climate risks to species will expand suddenly, impacting widespread areas across a species’ geographical range almost simultaneously 16 , 17 , 18 . An abrupt expansion in the area of a species’ geographical range at risk of thermal exposure could overwhelm the ecological and evolutionary processes that might otherwise provide resilience to species and ecosystems under more gradual environmental change 19 , 20 , and would limit the capacity for timely conservation actions 21 . Determining whether there are thresholds of warming beyond which risks of thermal exposure to species rapidly expand, and predicting where and when these thresholds will be crossed, is essential for improved early warning systems to assist conservation and adaptation planning, and for informing international policy to mitigate climate change. 
To understand the risks to species from abrupt thermal exposure, we used global climate models to project the cumulative area of individual species’ existing geographical ranges that will be exposed to potentially dangerous temperatures up to 2100 (at approximately 100-km grid cell resolution; Methods ). Our analysis encompasses geographical data on 35,863 species from both terrestrial ( n = 31,790) and near-surface-marine ( n = 4,073) environments, including: mammals, amphibians, reptiles, birds, corals, cephalopods, reef fish, seagrasses and zooplankton (Extended Data Table 1 ). While species will be adversely impacted by exposure to multiple abiotic and biotic variables, we focus our analysis on temperature, which provides a universal driver of species distributions across both marine 22 and terrestrial 23 realms; thus, it is a logical starting point for understanding the spatiotemporal dynamics of climate change risks to species. We do not consider processes of evolutionary adaptation, changes in phenology and behaviour or dispersal to new locations. While these processes will determine the resilience of species to climate change, in this study we focus on the first key step of understanding the spatial and temporal dynamics of thermal exposure that will ultimately drive these biological responses. The adverse impacts of thermal exposure (for example, declines of fitness or increased mortality) are probably driven by the increasing intensity and frequency of extreme temperatures rather than changes in long-term climate averages 24 , 25 . In this study, we define thermal exposure as the year after which the annual maximum monthly air or sea-surface temperature in a grid cell consistently (for at least 5 consecutive years) exceeds the most extreme monthly temperature experienced by a species across its geographical range over recent history (1850–2014), hereafter its ‘upper realized thermal limit’ 10 ( Methods ).
We focus on an intermediate greenhouse gas (GHG) shared socioeconomic pathway (SSP) emission scenario (SSP2-4.5), corresponding to approximately 2.5 °C global warming by the end of the century, relative to the pre-industrial period (1850–1900). This is approximately the level of warming expected if countries meet the 2030 targets in their nationally determined contributions at the time of COP26 (ref. 26 ). We also explore how the dynamics of thermal exposure vary under both lower (SSP1-2.6) and higher (SSP5-8.5) GHG emission scenarios and thus global warming levels. We quantify how gradually or abruptly the spatial extent of thermal exposure is projected to expand over time using a moving window analysis to calculate the maximum percentage of grid cell exposure events occurring in any decade for each species (Extended Data Fig. 1 ) 10 . We additionally calculate the magnitude of exposure, that is, the total proportion of the species’ geographical range exposed this century (Extended Data Fig. 1 ). Finally, we calculate the timing of exposure in two ways: (1) the year of onset of exposure; and (2) the median year of grid cell exposure, which, for species undergoing abrupt exposure, captures well the timing of these abrupt events (Extended Data Fig. 1 ). Together, the abruptness, magnitude and timing of exposure describe key independent dimensions of climate change risk for a species.
Results
Spatiotemporal dynamics of thermal exposure
Species exhibit three distinct spatial patterns in the projected expansion of thermal exposure, which are determined by the spatiotemporal dynamics of future warming and the distribution of a species’ geographical range across thermal gradients (Fig. 1 ). First, grid cells in a species’ geographical range projected to experience more rapid warming this century are exposed earlier than those where warming is projected to occur more gradually (Extended Data Fig. 2a ).
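As a concrete illustration of the moving window analysis described above, the sketch below computes the abruptness metric (the maximum percentage of grid cell exposure events falling in any single decade) from a set of per-cell exposure years. The function name and the example years are hypothetical, not taken from the study’s code.

```python
def abruptness(exposure_years, window=10):
    """Maximum percentage of grid-cell exposure events falling within any
    `window` consecutive years (a decade by default)."""
    events = sorted(exposure_years)
    # The optimal window can always be shifted to start at an event year.
    best = max(sum(1 for y in events if start <= y < start + window)
               for start in events)
    return 100.0 * best / len(events)

# Hypothetical species: exposure trickles in, then pulses in the 2060s.
years = [2031, 2044, 2060, 2061, 2061, 2062, 2063, 2064, 2065, 2080]
print(abruptness(years))  # -> 70.0 (7 of 10 cells exposed in one decade)
```

A species exposed at a perfectly constant rate would instead score close to `100 * window / century_length`, which is the baseline the null model comparison in the text guards against.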
Second, grid cells with a small warming tolerance, defined as the difference between the ‘current’ temperature (2005–2014 mean) of a grid cell and the species’ range-wide upper realized thermal limit, are exposed earlier than grid cells where the warming tolerance is larger (Extended Data Fig. 2b ). Third, projected thermal exposure will not occur gradually. Instead, over the coming decades, trends of increasing thermal exposure are characterized by periods of relative stability punctuated by sudden pulses, where large numbers of grid cells across a species’ geographical range are exposed in a narrow window of time, with these pulses occurring at different times for different species (Fig. 1 ).
Fig. 1: The spatiotemporal dynamics of thermal exposure across species’ geographical ranges. a – d , Contour maps showing the projected timing (that is, year) of thermal exposure of grid cells across four exemplar terrestrial ( a , b ) and marine ( c , d ) species for a single run of the Whole Atmosphere Community Climate Model (CESM2-WACCM) under an intermediate greenhouse gas emissions scenario (SSP2-4.5). For visualization, spatial patterns of exposure are smoothed across 100-km grid cells. The colours indicate the timing of thermal exposure binned into decadal windows, with grey indicating grid cells not exposed by the end of the century. Below each map, horizon profiles 10 show the cumulative percentage of grid cells exposed over time in each species’ range. The dashed line indicates the pattern expected under a constant rate of exposure. Species shown are Pristimantis malkini ( a ), Telescopus beetzi ( b ), Pectinia pygmaeus ( c ) and Abudefduf declivifrons ( d ).
An abrupt expansion in the area at risk of thermal exposure is a pervasive pattern across species’ geographical ranges. On average, 57% (±15% s.d.)
of the exposure projected for a species this century will occur in a single decade under SSP2-4.5, with similar levels of abruptness under both higher and lower GHG emission pathways (Fig. 2a ). Despite the contrasting physical environments in which species occur, the expansion of thermal exposure risks is projected to occur abruptly for both terrestrial (mean = 58% ± 16% s.d.) and marine species (mean = 51% ± 11% s.d.) across all studied organism groups, from reptiles to zooplankton, and regardless of whether species are widespread (more than a median range size of 34 grid cells; mean = 58% ± 15% s.d.) or geographically rare (fewer than 34 grid cells; mean = 56% ± 15% s.d.). Moreover, abrupt thermal exposure occurs regardless of whether a species’ geographical range is only partially (fewer than 25% grid cells; mean = 55% ± 13% s.d.) or widely exposed (75% or more grid cells; mean = 56% ± 15% s.d.) and whether exposure on average happens early (before 2050; mean = 66% ± 18% s.d.) or late (2050 or after; mean = 53% ± 13% s.d.) in the century (Extended Data Fig. 3 ). Some degree of synchronicity in the timing of thermal exposure among grid cells could arise by chance. However, for almost all species (88%), the spatial extent of thermal exposure expands more abruptly than expected if exposure events within a species’ geographical range occur independently over time (Fig. 3j and Methods ). Fig. 2: The abruptness, timing and magnitude of thermal exposure across the geographical ranges of species. a – c , The distribution of thermal exposure metrics is shown across n = 35,863 land and ocean species for three global warming scenarios. a , Abruptness is the maximum percentage of grid cell thermal exposure events occurring in any single decadal window during the twenty-first century. b , Timing is the onset (green) or median (brown) year of grid cell exposure across the geographical range of each species. 
c , Magnitude is the percentage of grid cells across a species’ geographical range exposed by the end of the century. For each metric, the median species scores across General Circulation and Earth System Models (hereafter GCMs) are shown for a low (SSP1-2.6), intermediate (SSP2-4.5) and high (SSP5-8.5) GHG emission scenario. To avoid biased estimates of abruptness, only species where at least ten grid cells are exposed this century are plotted ( n = 14,403) ( a ) ( Methods ).
Fig. 3: Partitioning the causes of abrupt thermal exposure. a – d , Computational experiments in which projected future climate warming trends for each grid cell within a species’ geographical range were artificially manipulated to identify the causes of abrupt thermal exposure. e – h , Horizon profiles show the corresponding cumulative percentage of grid cells exposed over time for each experiment. The data used in a – h are for illustration purposes only, showing hypothetical warming and exposure trends for a single hypothetical species. a , e , Empirical data as obtained from a single climate model under an intermediate GHG emission scenario SSP2-4.5. b – d , f – h , Future climate warming trends were manipulated to be smoother than projected ( b , f ), smoother and more gradual than projected ( c , g ), and smoother, more gradual and with grid cell warming tolerances (WTs) evenly distributed across the species’ realized thermal niche ( d , h ). In a , the upper realized thermal limit for a hypothetical species (dashed line) is indicated. The points in a – d show when, in the future, each grid cell will be thermally exposed. i , Density curves showing the distribution of projected abruptness (%) scores across real species (median across climate models) under an intermediate GHG emission scenario SSP2-4.5 (grey) and for each experiment. Abruptness is the maximum percentage of grid cell thermal exposure events occurring in any single decadal window during the twenty-first century.
Abruptness was only calculated for species and climate models where at least ten grid cells are exposed this century ( n = 14,403 species). j , The percentage of species in each experiment where abruptness exceeded that expected under a null model in which grid cell exposure events occur independently over time (5%, one-tailed).
The timing and magnitude of exposure vary substantially across species; while some species are projected to experience minimal thermal exposure by the end of the century, others experience an almost immediate onset of exposure that spreads across their entire geographical range (Fig. 2b,c and Extended Data Fig. 3 ). Under SSP2-4.5, 52% of species are projected to experience thermal exposure before 2050 (Fig. 2b ), with 34% of species exposed across at least 30% of their geographical range by the end of the century (Fig. 2c ). The time between the initial onset of thermal exposure for a species and the median year of exposure across its geographical range is on average 12 years (±12 years s.d.), indicating that once exposure commences, there is only a limited window of time before the area at risk expands abruptly (Fig. 2b ).
The drivers of abrupt thermal exposure
One possible explanation for the pervasive abruptness of thermal exposure is that the relatively coarse spatial grain size (100 km) of global climate models underestimates spatial variability in rates of warming and thus heterogeneity in the timing of future exposure across grid cells. However, this seems unlikely because we found similar levels of abruptness when repeating our analysis on a subset of species using regional climate models that generate projections at a finer spatial resolution (20 km) (Supplementary Fig. 1 ).
Another potential explanation is that abrupt thermal exposure is driven by the rapid pace of future climate change, both in terms of the long-term warming ‘press’ and short-term ‘pulses’ of extreme conditions 27 , relative to the range of temperatures species have occupied in recent history. To quantify the effect of rapid climate warming on future exposure dynamics, we performed a computational experiment in which we manipulated the future temperature time series (Fig. 3a–d ) and recalculated the abruptness of projected thermal exposure (Fig. 3e–h and Methods ). First, smoothing the future temperature time series to remove extreme year-to-year variability (Fig. 3b,f ) results in a consistent reduction in projected abruptness from on average 57% to 48% (±13% s.d.) of grid cell exposure events for a species occurring within a single decade (Fig. 3i ). Second, smoothing and slowing the long-term warming trend, so that the average warming across a species’ geographical range projected by 2100 is only reached in 2500 (a factor of 5 downgrading of the warming trend; Fig. 3c,g ), results in a further reduction in projected abruptness (mean abruptness = 22% ± 9% s.d.; Fig. 3i ). However, even after removing the short-term ‘pulse’ and reducing the long-term ‘press’ of future warming, the abruptness of thermal exposure for 51% of species still exceeds a null model expectation for abruptness in which grid cell exposure events occur independently over time (Fig. 3j ). Thus, neither the pulse nor the rapid press of future climate change is sufficient to explain abrupt expansions in the area of species’ existing geographical ranges projected to be at risk of thermal exposure over the coming decades. Instead, we found that the underlying driver of abrupt thermal exposure is the ubiquitous skew in the distribution of temperatures across a species’ geographical range (Fig. 4 and Methods ). 
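The smoothing and slowing manipulations can be mimicked on a synthetic grid-cell temperature series. The warming trend, noise level and thermal limit below are invented for illustration and are not the values used in the study; the point is that removing the ‘pulse’ and downgrading the ‘press’ delays or eliminates exposure for a single cell.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2015, 2101)
# Hypothetical grid-cell series: a 0.03 °C/yr warming trend ('press')
# plus year-to-year variability ('pulse'); all values are invented.
trend = 0.03 * (years - 2015)
series = 26.0 + trend + rng.normal(0.0, 0.3, years.size)

def smooth(x, k=11):
    """Centred running mean with edge padding (removes the 'pulse')."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

smoothed = smooth(series)        # experiment: smoother than projected
slowed = 26.0 + trend / 5.0      # experiment: warming 'press' downgraded 5x

def first_exposure(x, limit, run=5):
    """First year after which `run` consecutive years exceed `limit`."""
    over = np.asarray(x) > limit
    for i in range(len(over) - run + 1):
        if over[i:i + run].all():
            return int(years[i])
    return None

limit = 27.0  # hypothetical upper realized thermal limit
print(first_exposure(series, limit), first_exposure(smoothed, limit),
      first_exposure(slowed, limit))
```

With these numbers the smoothed series is still exposed (the trend alone clears the limit), while the slowed series never is this century, echoing how the experiments in Fig. 3 progressively reduce, but do not by themselves eliminate, abruptness.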
Within a species’ geographical range, most grid cells have relatively narrow warming tolerances, that is, they currently experience maximum monthly temperatures close to the species’ range-wide upper realized thermal limit. On average, 65% of a species’ geographical range lies in the hottest half of the realized thermal niche, with 27% of the geographical range concentrated within only 10% of the thermal niche. Similar levels of warm-skewness are observed across the geographical ranges of both terrestrial and marine species (Extended Data Fig. 4 ). This clustering and skew in grid cell warming tolerances means that even when the climate warms gradually, multiple grid cells across a species’ geographical range are projected to experience thermal exposure near synchronously. Artificially removing this effect by simulating a scenario in which grid cell warming tolerances are evenly spaced between the hottest and coldest conditions occupied by a species (Fig. 3d,h ) leads to thermal exposure across a species’ geographical range accumulating more gradually (Fig. 3i , mean abruptness = 18% ± 9% s.d.) and at a rate that is consistent with a null model of random independent exposure across most (84%) species (Fig. 3j ).
Fig. 4: The warm-skewed structure of species’ geographical ranges. a , The top histogram shows the interval (10%) of the realized thermal niche with the highest density of grid cells for each species ( n = 18,714 species). Only species occurring in at least 30 grid cells are included. Values are the multi-model mean under an intermediate GHG emissions scenario (SSP2-4.5). b , The density distribution of grid cells within species’ realized thermal niches is shown for a random sample of 2,500 species for a single climate model (CESM2-WACCM), coloured according to skew. The species in Fig. 1a–d are highlighted.
The warm-skewed structure of species’ geographical ranges is evident for both simulated climate and interpolated observed weather data (Extended Data Fig. 4 ), and mirrors the warm-skewed distribution of air and sea-surface temperatures available globally (Extended Data Fig. 5 ). Within most terrestrial regions, area declines with increasing elevation from warm lowlands to cold highlands 28 . Over larger spatial extents, the warm-skewed distribution of air and sea-surface temperatures arises because latitudinal bands cover a smaller area towards the poles and because of the relatively flat meridional temperature gradient across the tropics, compared to the narrower isotherms at high latitudes 29 . The greater area available at the warm end of thermal gradients has long provided a core explanation for latitudinal and elevational gradients in global biodiversity 29 , 30 . Our study suggests that this basic geometry of the planet also causes the distribution of temperatures across species’ geographical ranges to be skewed towards hotter conditions, making species vulnerable to abrupt thermal exposure even when the climate warms gradually. Simulations using a spreading dye algorithm ( Methods ) support this, showing that when species sample grid cells at random across the land or seascape, species’ geographical ranges are expected to be strongly warm-skewed, matching very closely the pattern observed in the empirical data (Extended Data Fig. 6 ).
Abrupt exposure risks under increased global warming
Because different GHG emission scenarios lead to similarly high rates of warming over the next two decades, thermal exposure expands abruptly (Fig. 2a ) and with similar timing (Fig. 2b ) irrespective of the future emission pathway (Supplementary Fig. 4 ).
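A toy version of the spreading dye simulation mentioned above illustrates why contiguous ranges grown at random on a warm-skewed temperature field end up warm-skewed themselves. The lattice size, cubic temperature profile and range sizes below are all hypothetical choices for this sketch, not the study’s configuration.

```python
import random
import statistics

random.seed(1)
ROWS, COLS = 60, 60

def temp(row):
    """Hypothetical temperature field: flat, warm 'tropics' at row 0 and
    increasingly steep cooling towards the 'pole', so far more grid cells
    share warm temperatures than cold ones."""
    return 30.0 - 20.0 * (row / (ROWS - 1)) ** 3

def spreading_dye(n_cells):
    """Grow a contiguous range of n_cells outward from a random seed cell."""
    seed = (random.randrange(ROWS), random.randrange(COLS))
    occupied, frontier = {seed}, {seed}
    while len(occupied) < n_cells and frontier:
        r, c = random.choice(sorted(frontier))
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < ROWS and 0 <= c + dc < COLS]
        free = [p for p in nbrs if p not in occupied]
        if free:
            cell = random.choice(free)
            occupied.add(cell)
            frontier.add(cell)
        else:
            frontier.discard((r, c))  # this cell can no longer grow
    return occupied

# Where does the median occupied temperature sit within each simulated
# range's own thermal niche? Values above 0.5 indicate warm-skew.
positions = []
for _ in range(200):
    temps = [temp(r) for r, _ in spreading_dye(150)]
    lo, hi = min(temps), max(temps)
    if hi > lo:
        positions.append((statistics.median(temps) - lo) / (hi - lo))
print(round(statistics.mean(positions), 2))
```

Because the field packs more cells into warm temperatures, the median occupied temperature of a random contiguous range sits above the midpoint of its niche, mirroring the empirical warm-skew in Extended Data Fig. 6.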
The major effect of GHG emissions, and thus the magnitude of twenty-first century global warming, is to drastically change the magnitude, that is, the area of species’ existing ranges at risk of thermal exposure (Fig. 2c ). Under intermediate (SSP2-4.5) and high (SSP5-8.5) emission scenarios, global temperatures increase throughout the century, driving a high magnitude of exposure that continues to accumulate for many decades (Extended Data Fig. 3 ). In contrast, under SSP1-2.6, warming plateaus by the middle of the century. This shorter duration of warming under SSP1-2.6 constrains thermal exposure to occur relatively more abruptly than in higher-emission scenarios because only those grid cells with a narrow warming tolerance are projected to become exposed, but the total area of any species’ existing range at risk of these abrupt exposure events is reduced. For any given GHG emission scenario, species with early, abrupt and widespread thermal exposure could be expected to be especially at risk. The species-level approach presented in this study, which integrates the abruptness, timing and magnitude of exposure, could increase the saliency of climate change risk information for threat assessments, such as the International Union for Conservation of Nature (IUCN) Red List, that often require information on both the extent and near-term timing of a threat 15 . Indeed, our analysis suggests that the area at risk of thermal exposure will expand abruptly for species assessed by the IUCN as threatened by extreme temperatures (Fig. 5 ).
Fig. 5: Increasing risks of abrupt thermal exposure with the magnitude of global warming. a , Percentage of species projected to experience abrupt and widespread thermal exposure (that is, 30% or more of their existing geographical range exposed in a single decade) for different levels of global warming ( n = 35,863 species).
Risk is estimated assuming a single range-wide upper thermal limit for a species (solid line) or a separate upper thermal limit for each grid cell within the geographical range (dashed line), thus assuming that populations are locally adapted to the conditions in each grid cell. The points show the risk across n = 15 climate model and GHG emission scenario combinations. b , c , Risk for terrestrial ( b ) and marine species ( c ) separately and for the species in each realm assessed under the IUCN as threatened by thermal extremes.
Comparing the dynamics of exposure across all combinations of climate models and GHG emission pathways reveals that the number of species at risk of thermal exposure events of both high magnitude and abruptness increases rapidly with the level of global warming (Fig. 5a ). For instance, at 1.5 °C of warming, 15% of species are at risk of experiencing exposure across at least 30% of their existing geographical range in a single decade, but this doubles to 30% of species at 2.5 °C of warming. This increase in risk is continuous, so that every fraction of a degree of warming that can be avoided reduces the number of species passing thermal thresholds leading to abrupt and widespread exposure. These results provide evidence that failure to achieve the Paris Agreement climate goals of limiting global warming ‘well below’ 2 °C will substantially increase the risk of sudden biodiversity losses. Our model assumes that exposure occurs when temperatures consistently exceed the hottest conditions across a species’ geographical range over recent history. However, for organisms where populations are adapted to local thermal regimes, such as some reef-building corals 31 , the departure from the bounds of local climate variability may be a more appropriate metric of exposure than those based on species’ range-wide thermal limits.
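The contrast between the range-wide and locally adapted thermal limits can be illustrated with a toy calculation. The five grid cells, their historical maxima and the 0.04 °C per year warming rate below are hypothetical numbers chosen for this sketch only.

```python
# Hypothetical historical maximum temperatures (°C) for five grid cells
# within one species' geographical range.
site_hist_max = {"A": 28.0, "B": 29.5, "C": 30.5, "D": 31.0, "E": 31.5}

def linear_exposure_year(current, limit, rate_centideg=4, start=2015):
    """First year local temperature exceeds `limit`, assuming linear warming
    of `rate_centideg` hundredths of a °C per year (0.04 °C/yr here).
    Integer arithmetic avoids float-division edge cases."""
    if current >= limit:
        return start
    deficit = round((limit - current) * 100)  # remaining headroom, 0.01 °C
    return start + deficit // rate_centideg + 1

range_wide_limit = max(site_hist_max.values())  # 31.5 °C
for site, hist_max in sorted(site_hist_max.items()):
    current = hist_max  # assume each cell currently sits at its historical max
    y_range = linear_exposure_year(current, range_wide_limit)  # range-wide limit
    y_local = linear_exposure_year(current, hist_max)          # local adaptation
    print(site, y_range, y_local)
```

Under the range-wide limit, only the warmest cell is exposed immediately and the coolest not until next century, whereas under strong local adaptation every cell is exposed at the start of the projection, mirroring the steeper dashed risk curve in Fig. 5a.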
Modifying our model to allow strong local adaptation across space (but not over time) results in a dramatically steeper increase in the number of species at risk of widespread and abrupt thermal exposure (Fig. 5a and Methods ). Even at current levels of global warming (approximately 1.1 °C), this model predicts that locally adapted species are at immediate risk of sudden and widespread thermal exposure, which is consistent with the mass bleaching and mortality of corals already occurring over wide geographical areas 17 and the ubiquitous long-term degradation of coral reefs projected to occur by 2 °C global warming in the absence of strong thermal adaptation over time 32 . While strong local adaptation in space greatly increases risks, other factors could lead to risks from thermal exposure being overestimated in our models. In particular, many species will be limited by environmental 23 or biotic 33 factors other than temperature and have fundamental thermal tolerances that exceed their upper realized limit 4 , 34 . Species can also be buffered against warming (at least temporarily) by behaviours to exploit cooler microclimates 35 , changes in phenology 36 , 37 , the evolution of higher thermal tolerance 13 , 32 or the contraction of populations into thermal refugia 38 , such as higher elevations on land or greater depths in the ocean. Thus, while the abruptness of projected thermal exposure is a ubiquitous phenomenon occurring across all terrestrial and marine organisms we studied, our simple temperature-based model of exposure will not be equally useful in understanding climate risks for all species 39 . However, uncertainty in thermal tolerances and heterogeneity in responses to thermal exposure is unlikely to alter our conclusion that thermal risks will expand abruptly across species’ existing geographical ranges under future warming. 
Repeating our analysis using only those terrestrial ( n = 240) and marine ( n = 866) species in our sample assessed by the IUCN as threatened by extreme temperatures ( Methods ) leads to a similar estimate of the acceleration in the risk of abrupt thermal exposure under future warming (Fig. 5b,c ). For these species at least, we can be more confident that increasing thermal exposure will adversely impact existing populations; our analysis suggests that these risks will expand abruptly over the coming decades.
Discussion
A new expectation for abrupt climate risks to biodiversity
With continued global warming, the risk of passing tipping points leading to major and irreparable disruption to key elements of the Earth system will increase 40 . These tipping points are characterized by positive feedback loops that can translate relatively gradual changes in forcing conditions into a nonlinear and sometimes rapid shift in the state of the system. Such amplifying feedbacks also characterize ecological tipping points, including the collapse of populations (for example, fisheries) and switching of ecosystems between alternative stable states (for example, Amazon forest dieback) 41 . Our results on the dynamics of thermal exposure for thousands of species provide an additional mechanism for a nonlinear increase in the magnitude of ecological disruption with future warming, one that does not require amplifying feedbacks. While one might expect that, in the absence of ecological or evolutionary dynamics, a linear rate of warming would result in a linear increase in the area of a species’ geographical range at risk of thermal exposure, we show that this expectation may be incorrect.
Instead, the warm-skewed structure of species’ geographical ranges means that as the climate warms, large numbers of localities that currently share similar thermal conditions will exceed the thermal tolerance of a species at similar times in the future, resulting in an abrupt expansion in the area that is thermally exposed, even when temperatures rise at a constant rate. This result may be informative not only for predicting threats from ongoing and future climate warming but also for understanding the causes of abrupt collapses of populations in response to past environmental change 42 . Our models focus on the area of species’ existing geographical ranges exposed over time and do not consider spatial variation in abundance 43 , which could either reduce or magnify the risks of thermal exposure. If abundance is also skewed towards species’ upper thermal limits 44 , these populations may be more resilient to exposure. Alternatively, such skewed abundance could mean that climate risks to species, in terms of the total number of individuals exposed, will increase even more abruptly than expected based on species presence alone. Variation in the steepness of thermal gradients across space is an important factor determining the velocity of climate change, that is, the rate at which species must disperse to track changing climates 45 . For a given magnitude of warming, regions with shallow thermal gradients across space require faster rates of dispersal than those where spatial thermal gradients are steep. However, metrics of climate change velocity do not indicate when thermal limits are likely to be exceeded or how the risks of thermal exposure spread across a species’ geographical range over time, and in particular whether this will occur at a constant rate or in sudden pulses. Our results show that thermal exposure is expected to expand in sudden pulses across species’ geographical ranges.
In this study, we do not attempt to project where species may potentially disperse to in the future. While expansion at cold-range margins is critical to understanding range shifts 5 and risks of global extinction for a species 12 , climate-driven decline or loss of local populations arising from thermal exposure will cause disruption to the integrity and stability of ecosystems regardless of the species’ ability to disperse elsewhere. Many species also face dispersal constraints and stand to lose more than they gain in range size from climate change 8 , such that abrupt thermal exposure that is widespread across species’ existing geographical ranges will increase the risk of their global extinction. Sudden and widespread thermal exposure could also impede expansion of species to cooler environments if collapsing populations further limit the capacity for dispersal and evolutionary rescue 42 , 46 . The ecological interactions 47 , demographic lags 48 and evolutionary processes 36 , 49 not considered in our models could variously either delay, dampen or amplify the risk of abrupt collapse in ways that will probably vary across species depending on both their ecology and life history. Thus, rather than providing predictions of the timing and dynamics of local extinction or geographical range loss, our models are best regarded as projections of how climate risks to species’ existing geographical ranges will expand over space and time. Our findings show that with continued warming, risks of exposure to dangerous thermal conditions are set to expand abruptly across the geographical ranges of thousands of species, highlighting the imperative of pursuing ambitious emission reduction targets to limit global warming well below 2 °C. 
They also highlight the critical need for advanced threat assessments that use more refined estimates of species’ niche limits and finer temporal resolution climate information to identify both where and when dangerous thresholds for warming will be exceeded for different species and ecosystems.
Methods
Biodiversity data
To model the dynamics of thermal exposure across species’ existing geographical ranges, we combined expert-verified geographical range maps for n = 35,863 species, from both terrestrial ( n = 31,790) and marine ( n = 4,073) environments (Extended Data Table 1 ), with climate model projections. Expert range maps provide the most comprehensive information available on species’ global geographical distributions 50 but are available for only some well-studied organism groups. Our sample includes birds, reptiles 51 , amphibians, mammals, marine fish, benthic marine invertebrates, habitat-forming corals and seagrass, krill 52 and cephalopods 53 (Extended Data Table 1 ). We included only native breeding geographical distributions for terrestrial taxa and excluded marine species that are restricted to depths greater than 200 m (the lower limit of the epipelagic zone) because these species are less likely to respond to changes in sea-surface temperature. Range maps were converted to a 96-km resolution equal-area grid (that is, grid cells), the finest resolution justifiable for these data globally without incurring false presences 54 and approximately matching the native resolution (approximately 1°) of simulated climate data.
Climate model projections
We used simulated monthly temperature projections from five GCMs developed for CMIP6 (Extended Data Table 2 ). For each model, we downloaded a single projection for near-surface air (TAS) and sea-surface temperature (TOS) (both in Kelvin and converted to Celsius) for the historical run (1850–2014), as well as the SSP1-2.6, SSP2-4.5 and SSP5-8.5 scenarios for the years 2015–2100.
Model output was downloaded from (accessed 16 December 2021). Climate model data were regridded to a 96-km resolution grid using an area-weighted mean interpolation. Because the adverse effects of thermal exposure are often associated with short-term temperature anomalies rather than long-term climate averages 24 , 32 , we modelled the dynamics of exposure according to the temperature of the hottest month each year, hereafter maximum monthly temperature (MMT). GCMs provide climate projections at a relatively coarse spatial resolution. An important consideration is the extent to which our conclusions are affected by the spatial grain size of the modelled climate data we use. To address this, we repeated our analysis for a subset of n = 10,356 terrestrial species using a regional climate model for South America obtained from the Coordinated Regional Downscaling Experiment. This model generates dynamically downscaled climate projections at a spatial resolution of 20 km, compared to the 100 km of GCMs. While the use of different spatial grains and climate models inevitably leads to differences in the timing, magnitude and abruptness of exposure, the overall dynamics were very similar (Supplementary Fig. 1 ). For example, under an intermediate GHG emission scenario (SSP2-4.5), the median abruptness of exposure for the species considered in this comparison is 63% and 73% at a 100-km and 20-km resolution, respectively (Supplementary Fig. 1 ). Thus, the abruptness of projected thermal exposure that we report is unlikely to be an artefact of the spatial grain at which climates are modelled, at least over the range of grain sizes explored in this study.
Defining species’ thermal limits and the timing of exposure
We define thermal exposure as the year after which conditions in a grid cell consistently exceed the upper realized thermal limit of a species. The realized niche describes the range of conditions, over both space and time, under which a species exists.
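To make the MMT definition concrete, the snippet below extracts the hottest month of each year from a monthly series for a single grid cell; the synthetic seasonal cycle and noise stand in for real model output.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 165  # the historical run, 1850-2014
# Hypothetical monthly temperatures (°C) for one grid cell:
# a seasonal cycle plus interannual noise, shaped (n_years, 12).
season = 10.0 * np.sin(2 * np.pi * np.arange(12) / 12)
monthly = 15.0 + season[None, :] + rng.normal(0.0, 1.0, (n_years, 12))

# Maximum monthly temperature (MMT): the hottest month of each year.
mmt = monthly.max(axis=1)
print(mmt.shape)  # one MMT value per year -> (165,)
```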
Beyond the realized niche, evidence for the ability of the species to persist in the wild is lacking, leading to, at best, a sizable increase in the uncertainty of species survival and, at worst, an increase in the likelihood the species will be committed to local extinction 10 . For each species i , we estimated the upper realized thermal limit \({\mathrm{SpeciesMaxMMT}}_{i}\) using the MMT projections from the historical run of each climate model (1850–2014), which includes variability due to observed changes in radiative forcing from natural factors (for example, volcanic eruptions), as well as anthropogenic emissions and land use changes 55 . Specifically, we calculated \({\mathrm{SpeciesMaxMMT}}_{i}\) by taking the maximum temperature historically experienced at each occupied grid cell j , \({\mathrm{SiteMaxMMT}}_{j}\) , and then the maximum of these values across the species’ geographical range. To prevent estimates of species’ thermal limits being inflated by outliers in either the temperature time series or from the overestimation of species’ geographical ranges 54 , we excluded values more than 3 s.d. above the mean value when calculating \({\mathrm{SiteMaxMMT}}_{j}\) and the maximum \({\mathrm{SiteMaxMMT}}_{j}\) across the species’ geographical range. Sensitivity analyses show that the precise way that \({\mathrm{SiteMaxMMT}}_{j}\) is calculated, including the length of the historical time window or whether outlier temperature values are included, has little effect on the projected dynamics of thermal exposure (Supplementary Fig. 2 ). For each species i , we calculated the timing of thermal exposure of each grid cell j as the Year ij after which the MMT j is projected to exceed the \({\mathrm{SpeciesMaxMMT}}_{i}\) for at least 5 consecutive years 10 .
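Putting these definitions together, the following is a minimal sketch (not the authors’ code) of the upper realized thermal limit with the 3 s.d. outlier exclusion and of the 5-consecutive-year exposure rule; the 20 °C histories and 0.04 °C per year warming trend are invented test inputs.

```python
import numpy as np

def site_max_mmt(mmt_series, n_sd=3.0):
    """Historical site maximum, excluding outliers > n_sd above the mean."""
    x = np.asarray(mmt_series, dtype=float)
    kept = x[x <= x.mean() + n_sd * x.std()]
    return kept.max()

def species_max_mmt(site_histories, n_sd=3.0):
    """Range-wide upper realized thermal limit: maximum of the trimmed
    site maxima, itself trimmed for outlier sites."""
    site_maxima = np.array([site_max_mmt(h, n_sd) for h in site_histories])
    kept = site_maxima[site_maxima <= site_maxima.mean()
                       + n_sd * site_maxima.std()]
    return kept.max()

def exposure_year(future_mmt, first_year, limit, run=5):
    """Year after which MMT exceeds `limit` for at least `run` consecutive
    years; None if the cell is not exposed within the series."""
    over = np.asarray(future_mmt) > limit
    for i in range(len(over) - run + 1):
        if over[i:i + run].all():
            return first_year + i
    return None

# A single 30 °C spike in an otherwise 20 °C history is trimmed as an outlier.
print(site_max_mmt([20.0] * 50 + [30.0]))  # -> 20.0

# A steady 0.04 °C/yr warming from 30 °C against a 31 °C limit (2015-2100).
future = 30.0 + 0.04 * np.arange(86)
print(exposure_year(future, 2015, 31.0))  # -> 2041
```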
Because of the long-term warming trend under the future SSP scenarios we used, an exposure period of 5 consecutive years equates to essentially permanent exposure this century. Thus, using longer exposure periods (for example, a run of 20 years) has been shown to have little influence on the timing of thermal exposure 10 . We note that using an alternative definition of thermal exposure, based on the first decade where any 5 years exceed \({{\mathrm{SpeciesMax}}{\mathrm{MMT}}}_{i}\) , resulted in very similar projected dynamics (Supplementary Fig. 3 ). We calculated Year ij using individual climate simulations rather than ensembles or multi-model averages because individual simulation runs include variance in climatic time series due to internal climate variability, such as the timing of El Niño-Southern Oscillation events 56 . This internal variability is a key component of the uncertainty in the timing of exposure and is smoothed out if using multi-model averages as input into the analysis. By calculating Year ij using individual model simulation runs and then summarizing across models, we capture the uncertainty in the timing of exposure due to both internal climate variability and climate model uncertainty (that is, uncertainty about climate physics across models), in line with ‘time of emergence’ analyses from climate science 57 , which identify when in the future local climate departs from the envelope of historical variability. 
Predicting the timing of thermal exposure within species’ geographical ranges

To understand the causes of variation in the timing of exposure across grid cells within a species’ geographical range, for each species i we fitted a linear model predicting Year ij as a function of both the magnitude of twenty-first century warming at grid cell j ,

$$\Delta \mathrm{MMT}_{j}=\overline{\mathrm{MMT}}_{2090-2100,\,j}-\overline{\mathrm{MMT}}_{2005-2014,\,j}$$

and the warming tolerance (WT) at grid cell j ,

$$\mathrm{WT}_{ij}=\mathrm{SpeciesMaxMMT}_{i}-\overline{\mathrm{MMT}}_{2005-2014,\,j}$$

The warming tolerance of a grid cell represents the difference between the species’ maximum realized thermal limit and the current temperature at that grid cell, analogous to the warming tolerance calculated for an individual organism based on the difference between the temperature of the organism’s habitat and their critical thermal maxima (CT max ) 58 . We jointly estimated the slope for each of these terms (Extended Data Fig. 2 ). We excluded grid cells that were not exposed by 2100 and restricted our analysis to species where at least ten grid cells were exposed to reliably estimate the slopes (Extended Data Table 3 ).

Metrics of thermal exposure dynamics

We summarized the dynamics of thermal exposure using three independent metrics that capture different dimensions of climate risk 10 . First, the magnitude of exposure was calculated as the percentage of grid cells across the species’ geographical range exposed by the end of the twenty-first century (Extended Data Fig. 1 ). Second, we calculated the timing of exposure for each species in two ways, as (1) the year of the first grid cell exposure time (that is, onset) and (2) the median grid cell exposure time (Extended Data Fig. 1 ). Grid cells not exposed before the end of the twenty-first century were excluded when calculating the median exposure year.
Third, we used a moving window of 10 years’ duration, advancing in annual increments, to quantify the abruptness of exposure as the percentage of all grid cell exposure times that occur in the decade of maximum exposure (Extended Data Fig. 1 ) 10 . For a given magnitude of exposure, a higher abruptness score indicates that most of the exposure that takes place is concentrated in a relatively narrow window of time. For each of these exposure metrics, we report the median value across the five climate models for a given GHG emission scenario (Extended Data Fig. 3 ). Metrics of abruptness are less informative when few grid cells are exposed. For example, a species exposed at a single grid cell must necessarily have an abruptness score of 100%, while for a species exposed at two grid cells, abruptness must be 50% or 100%. Using these values would artificially inflate the apparent abruptness of thermal exposure (Extended Data Fig. 7 ). Sensitivity analysis showed that this effect is negligible when more than approximately ten grid cells are exposed and so we used this as a cut-off, only including abruptness scores for species and GCM combinations where exposure occurs across at least ten grid cells ( n = 14,403 species under SSP2-4.5; see Extended Data Table 3 for sample sizes under different SSP scenarios). Using such an area threshold reduces the number of analysed species, particularly on land where species’ geographical range sizes are smaller, and under low GHG emission scenarios where the magnitude of exposure is lower (Extended Data Table 3 ). However, we found that the overall distribution of abruptness scores was highly consistent regardless of the cut-off used (from n = 10–250 exposed grid cells) (Extended Data Fig. 7 and Extended Data Table 3 ). Null model of abruptness Even if the thermal exposure of grid cells occurred as independent random events, some level of clustering in the timing of exposure events within species would be expected simply by chance. 
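The windowed abruptness score, together with the chance-level benchmark just introduced, can be sketched as follows (our function names and data layout, not the authors' code):

```python
import random

def abruptness(exposure_years, window=10):
    """Percentage of a species' grid-cell exposure events falling in the
    single decade (10-year window, stepped annually) with the most events."""
    events = sorted(y for y in exposure_years if y is not None)
    if not events:
        return None
    best = max(sum(1 for y in events if s <= y < s + window)
               for s in range(events[0] - window + 1, events[-1] + 1))
    return 100.0 * best / len(events)

def null_abruptness_95(n_exposed, first=2015, last=2100, n_rep=200, seed=1):
    """Null expectation: exposure years drawn uniformly at random, with
    replacement, keeping the number of exposed cells fixed. Returns the
    95% quantile of abruptness over the replicates (one-tailed test)."""
    rng = random.Random(seed)
    scores = sorted(
        abruptness([rng.randint(first, last) for _ in range(n_exposed)])
        for _ in range(n_rep))
    return scores[int(round(0.95 * (n_rep - 1)))]
```

A species whose abruptness score exceeds `null_abruptness_95` for its number of exposed cells is more temporally clustered than expected if exposure events occurred randomly over time.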
To understand how abruptly thermal exposure would be expected to occur by chance (that is, if exposure events occurred randomly over time), we conducted the following randomization procedure: for each species we randomly sampled, with replacement, the years between the first (2015) and final year (2100 or 2500) of the future climate simulation run, keeping the number of grid cells that are exposed fixed at the value projected for that species and GCM combination. We performed 200 replicate simulations and calculated the 95% quantile in projected abruptness (that is, a one-tailed test).

Partitioning the cause of abrupt thermal exposure

To test the factors driving the abruptness of projected thermal exposure, we conducted a computational experiment in which we systematically eliminated each potential cause by manipulating the future warming trend (Fig. 3 ). First, for each grid cell we removed short-term temperature fluctuations (that is, interannual and decadal) to generate a constant, monotonic trend of increasing temperatures over the twenty-first century (Fig. 3b ). To do this, we assumed a linear increase in temperature between the mean of the first (2005–2014) and final decade (2090–2100) of the century. Second, we downgraded this smoothed future warming trend by a factor of five, so that the level of warming projected to occur by the end of the century (2090–2100, approximately 2.5 °C under SSP2-4.5) is instead not reached until the middle of the millennium (2490–2500) (Fig. 3c ). This choice of time period is arbitrary but equates to a slow future rate of warming of approximately 0.3 °C per century (compared to approximately 1 °C of warming since approximately 1970).
Third, in addition to smoothing and downgrading the temperature time series, we eliminated the skewed distribution of grid cell warming tolerances for each species by making the current temperatures MMT 2005–2014 across grid cells within a species’ geographical range uniformly distributed between the species’ lower and upper realized thermal limit (Fig. 3d ). After each of these three steps, we recalculated the timing of grid cell thermal exposure events and the projected abruptness of thermal exposure across each species’ geographical range (Fig. 3e–i ). Skew in grid cell warming tolerances For each species, we calculated a number of metrics to describe the uneven distribution of occupied grid cells across the species’ realized thermal niche. First, we calculated the proportion of grid cells that are warmer than the midpoint of the realized thermal niche. Second, we divided the species’ realized thermal niche into ten equally spaced temperature intervals, ordered from the warmest (interval = 1) to the coldest (interval = 10), and identified the temperature interval covering the largest area (that is, the number of grid cells) (Fig. 4a ). When two or more temperature intervals were tied, we took the mean interval position and then calculated the median interval position for each species across GCMs. Third, we calculated the skew in grid cell warming tolerances across each species’ geographical range, where positive values indicate that most grid cells have a narrow warming tolerance and thus have temperatures close to the species’ upper realized thermal limit (Fig. 4b ). Finally, to illustrate how the density of occupied grid cells varies across the realized thermal niche, for each species we standardized MMT values between 0 (warm edge) and 1 (cold edge) and used kernel density estimation accounting for boundary effects 59 implemented in the R 60 package bde 61 (Fig. 4b ). 
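The skew statistic for grid-cell warming tolerances can be computed with the standard adjusted sample skewness; since the specific estimator is not stated in this excerpt, the following is an assumption:

```python
from statistics import mean, stdev

def wt_skewness(warming_tolerances):
    """Adjusted sample skewness of grid-cell warming tolerances
    (SpeciesMaxMMT_i minus current cell MMT). Positive values: most
    cells sit close to the species' upper realized thermal limit
    (narrow tolerance), with a tail of cooler cells."""
    n = len(warming_tolerances)
    m, s = mean(warming_tolerances), stdev(warming_tolerances)
    return (n / ((n - 1) * (n - 2))) * sum(
        ((x - m) / s) ** 3 for x in warming_tolerances)
```

For example, a range where eight of ten cells have a warming tolerance near zero and two cells are much cooler yields a strongly positive skew.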
To ensure these patterns were not an artefact of using simulated climate data, we repeated our analysis using observed weather data on the mean daily maximum air temperature (1970–2000) 62 and mean sea-surface temperature of the warmest month (2000–2014) 63 , both available at 1-km resolution (Extended Data Fig. 4 ). To match the scale of the simulated climate data, we extracted the average air or sea-surface temperature within approximately 100-km grid cells. Geographical constraints on species’ thermal occupancy To describe the background availability of thermal conditions, we calculated the probability density of air and sea-surface temperatures globally at 100-km grid cell resolution (Extended Data Fig. 5 ). We repeated this for both simulated and observed climate data obtaining highly consistent results (Extended Data Fig. 5 ). For the observed weather data, we also calculated the probability density of temperatures averaged at different spatial grains, from the original 1-km resolution up to 768-km grid cells, obtaining very similar patterns (Extended Data Fig. 5 ). Thus, the warm-skewed distribution of temperatures globally is not an artefact of the particular spatial resolution used. To test if the warm-skewed structure of species’ geographical ranges is consistent with that expected due to the background availability of thermal conditions, we implemented a null model based on a spreading dye algorithm 64 (Extended Data Fig. 6 ). This approach has been applied in studies examining the distribution of species richness 65 and geographical ranges 23 , 66 expected in the absence of environmental gradients. Starting from a single randomly selected grid cell within the observed species’ geographical range, subsequent grid cells are sequentially added until the observed range size is reached. 
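A minimal sketch of such a spreading-dye simulation on a gridded domain (an illustrative implementation, not the authors' code; range cohesion is enforced by only occupying cells cardinally adjacent to the current range):

```python
import random

def spreading_dye(domain, start, range_size, rng=None):
    """Grow a cohesive null-model range of `range_size` cells from `start`
    by repeatedly occupying a randomly chosen cell adjacent (in any of the
    four cardinal directions) to the cells already occupied. `domain` is
    the set of (row, col) cells available, e.g. land for terrestrial
    species or ocean for marine species."""
    rng = rng or random.Random(0)
    occupied = {start}
    while len(occupied) < range_size:
        frontier = sorted(
            ({(r + dr, c + dc)
              for r, c in occupied
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))}
             & domain) - occupied)
        if not frontier:  # isolated fragment cannot grow further
            break
        occupied.add(rng.choice(frontier))
    return occupied
```

Because cells are added at random from the adjacent frontier, the simulated range preserves size and cohesion while occupying thermal conditions only in proportion to their availability.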
Grid cells were selected at random but we enforced geographical range cohesion by sampling from those grid cells that were adjacent, in any of the four cardinal directions, to those already selected. We simulated terrestrial and marine species separately, sampling grid cells from their respective domains. For those species distributed across multiple isolated regions (for example, different continents, ocean basins), each fragment of the species’ geographical range was simulated separately. For each species, we performed 20 replicate simulations. Our simulations thus maintained the observed size, cohesion and approximate position of each species’ geographical range but assumed that the occupation of grid cells is entirely random 66 . Thus, the distribution of temperatures across a species’ geographical range in this model is dependent only on the availability of thermal conditions.

Global warming levels and risks of abrupt thermal exposure

To understand how the risk of abrupt and widespread thermal exposure increases with the magnitude of climate warming, for each combination of GCM and GHG emission scenario ( n = 15 combinations), we calculated the projected increase in global-mean surface temperature (GST) between the pre-industrial (1850–1900) and end of the century (2080–2100) periods, by averaging air temperatures across the land and sea-surface temperatures across the oceans 10 . We then fitted a generalized additive model to estimate how the percentage of species where at least 30% of their existing geographical range is exposed in a single decade varies as a function of GST (Fig. 5 ). This 30% threshold is arbitrary but we note that similar qualitative patterns were obtained when using alternative thresholds (20–60%; Supplementary Fig. 5 ). We fixed the percentage of species passing this threshold to equal 0 at 0.84 °C, the average GST across climate models at the end of the historical climate run (2006–2014).
This model thus assumes that risks of abrupt and widespread thermal exposure only began this century. This is a conservative assumption, given that many species started to experience the adverse effects of warming earlier than this. Species’ realized thermal limits and the consequences of thermal exposure The extent to which \({{\mathrm{SpeciesMax}{\mathrm{MMT}}}}_{i}\) reflects fundamental limits to species persistence, and thus the risk of adverse consequences from thermal exposure, will vary across species. For species where thermal tolerance exceeds the range of conditions experienced previously 4 , 34 , thermal exposure (as defined in this study) may occur without adverse consequences for local populations. For species that had already experienced the adverse impacts of warming (for example, mass mortality 32 , 67 , population declines 3 , local extinctions 1 , 2 , range contractions 4 , 5 , 6 ) before 2014 (that is, the end of the historical climate model run), our estimates of \({{\mathrm{SpeciesMax}{\mathrm{MMT}}}}_{i}\) may overestimate thermal tolerance and thus underestimate risks from thermal exposure. It is beyond the scope of our analysis to determine the number of species where future thermal exposure is more likely to have adverse impacts on populations. However, for the many species where extreme temperatures have already been identified as a threat, we can at least evaluate for these species how risks from thermal exposure are projected to spread over time. To do this we repeated our analysis using a subset of species where extreme temperatures have been identified as an ongoing or future threat (Fig. 5b,c ). The IUCN Red List of threatened species is the most comprehensive index of global species extinction risks 68 . 
Although the assessment of risk from climate change is recognized as incomplete and biased towards particular groups, for those species where climate change is listed we can be more confident that future climate warming will represent an increasing threat to the long-term survival of populations and the species 14 , 15 . Of the species assessed by the IUCN, we extracted those where the IUCN Red List identifies climate change as a threat (level-1 threat classification = 11), regardless of the species’ current Red List category ( n = 2,485). We restricted our analysis to those species specifically identified as threatened by thermal extremes (level-2 threat classification = 11.3) rather than any other aspect of climate change. We also excluded species where thermal extremes were listed as a past threat that is unlikely to return. In total, n = 1,106 species in our analysis have thermal extremes listed as a threat. For most of these species, thermal extremes are listed as an ongoing threat ( n = 1,042), with a smaller number listed as unknown ( n = 8), future ( n = 51) or a past threat, likely to return ( n = 5). We note that for most of these species, the severity ( n = 857) and scope ( n = 844) of the threat posed by thermal extremes, indicating the pace of population decline (severity) and the proportion of the population affected (scope), are not evaluated. Local adaptation and risks of thermal exposure Using \({{\mathrm{SpeciesMax}{\mathrm{MMT}}}}_{i}\) assumes that thermal exposure is governed by a single species’ range-wide thermal limit. However, populations may be locally adapted in space, potentially leading to higher risks of thermal exposure 69 . To consider this possibility, we estimated the risk of abrupt and widespread thermal exposure assuming that populations at each grid cell are perfectly adapted to the local thermal regime (Fig. 5a ). 
In this case, the thermal limit of species i at grid cell j is defined by the maximum MMT experienced at that grid cell \({{\mathrm{SiteMax}{\mathrm{MMT}}}}_{j}\) and Year ij is equivalent to the timing of local climate emergence 70 . We aggregated Year ij values across the grid cells occupied by a species to calculate the magnitude and abruptness of exposure across the geographical range. Species are likely to vary in the strength and scale of local adaptation but the information required to parameterize this variation is not widely available. Thus, simulations using either \({{\mathrm{SpeciesMax}{\mathrm{MMT}}}}_{i}\) or \({{\mathrm{SiteMax}{\mathrm{MMT}}}}_{j}\) to calculate Year ij provide a best and worst case scenario, respectively, for risks of thermal exposure based on realized distributions. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Climate and biodiversity data are freely available for download or upon request from the original sources. Data generated for this project are available at . Code availability The code used to conduct the analysis is available at .
Climate change is likely to abruptly push species over tipping points as their geographic ranges reach unforeseen temperatures, finds a new study led by a UCL researcher. The new Nature Ecology & Evolution study predicts when and where climate change is likely to expose species across the globe to potentially dangerous temperatures. The research team from UCL, University of Cape Town, University of Connecticut and University at Buffalo analyzed data from more than 35,000 species of animals (including mammals, amphibians, reptiles, birds, corals, fish, cephalopods and plankton) and seagrasses from every continent and ocean basin, alongside climate projections running up to 2100. The researchers investigated when areas within each species' geographical range will cross a threshold of thermal exposure, defined as the first five consecutive years where temperatures consistently exceed the most extreme monthly temperature experienced by a species across its geographic range over recent history (1850–2014). Once the thermal exposure threshold is crossed, the animal is not necessarily going to die out, but there is no evidence that it is able to survive the higher temperatures—that is, the research projects that for many species there could be an abrupt loss of habitat due to future climate change. The researchers found a consistent trend that for many animals, the thermal exposure threshold will be crossed for much of their geographic range within the same decade. Lead author Dr. Alex Pigot (UCL Center for Biodiversity & Environment Research, UCL Biosciences) said, "It is unlikely that climate change will gradually make environments more difficult for animals to survive in. Instead, for many animals, large swaths of their geographic range are likely to become unfamiliarly hot in a short span of time. 
"While some animals may be able to survive these higher temperatures, many other animals will need to move to cooler regions or evolve to adapt, which they likely cannot do in such short timeframes. "Our findings suggest that once we start to notice that a species is suffering under unfamiliar conditions, there may be very little time before most of its range becomes inhospitable, so it's important that we identify in advance which species may be at risk in coming decades." The researchers found that the extent of global warming makes a big difference: if the planet warms by 1.5°C, 15% of species they studied will be at risk of experiencing unfamiliarly hot temperatures across at least 30% of their existing geographic range in a single decade, but this doubles to 30% of species at 2.5°C of warming. Dr. Pigot added, "Our study is yet another example of why we need to urgently reduce carbon emissions to mitigate the harmful effects climate change is having on animals and plants, and avoid a massive extinction crisis." The researchers hope that their study could help with targeting conservation efforts, as their data provides an early warning system showing when and where particular animals are likely to be at risk. Co-author Dr. Christopher Trisos (African Climate and Development Initiative, University of Cape Town) said, "In the past we've had snapshots to show the impact of climate change, but here we are presenting the data more like a film, where you can see the changes unfold over time. This shows that for many species the risk is a bit like everything, everywhere, all at once. By animating this process, we hope to help direct conservation efforts before it's too late, while also showing the potentially catastrophic consequences of letting climate change continue unchecked." 
The researchers say that this pattern of abrupt exposure may be an inevitable feature of living on a round planet—because of the shape of the Earth, there is more area available to species in environments near the hot end of what they are used to, such as in low-lying areas or near the equator. A previous study by the same lead authors found that even if we stop climate change so that global temperatures peak and start to decline, the risks to biodiversity could persist for decades after. In another analysis similar to the current study, they found that many species facing unfamiliar temperatures will be living alongside other animals experiencing similar temperature shocks, which could pose grave risks to local ecosystem function.
10.1038/s41559-023-02070-4
Earth
Stress in Earth's crust determined without earthquake data
Andrew A. Delorey et al., Estimation of the orientation of stress in the Earth's crust without earthquake or borehole data, Communications Earth & Environment (2021). DOI: 10.1038/s43247-021-00244-1 Journal information: Communications Earth & Environment, Nature
http://dx.doi.org/10.1038/s43247-021-00244-1
https://phys.org/news/2021-10-stress-earth-crust-earthquake.html
Abstract

Mechanical stress acting in the Earth’s crust is a fundamental property that is important for a wide range of scientific and engineering applications. The orientation of maximum horizontal compressive stress can be estimated by inverting earthquake source mechanisms and measured directly from borehole-based measurements, but large regions of the continents have few or no observations. Here we present an approach to determine the orientation of maximum horizontal compressive stress by measuring stress-induced anisotropy of nonlinear susceptibility, which is the derivative of elastic modulus with respect to strain. Laboratory and Earth experiments show that nonlinear susceptibility is azimuthally dependent in an anisotropic stress field and is maximum in the orientation of maximum horizontal compressive stress. We observe this behavior in the Earth—in Oklahoma and New Mexico, USA, where maximum nonlinear susceptibility coincides with the orientation of maximum horizontal compressive stress measured using traditional methods. Our measurements use empirical Green’s functions and solid-earth tides and can be applied at different temporal and spatial scales.

Introduction

Knowledge of the mechanical stress acting in the Earth’s crust and lithosphere is important for a wide range of geophysical studies and applications 1 , 2 , 3 , 4 including plate tectonics 5 , 6 , seismicity and faulting 7 , 8 , 9 , 10 , 11 , and subsurface fluid behavior 9 , 12 , 13 . The stress field is commonly represented as the orientation of the maximum horizontal compressive stress ( S Hmax ) 3 , 8 , 14 , 15 , 16 , and other information regarding the principal components is often not known, much less the full stress tensor. At regional to tectonic-plate scales (100s to 1,000s of km), the orientation of S Hmax is influenced by lateral plate boundary forces, tractions along the bottom of the lithosphere, and gravitational potential 17 .
At local scales (<100 km), the orientation of S Hmax may vary owing to heterogeneities in density and elasticity, slip on faults 7 , 12 , and pore pressure 12 . The orientation of S Hmax is commonly estimated using borehole-based methods 18 , 19 , inverting earthquake focal mechanisms 20 , 21 , 22 , 23 , 24 , and less commonly by measuring the orientation of young, stress-sensitive geologic features 25 . Shear wave splitting and azimuthal seismic anisotropy are sometimes related to the stress field, but also reflect past deformation 26 . Borehole-based methods are high-cost, point measurements 27 , and are commonly applied in hydrocarbon-producing regions 8 . Interpreting earthquake focal mechanisms is limited to seismically active areas and requires an adequate monitoring network to produce high-quality, low-uncertainty source mechanisms. Because of limitations with these techniques, the stress field in broad regions of continental interiors is poorly constrained 15 . Rocks are heterogeneous materials with stress- and strain-dependent elastic properties, and finite, nonzero relaxation times (slow dynamics) 28 , 29 , 30 . This is in contrast to ideal linear elasticity (Hooke’s Law), in which the elastic modulus is insensitive to strain, and the elastic response is instantaneous 31 . In individual rock samples, the relationship between stress, strain, and elasticity is complex 32 , 33 , with mechanical damage and weak grain contacts being primarily responsible for nonlinear elastic behavior in brittle rocks 34 . Temperature, pressure, and the presence of fluids modulate the nonlinear behavior 34 , 35 . In the Earth, seismic velocities are commonly observed to be faster when rocks are compressed, usually interpreted as the closing of cracks and stiffening of internal contacts 36 , 37 , whereas they are typically slower after experiencing strong shaking, usually interpreted as the breaking or weakening of internal contacts 38 , 39 .
After a dynamic disturbance, the material relaxes back to its original or a new metastable state via the process of slow dynamics 39 , 40 . Thus, rocks are metastable in their elastic behavior and strongly influenced by relatively weak external forces perturbing their material structure 35 , 38 , 39 . Rock samples in laboratory experiments exhibit anisotropic linear and nonlinear elastic properties when differential stress is applied 41 , 42 . In two dimensions, differential stress is the difference between the two principal stresses. To demonstrate this nonlinear effect, Nur and Simmons (1969) applied uniaxial stress to a cylinder of granite, normal to the cylinder axis, and measured the travel time of an elastic wave as a function of angle with respect to the uniaxial stress 42 . The derivative of the wave modulus with respect to strain (called nonlinear susceptibility, NS) was strongest when the angle between the uniaxial stress and the propagation of the probe wave was zero, and weakest when the angle was 90 degrees (Fig. 1 ). The effect was greatest for compressional P-waves, and the nonlinear elastic behavior can be quantified by measuring it. Stress-induced anisotropy in linear elasticity is exhibited by the velocity being faster for P-waves traveling in the same orientation as the applied uniaxial stress, rather than perpendicular to it. Stress-induced anisotropy in nonlinear elasticity is exhibited by the stress derivative of P-wave velocity being higher in the same orientation as the applied uniaxial stress, rather than perpendicular to it. Under a modest uniaxial or differential stress (1 MPa), one can observe anisotropy in linear elasticity of a few percent, whereas one can observe anisotropy in nonlinear elasticity of hundreds of percent 41 , 42 .
Hence, it is preferable to measure anisotropy in nonlinear elasticity to infer principal stress orientations, rather than anisotropy in linear elasticity, as is done with shear wave splitting and azimuthal anisotropy of surface or body waves.

Fig. 1: Stress-induced anisotropy. Stress-induced anisotropy in nonlinear susceptibility with data from Nur and Simmons 42 . The vertical axis represents nonlinear susceptibility. The horizontal axis shows the angle between the orientation of the uniaxial stress and the orientation of propagation of the probe wave.

In addition to a nonlinear response to quasi-static stress, rocks exhibit creep behavior when compressed 43 . In this context, quasi-static means that the response of a system is as fast as the applied disturbance, and also that there is no inertia. Creep is delayed, time-dependent, and occurs when the strain response of a material is slower than the rate of the applied stress. The creep rate in rocks is highly sensitive to the magnitudes of the confining pressure and differential stress 43 . Yamamura et al. 29 measured elastic wave travel times in rock throughout the periodic tidal strain cycle and observed that travel time differences from peak extension to the next peak compression (~6 h) were only weakly sensitive to the strain rate. Over several semi-diurnal (~12 h) cycles, the mean cycle travel time was most sensitive to the magnitude of peak extension, and relatively insensitive to the magnitude of peak compression. This indicates that the material response to extension was instantaneous while the material response to compression was delayed. The difference in travel time between peak extension and the next peak compression, and the mean cycle travel time, were strongly controlled by an apparently constant creep rate.
The confining pressure and differential stress were constant in this study, but we expect the creep rate to be slower for higher confining stress and faster for higher differential stress, as it is in laboratory experiments on rocks 43 . Thus, under constant confining pressure, as is generally present in the Earth, we can use the creep rate to infer information about the differential stress. We exploited this nonlinear elastic behavior in rocks and applied a technique to passively monitor the orientation of S Hmax in the lithosphere. Our approach to measure S Hmax orientation in situ relied on seismic velocity measurements that employed Empirical Green’s Functions (EGF) derived from ambient Earth noise recorded at multiple pairs of seismic stations 44 . Cross-correlations of diffuse seismic wavefields, such as ambient Earth noise or scattered coda waves, can be used to estimate Green’s function. Most studies on Earth that measure temporal changes in seismic velocities do so by differencing the phase in the coda part of the EGF 36 , 38 , 39 . The coda of the EGF follows the direct waves and is the result of scattered waves that travel through some volume between the two stations 45 . We measured the velocity sensitivity to strain using a classic nonlinear acoustic approach known as the pump-probe method 33 , where, in a laboratory setting, the material is strained with a low-frequency oscillation (pump) and the elasticity is monitored by measuring the travel time of a high-frequency probe wave applied at different points in the pump cycle. For this study, solid-earth tides were used as the low-frequency pump and EGFs were the high-frequency probe. Solid-earth tides produce peak-to-peak axial strains up to 5 × 10 −8 (ref. 46 ), which corresponds to an axial stress of ~3.8 kPa, using a P-wave velocity of 5 km s −1 for the shallow crust together with a reasonable bulk modulus of 63 GPa and Poisson’s ratio of 0.25.
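As a back-of-envelope check on the ~3.8 kPa figure, one can scale the tidal strain by the P-wave modulus M = ρVp²; the crustal density used below is our assumption, not a value quoted in the paper:

```python
def tidal_axial_stress(strain=5e-8, vp=5000.0, rho=3000.0):
    """Back-of-envelope axial stress (Pa) from peak-to-peak tidal strain,
    scaling by the P-wave modulus M = rho * vp**2. With Vp = 5 km/s and
    an assumed shallow-crust density of ~3,000 kg/m^3, M is ~75 GPa and
    the stress is ~3.8 kPa, the order of magnitude quoted in the text."""
    return rho * vp ** 2 * strain
```

With the defaults this evaluates to about 3,750 Pa, consistent with the ~3.8 kPa estimate.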
The areal strain tensor is elliptical with its long axis pointing towards the point on the Earth most directly facing the Moon (for Moon cycles) and Sun (for Sun cycles). For our study areas, there was an average ratio of ~2.5 between the south–north and west–east axes of the strain tensor, and this ratio is latitude-dependent. The magnitude of differential stress in Earth’s brittle upper crust is uncertain in most areas, but is on the order of tens of MPa, decreasing toward the surface 11 . Consequently, the stress due to solid-earth tides is around three orders of magnitude less than the differential stress in the Earth’s crust. By measuring changes in seismic velocity over the average tidal cycle, we could approximate the derivative of the seismic velocity with respect to strain, and also estimate the creep rate during compression. If the behavior we observe is similar to that shown by Yamamura et al. 29 , in a shallow cave, then the system in the Earth’s upper crust is not quasi-static and creep behavior is important. We performed this natural pump-probe experiment in two prototype studies located in north-central Oklahoma and north-central New Mexico, USA (Fig. 2 ). We selected north-central Oklahoma because of the ongoing induced seismicity, generated by decades of injected wastewater from oil and gas operations 47 , 48 , that tends to occur on faults optimally orientated in the regional stress field 9 . North-central New Mexico was selected to test if we could resolve similar results in a geologic setting that straddles a continental rift separating the Colorado Plateau from the stable craton and has many different S Hmax orientations from Oklahoma 16 , 49 (Fig. 2 ). In north-central Oklahoma, S Hmax is oriented approximately N80E with some local variations 9 , but in north-central New Mexico, S Hmax is aligned nearly south–north along the Rio Grande Rift and rotates to a more east–west orientation eastward into northeastern New Mexico 16 .
The dominant faulting style in Oklahoma is strike-slip, though there is some normal faulting in the northern part of our study area, whereas normal faulting strongly dominates in northern New Mexico, associated with active extension along the Rio Grande Rift 8 . Fig. 2: Geographic location of the study area and seismic stations. The rectangle in a indicates the position of b within North America. The blue circles are seismic stations from Earthscope’s Transportable Array. The red circles are seismic stations from the Nanometrics Research Array. The short black lines indicate the direction of S Hmax from the 2016 release of the World Stress Map (WSM) 25 using conventional methods. Classes A, B, and C are WSM quality ratings that are identified according to length. Full size image Our results show that the Earth exhibits stress-induced anisotropy of NS that is aligned with S Hmax in these two different geologic settings, suggesting that the creep rate is fastest in the orientation of S Hmax . Since our measurements use only ambient seismic noise, there are several advantages over existing methods for estimating the orientation of S Hmax : (1) earthquake source properties are not used or required, (2) borehole measurements are not used or required, (3) sufficient seismic data exist in many regions of interest where traditional stress measurements are unavailable, and (4) the technique can be applied at a wide range of spatial and temporal scales. Results Our first measurement was to compare average seismic velocities in the Earth when tidal strains are extensional versus when tidal strains are compressional. We used the scattered wavefield in the coda of the EGFs calculated from the vertical components of the seismometers (Fig. 3 ). These scattered waves consisted of Rayleigh, compressional, and vertically polarized shear waves, though the relative amounts were difficult to discern.
In both study areas, the Earth was slower during extension than during compression, by fractional velocities of 0.04% with an uncertainty of ±0.003% for Oklahoma, and 0.03% with an uncertainty of ±0.01% for New Mexico. This is consistent with softening of internal contacts during extension and stiffening of internal contacts during compression 29 , 36 . In Figs. 4 and 5 , we report NS as the fractional velocity change between velocities measured when the Earth is in compression and velocities measured when the Earth is in extension, though NS is properly the fractional velocity change per unit strain. Tidal strain is of the order of 10 −8 and varies somewhat from cycle to cycle and with azimuth. Our results reflect the average peak-to-peak strain amplitude, which is discussed below. Fig. 3: Empirical Green’s functions. Empirical Green’s functions (Z-Z correlations), for a Oklahoma, and b New Mexico, ordered by interstation distance. The black lines bracket the coda used for the velocity calculations. Full size image Fig. 4: Oklahoma results. For Oklahoma, we show the seismic stations and azimuthal dependence of the fractional change in velocity from when tidal strains were in compression to when tidal strains were in extension. In a – f , we show the S Hmax results from this study with a directional indicator, the results of an f test comparing the sine model to a uniform model, and the uncertainty in the azimuthal prediction. In g – l , the short black lines indicate the direction of S Hmax from the 2016 release of the World Stress Map (WSM) 25 using conventional methods. Classes A, B, and C are WSM quality ratings that are identified according to length. Solid red stations are used to calculate the fractional velocity changes shown in the adjacent panels m – r . Vertical red bars represent 1-sigma standard deviation uncertainties in the azimuthal measurements.
The blue curve is the average of 1000 realizations (gray lines) of the best-fit sine function applying uncertainties in azimuthal measurements. The minimum value of the blue curve indicates the orientation of S Hmax . Full size image Fig. 5: New Mexico results. For New Mexico, we show the seismic stations and azimuthal dependence of the fractional change in velocity from when tidal strains were in compression to when tidal strains were in extension. a – c show the S Hmax results from this study with a directional indicator, the results of an f test comparing the sine model to a uniform model, and the uncertainty in the azimuthal prediction. In d – f , the short black lines indicate the direction of S Hmax from the 2016 release of the World Stress Map (WSM) 25 using conventional methods. Classes A, B, and C are WSM quality ratings that are identified according to length. Solid blue stations are used to calculate the fractional velocity changes shown in the adjacent panels g – i . Vertical red bars represent 1-sigma standard deviation uncertainties in the azimuthal measurements. The blue curve is the average of 1000 realizations (gray lines) of the best-fit sine function applying uncertainties in azimuthal measurements. The minimum value of the blue curve indicates the orientation of S Hmax . Full size image Oklahoma In Oklahoma, a sine function fitted to the results shows that the maximum (negative magnitude) NS occurs between 69 and 91°, depending upon the selection of stations (Figs. 2 and 4 ). Borehole measurements and a focal mechanism inversion show S Hmax orientations to be between 71° and 84° in the same region 9 . In that previous study, the reported S Hmax azimuth was 71° ± 6° in the north (their area 2 N) and 82° ± 6° in the south (their areas 3 and 4; see Fig. 1 and Table 1 in Alt and Zoback 9 ). If we compare Fig. 4a–d with Fig. 4e–f , we see a similar stress rotation from north to south.
By considering S Hmax orientations from the 2016 release of the World Stress Map (WSM) 25 shown in Figs. 4g–l and 2 , we can see that azimuths begin to decrease again (rotate more southwest–northeast) in the southernmost part of our study area. Some of the rotated stress indicators are outside our coverage area, and we do not detect this rotation, so this is a pointed disagreement between our measurements and the WSM. The areas considered in Alt and Zoback 9 that we used for comparison do not extend far enough south to include these more southwest–northeast oriented stress indicators. It is possible that we would detect this rotation with additional data to the south. Our measurements and these previously reported values reflect different spatial scales, so exact comparisons are difficult. We note that considering only some of the northern stations produces ambiguous results because azimuthal variations are not clearly sinusoidal. This ambiguity, and a few positive values in Fig. 4 that indicate velocities are faster during extension, may be the result of poroelastic effects, and are discussed in more detail below. We estimated uncertainties in the predicted orientation of S Hmax by considering the uncertainties in the azimuthal measurements of NS (red bars on Fig. 4 ). The uncertainties in azimuth vary from 0.5 to 2.7 degrees depending upon the subarray. In addition, we used an f test to evaluate whether a sine function, which has three parameters (amplitude, phase, mean), is a statistically better model than a uniform function, which has one parameter (mean). Our results show that a sine function is a statistically better model for the observations, with p-values ranging from 0.001 to 0.05, depending on the subarray. These results are reported in Fig. 4 . New Mexico In New Mexico, a sine function fitted to the results from all nine stations (blue circles, Figs. 2 and 5 ) shows that the maximum (negative magnitude) NS occurs at 173° (Fig. 5 ).
The maximum NS using only the six westernmost stations is 1° and using only the six easternmost stations is 161°, with the three central stations used in both subarrays. The regional published S Hmax orientations 9 , 25 show a transition, moving west to east, from slightly southwest–northeast to south–north within the Rio Grande Rift at this latitude (Fig. 2 ). Continuing east, a southeast–northwest S Hmax orientation may be expected if we interpolate between indicators in the Rio Grande Rift and indicators in northeastern New Mexico, although no published S Hmax orientations are available in the vicinity of the three eastern stations. Employing only the six western stations, the NS suggests that S Hmax is 1°, which is consistent with S Hmax orientations observed within the rift valley and mountains to the west. Employing only the six easternmost stations, NS suggests that S Hmax is 161°, which is counterclockwise of the results of the six western stations, and intermediate between those reported within the Rio Grande Rift and the southeast–northwest S Hmax orientations reported east of our study area (east of the blue circles in Fig. 2 ). Our NS-derived S Hmax results for all nine stations are in agreement with the average orientation of published S Hmax orientations within the footprint of the seismic array 9 , 25 . As no S Hmax orientations exist for the eastern part of the study area, our results using the eastern stations suggest that the clockwise rotation observed from east to west continues into our study area. The results provide a clear example of constraining the stress field using passive seismic data in a region where no other estimates are available. We estimated uncertainties in the predicted orientation of S Hmax , and tested a sine function versus a uniform function as models to describe the azimuthal variations, in the same manner as for the Oklahoma results. The uncertainties in azimuth vary from 3.0 to 4.4 degrees depending upon the subarray.
Our results show that a sine function is a statistically better model for the observations when using all nine stations, with a p value of 0.0005. Using only the six western stations, the p value is 0.05, and using only the six eastern stations the p value is 0.5. So, for the eastern stations, we cannot reject the null hypothesis that a uniform function describes the observations as well as a sine function does. These results are reported in Fig. 5 . The uncertainties that we report on Figs. 4 and 5 represent 1-sigma standard deviation measurement uncertainties for average stress orientations and not the variability in stress orientations within the corresponding seismic array. It is possible to have low uncertainty for average behavior from a set of individual measurements with higher uncertainties. To explore this further, we can consider four panels, Fig. 4a–d . As we remove northern stations, the remaining interstation paths produce a lower azimuth (74°–69°), then higher (69°–74°), then lower again (74°–69°), with measurement uncertainties of ~1°. This suggests something like a striped pattern in the spatial distribution of stress orientations. This could be true, but it also leads us to believe that our uncertainties may be underestimated by up to a few degrees, and that perhaps the true average for this area is simply somewhere between 69° and 74°. The contribution of data and measurement uncertainties from each pair of stations to the final result is not linear, as it would be with simple averaging. Discussion Here, we discuss different mechanisms that may explain our observation that the fractional velocity change (Δv v −1 ) between tidal extension and tidal compression (Δv Δε −1 ) was greatest in the orientation of S Hmax . In general, we assumed that measured velocity variations are related to the stiffness across internal contacts 41 , 42 .
These contacts, such as grain boundaries or fractures, produce nonlinear behavior and have a direction in which they open (weaken) and close (strengthen) 28 , 50 , 51 . We assumed that internal contacts naturally exist at all orientations in rocks of the subsurface, and considered only those that open and close in a horizontal direction because these internal contacts are most strongly affected by the differential, areal stress. From this population, we considered how internal contacts with different orientations respond to tidal forcing and the ambient stress field ( S Hmax ) together. When we refer to contact orientations below, we mean the orientation in which they open and close. For both P-waves and S-waves, velocities are sensitive to contacts oriented in the same direction as wave propagation, though the sensitivity is higher for P-waves than for S-waves 41 , 42 . In the notation of Nur and Simmons 42 and Johnson and Rasolofosaon 41 , both Love and Rayleigh waves would be considered SH waves, because those studies consider a vertically applied uniaxial stress, whereas in the Earth the uniaxial (differential) stress is horizontal. A shear wave would have to travel vertically and be polarized in the orientation of the uniaxial stress to be considered SV in their notation. Because we use vertical component data, we expect that our wavefield consists of scattered Rayleigh waves, P-waves, and V sv (vertically polarized) shear waves. According to laboratory and theoretical results, the pressure derivative of P-waves, at small pressures, is ~4× greater for waves traveling in the same direction as the applied uniaxial stress than for waves traveling perpendicular to the applied uniaxial stress, and ~2× greater for SH waves 41 , 42 .
Even though SH waves are much less sensitive to differential stress than P-waves in this regard, these amounts are still dramatically higher than expected anisotropy in the wave speeds themselves, which would be only a few percent for differential stress of a few MPa 41 , 42 . To determine our preferred mechanism, presented first, we used the results from Yamamura et al. 29 , and laboratory experiments as guidance. In the Yamamura study, Δv v −1 over one tidal cycle was only weakly sensitive to the strain rate, and there was no unique correspondence between velocity and strain, suggesting that Δv v −1 exhibits creep during compression 29 . We knew from laboratory experiments on rocks that creep rate is highly sensitive to confining pressure and differential stress 43 . Rocks at any given depth in the Earth have the same overburden with no azimuthal dependence, so overburden could not produce an anisotropic result. Spatial variations in pore pressure at the same depth could result in different effective confining pressures, which would affect creep rates, but also would not produce an anisotropic result unless poroelastic properties were highly anisotropic. The data from Yamamura et al. 29 were measured at a single azimuth and the orientation of S Hmax was unknown to us, so no azimuthal dependence could be discerned. However, we considered the effect of the differential ambient stress field, which is three orders of magnitude stronger than the tidal stress (~10 MPa versus ~4 kPa). For internal contacts in the orientation of S Hmax , the differential stress always promotes strengthening or closure and results in faster creep rates for closing contacts during compressive tidal stress 43 . For internal contacts perpendicular to S Hmax (i.e., in the orientation of S Hmin , the minimum horizontal compressive stress), the differential stress always opposes closure and results in slower creep rates. 
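The sign argument above can be illustrated numerically. In this sketch the ambient stress magnitudes are illustrative assumptions; only their ~MPa scale, against the ~kPa tidal scale, matters:

```python
import numpy as np

# Illustrative (assumed) magnitudes: ambient horizontal stresses differing
# by ~10 MPa, and a ~4 kPa tidal perturbation, as discussed in the text.
s_hmax = 30e6      # assumed maximum horizontal compressive stress, Pa
s_hmin = 20e6      # assumed minimum horizontal compressive stress, Pa
s_mean = 0.5 * (s_hmax + s_hmin)
tidal = 4e3        # tidal stress amplitude, Pa

# Normal stress on a vertical contact whose opening direction is at angle
# theta from S_Hmax: sigma_n = s_hmax*cos^2(theta) + s_hmin*sin^2(theta).
theta = np.radians([0.0, 90.0])        # S_Hmax- and S_Hmin-oriented contacts
sigma_n = s_hmax * np.cos(theta)**2 + s_hmin * np.sin(theta)**2

# Deviatoric (relative to the mean) normal stress: positive promotes
# closure, negative opposes it.  Adding or subtracting the tidal stress
# never flips the sign, because it is ~3 orders of magnitude smaller.
deviatoric = sigma_n - s_mean
for d in deviatoric:
    assert abs(d) > tidal   # tides modulate, but never reverse, the bias
print(deviatoric / 1e6)     # [+5, -5] MPa for the assumed values
```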
These contacts experience negative (contact weakening) differential stress even when tidal stress is compressive, because the magnitude of the ambient stress field is three orders of magnitude greater. Therefore, the creep rate during tidal compression should always be faster for contacts in the orientation of S Hmax than for contacts in the orientation of S Hmin . Since in Yamamura et al. 29 , Δv v −1 was determined primarily by the creep rate, we should expect to observe the highest Δv v −1 values in the orientation of S Hmax . The Yamamura et al. 29 experiment was at very low confining stress in soft rock (P-wave velocity ~2 km s −1 ), saturated, and with 40% porosity. The rocks in our study were likely less porous, at least 2–3 times stiffer, and were likely to be saturated 52 . Both the confining pressure and the differential stress were several orders of magnitude higher. However, creep behavior is ubiquitous in laboratory experiments on rocks over a wide variety of types and conditions 43 . A second possible mechanism is that ambient tectonic stress and tidal stress are combined in a nonlinear manner. In the laboratory experiment described previously, an increasing uniaxial stress induced elastic anisotropy in dry granite without any confining pressure 42 . These results were for a quasi-static system and did not consider behavior during loading and unloading separately 42 . Our experiment consisted of constant confining and differential stresses under likely wet conditions, combined with a near-isotropic cyclical (tidal) stress. A simple application of the laboratory results would be that the NS decreases with increasing magnitude of the uniaxial stress in any specific orientation, which would produce the opposite result to the one we obtained. However, one should not assume quasi-static conditions in our study. An experiment more applicable to our conditions would provide a better comparison.
A third possible mechanism for the observed azimuthal dependence of fractional velocity change in the study areas is that there is a preferred orientation for cracks, internal contacts, or anisotropic minerals that is not related to the present-day stress field 53 . Since we were not measuring anisotropy in linear elasticity (Δv v −1 ), but instead anisotropy in nonlinear elasticity (Δv Δε −1 ), it is not clear that the same behaviors observed in linear elasticity, such as shear wave splitting or azimuthal anisotropy in surface waves, apply to nonlinear elasticity. If a material were merely softer in one orientation than another, its modulus would not necessarily be more or less sensitive to strain. A fourth possible mechanism is that apparent azimuthal variations in the measured NS (Δv Δε −1 ) were the result of biases in the ambient noise field 54 . An intrinsic assumption when using cross-correlations to recover EGFs was that ambient noise was an equipartitioned, diffuse wavefield 55 . Any non-equipartitioned, non-diffuse properties of the ambient wavefield would introduce biases that could give incorrect velocity measurements. Though there were undoubtedly non-equipartitioned, non-diffuse aspects to the ambient wavefield, even after pre-processing, we do not think they were sufficient to invalidate our results. The data from the two study sites were not recorded at the same time, but they would have similar ocean-based sources and seasonal variations 55 , and yet they provided different, and apparently correct, estimates of S Hmax . Also, as we were measuring velocity changes owing to solid-earth tides, rather than absolute velocities, consistent or slowly-changing biases would not affect our results. The difference in strain between maximum extension and maximum compression for solid-earth tides was of the order of 5 × 10 −8 according to model calculations made with the software package SPOTL 46 .
As we observed fractional changes in velocity of ~0.04% associated with these differences in tidal strain, the observed NS was of the order of 10 4 . Our results are similar in magnitude to those found by Takano et al. 56 near a volcano in Japan using EGF frequencies of 1–2 Hz, and an order of magnitude higher than those found by Hillers et al. 57 in California using frequencies of 2–8 Hz. It is difficult to compare these results to ours beyond the raw numbers because the apertures of the arrays in the previous studies were both under 1 km, whereas ours were >100 km. Owing to the larger apertures and lower frequencies in our study, we are likely probing deeper into the subsurface and are certainly averaging over a larger area. The previous studies measured nonlinearity, but did not report azimuthal differences, nor the relationship between nonlinearity and stress-induced anisotropy. In addition to the softening and stiffening of internal contacts, there may be poroelastic effects owing to interactions between solid-earth tides and pore pressure. In saturated conditions, pore pressure increases during applied compression and decreases during applied extension, so pore pressure has the opposite effect on the effective confining stress to that of the applied tidal stress. If all station pairs in an array experience the same poroelastic conditions, the effect of stress-induced anisotropy is preserved, though the curves shown in Figs. 4 and 5 would shift upward (positive). In some cases, we might even observe positive values 57 . If different station pairs experience different poroelastic conditions, such as different coupling between pore pressure and effective stress, then the effect of stress-induced anisotropy may not be as evident. In Oklahoma, we were able to obtain good estimates for S Hmax despite possible contributions from heterogeneous poroelastic conditions 58 that were apparent when using fewer station pairs.
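The order-of-magnitude estimate of NS above follows directly from the two measured quantities; a brief check:

```python
# Order-of-magnitude check of the nonlinear susceptibility (NS): the
# fractional velocity change divided by the peak-to-peak tidal strain.
dv_over_v = 0.04 / 100.0   # ~0.04% fractional velocity change
d_strain = 5e-8            # peak-to-peak tidal strain (from SPOTL modeling)

ns = dv_over_v / d_strain
print(ns)                  # ~8000, i.e. of the order 1e4
```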
This may be why using only northern stations in Oklahoma did not produce a sinusoidal pattern with azimuth, and why the sinusoidal fit was generally worse when using fewer station pairs in Figs. 4 and 5 . Measuring and modeling principal stress orientations in the Earth’s crust is challenging, and it is important to match the length scale and depth to the desired application. Since stress heterogeneity likely exists at all scales 10 , 20 , 27 , it is advantageous to measure S Hmax at the same length scales and depths as the application. EGFs can be calculated using varying interstation distances to provide S Hmax estimates at different horizontal length scales. Estimating S Hmax at specific depths is more challenging, but possible when using relative depths inferred through frequency content and coda time offset in the EGFs 38 , 45 . Perhaps most importantly, measuring the orientation of S Hmax this way is not limited to locations with earthquakes or boreholes, and provides data-driven constraints for regional estimates. Additional possibilities include calculating the time evolution of NS, which could reveal temporal changes in the relative amplitude and orientation of S Hmax . Temporal changes may occur in the immediate vicinity of a recent earthquake, on a plate boundary experiencing stress/strain loading, or in a reservoir being depleted. Stress and deformation patterns in tectonic plates are generated by global and local scale mantle convection 59 , 60 , gravitational potential 61 , and plate boundary tractions 16 , 62 . The relative contribution of these mechanisms is unknown. Ultimately, we do not know to what extent continental scale stress models represent the actual stress field in regions with few or no measurements to constrain these estimates. Consequently, we cannot yet model or characterize the mechanisms behind these unknown heterogeneities.
This method makes possible dense and uniform observations of the orientation of S Hmax across continental regions, which will improve stress models and thereby our understanding of the underlying geodynamical processes. One possible application would be to apply this method at a continental scale by using many overlapping subarrays within Earthscope’s US Array, similar to what we have done in New Mexico. Another option would be to design a denser, local array over a particular area of interest, such as a geothermal field. In this study, we calculated EGFs as a function of tidal strain and azimuth in north-central Oklahoma and north-central New Mexico to constrain NS and estimate the orientation of S Hmax . Our results in both study areas show that seismic velocities were, on average, faster when tidal strains were in compression relative to when they were in extension. We observed stress-induced anisotropy in nonlinear anelastic behavior, which is aligned with S Hmax and provides a technique to estimate the orientation of S Hmax without focal mechanism inversions or borehole measurements. Large-scale application of this method may resolve additional tensor properties of the nonlinear behavior, reveal how S Hmax varies with horizontal length scale and depth, and show how S Hmax evolves temporally in areas such as fluid reservoirs and active fault zones. Methods and data Data We used publicly available seismic data from the two study areas, north-central Oklahoma and north-central New Mexico. For Oklahoma, we obtained waveform data recorded by the Nanometrics Research Array (NX) from the Incorporated Research Institutions for Seismology Data Management Center 63 . The NX array consists of 30 broadband, three-component instruments that recorded 100 samples per second for ~3 years between mid-2013 and mid-2016 (red circles in Fig. 2 ).
For New Mexico, we obtained waveform data from nine stations in Earthscope’s Transportable Array (TA) also from the Incorporated Research Institutions for Seismology Data Management Center 64 . This subarray consists of nine broadband, three-component instruments that recorded 40 samples per second for ~2 years between mid-2008 and mid-2010 (blue circles in Fig. 2 ). We used only the vertical component for all seismic data. Signal processing Signal processing steps were performed using the Python package ObsPy 65 . We organized the data into day-long segments and deconvolved the instrument response. When calculating EGFs from continuous broadband seismic data it was important to remove transient signals like earthquakes 66 . We removed earthquake signals from the data using earthquakes identified in the U.S. Geological Survey Comprehensive Catalog. We multiplied windows within the waveforms by zero, tapering to 1 at the beginning and end, following three sets of earthquake criteria: (1) earthquakes with a minimum magnitude of 3.5 and a maximum distance of 30 km from the array, between surface wave velocities of 2 and 5 km s −1 , (2) earthquakes with a minimum magnitude 5 and a maximum distance of 2000 km from the array, between surface wave velocities of 2 and 7 km s −1 , and (3) earthquakes with a minimum magnitude of 6 at any distance, between surface wave velocities of 2 and 8 km s −1 . These windows were chosen because they bracket the expected start and end of the wave train from the corresponding earthquakes. There may be smaller earthquakes present on the recordings, but their shorter durations and weaker amplitudes are less of a concern for interfering with our measurements. This resulted in the zeroing of 7.9% of waveforms for Oklahoma and 4.3% of waveforms for New Mexico. The disparity exists because there were more local earthquakes in Oklahoma than in New Mexico during the study intervals. 
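The earthquake-windowing step above can be sketched with numpy. The event distance, sample rate, and taper length in this sketch are illustrative assumptions:

```python
import numpy as np

def zero_window(data, fs, t_start, t_end, taper_s=10.0):
    """Multiply data by zero between t_start and t_end (seconds), with
    cosine tapers ramping back up to 1 on either side, as in the
    earthquake-removal step described in the text (a minimal sketch)."""
    n = len(data)
    mask = np.ones(n)
    i0 = max(int(t_start * fs), 0)
    i1 = min(int(t_end * fs), n)
    nt = int(taper_s * fs)
    ramp = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, nt)))  # 1 -> 0
    mask[i0:i1] = 0.0
    left, right = min(nt, i0), min(nt, n - i1)
    if left:
        mask[i0 - left:i0] = ramp[nt - left:]       # taper down into window
    if right:
        mask[i1:i1 + right] = ramp[::-1][:right]    # taper back up to 1
    return data * mask

# Window bracketing an event's wave train, using the paper's criterion of
# surface-wave velocities between 2 and 5 km/s for an event 300 km away.
fs, dist_km = 100.0, 300.0
trace = np.random.randn(int(600 * fs))
cleaned = zero_window(trace, fs, dist_km / 5.0, dist_km / 2.0)  # 60 s to 150 s
assert np.all(cleaned[int(60 * fs):int(150 * fs)] == 0.0)
```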
Additionally, we clipped all signals greater than three times the RMS of each day-long segment to remove non-earthquake signals observed as emergent or impulsive noise. Tidal strain In order to determine time windows that were in periods of high (extensional) and low (compressional) strain, we determined the volumetric tidal strain using the software package SPOTL 46 . We used the volumetric strain component because we expect the nonlinear behavior to be localized on pre-existing internal contacts, which may have any orientation. We divided time into two groups according to tidal strain magnitude, the top 25% and the bottom 25%, where top refers to maximum extension and the bottom refers to maximum compression. For stress modeling in the Earth, a common convention is that compressive stress has a positive sign, because absolute stress is nearly always compressive. Except when referring to S Hmax (maximum horizontal compressive stress), we use the opposite convention so that positive axial stress results in extension if the elastic modulus is also positive. In this manuscript, we frequently use the terminology compressive and extensional to avoid any misunderstandings. Empirical Green’s functions We cut and merged the preprocessed waveforms for all stations in order to separate the data into two groups corresponding to time periods when tidal strains were in compression and time periods when tidal strains were in extension. We discarded any waveform segments shorter than 30 min duration because very short segments do not contain enough data to be useful. We calculated an EGF for each selected station pair, for both compressional and extensional groups using a phase cross-correlation method 67 in which we pre-whitened the spectrum before applying a phase cross-correlation. 
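The strain-based grouping described above can be sketched as follows, with a synthetic strain series standing in for the SPOTL output:

```python
import numpy as np

# Sketch of the strain grouping: given a modeled volumetric tidal strain
# time series (synthetic here; the paper uses SPOTL), keep the top 25%
# (most extensional) and bottom 25% (most compressional) of samples.
t = np.arange(0.0, 30 * 86400.0, 600.0)              # 30 days, 10-min steps
strain = 2.5e-8 * np.sin(2 * np.pi * t / 44714.0)    # ~M2 tidal period in s

lo, hi = np.percentile(strain, [25.0, 75.0])
compression = strain <= lo      # boolean masks selecting time windows
extension = strain >= hi        # extension is positive in this convention

# Each group covers roughly a quarter of the record.
assert 0.2 < compression.mean() < 0.3
assert 0.2 < extension.mean() < 0.3
```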
There were ~780 individual segments for each station pair during the recording period for both study areas, though the actual number for each pair varied based on data availability, data quality, and other factors. In particular, we discarded any segments that produced a velocity measurement that exceeded four times the standard deviation from the mean of all the segments. For the Oklahoma stations, we empirically determined that a station separation distance between 30 and 60 km produced the best EGFs and selected station pairs accordingly. This involved applying a bandpass filter with corners of 0.1 and 1 Hz, plotting the EGFs, and visually inspecting them. In our visual inspection, we looked for a well-defined, dispersive Rayleigh wave with decaying amplitude in the coda, and when plotting the EGFs by interstation distance, there was a clear and consistent move-out of the waveforms. For the New Mexico stations, we used all pairs because there were fewer stations. Next, we describe the EGF stacking procedure (Fig. 3 ). We selected 14-day windows, and for each station pair, we selected all EGFs whose segment start time fell within ±7 days of the center of the window. Each EGF was scaled by the square root of the duration of the underlying time series before stacking so that each EGF contributed to the stack according to the amount of data it contained. We calculated the Pearson correlation coefficient 68 , which varies between −1 (perfectly anti-correlated) and 1 (perfectly correlated), of each EGF with the stack. Any EGF yielding a Pearson correlation coefficient <0.5 was discarded and then we produced a new stack with the remaining EGFs. We re-evaluated the discarded EGFs and any that had a Pearson correlation coefficient >0.5 using the updated stack was reinserted to create a new stack. The process was repeated until there were no discarded EGFs with a Pearson correlation coefficient greater than 0.5 with the stack. 
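The iterative selection stack described above can be sketched in a few lines of numpy; the synthetic traces and the iteration cap are our own illustrative choices:

```python
import numpy as np

def iterative_stack(egfs, min_corr=0.5, max_iter=20):
    """Stack EGFs, iteratively discarding and re-admitting traces based on
    their Pearson correlation with the current stack, as described above.
    `egfs` has shape (n_traces, n_samples), with each trace assumed to be
    pre-scaled by the square root of its underlying data duration."""
    keep = np.ones(len(egfs), dtype=bool)
    for _ in range(max_iter):
        stack = egfs[keep].mean(axis=0)
        # Re-evaluate every trace (kept or discarded) against the stack.
        corr = np.array([np.corrcoef(e, stack)[0, 1] for e in egfs])
        new_keep = corr >= min_corr
        if np.array_equal(new_keep, keep):
            break                      # membership unchanged: converged
        keep = new_keep
    return egfs[keep].mean(axis=0), keep

# Synthetic demonstration: 10 coherent traces plus 2 noise-only outliers.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 500))
traces = np.array([signal + 0.3 * rng.standard_normal(500) for _ in range(10)]
                  + [rng.standard_normal(500) for _ in range(2)])
stack, kept = iterative_stack(traces)
assert kept[:10].all() and not kept[10:].any()   # outliers discarded
```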
Subsequent 14-day windows were calculated with a 7-day overlap. This stacking procedure was intended to include as many observations as possible while discarding outliers. The process resulted in stacked EGFs for each station pair that represented 14-day windows with 7-day overlaps for each of the two strain bins described above (Fig. 3 ). We estimated the measurement uncertainty using the standard deviation of the 14-day windows. We summed the positive and negative lagging parts of the EGF and selected the coda part as shown in Fig. 3 to avoid direct wave arrivals. We determined the average phase difference and velocity change (Δv v −1 ) between the two EGFs (compression and extension) in a 30 s coda window for waves between 4 and 5 s period following the steps outlined in the wavelet method of Mao et al. 69 . This method uses a continuous wavelet transform to convert time series data (the coda) into the frequency domain. To perform the continuous wavelet transforms, we used the Python package PyWavelets 70 . Unlike a Fourier transform, a wavelet transform is localized in both time and frequency because the wavelet has both a finite time-width and frequency bandwidth. We selected periods of 4–5 s because Rayleigh waves in this period range are sensitive to structures in the top few km of the Earth. The coda also contained scattered body waves consisting of P and vertically polarized S waves ( V sv ). Once the coda waveforms were represented in the frequency domain, we could measure small phase shifts as a function of frequency and time offset in the coda. We selected a coda window of 30 s because that is the period of time when we expect scattered wave arrivals based on past experience 38 . We used a Morlet wavelet 71 with ω 0 = 0.25 Hz that corresponded to the periods we analyzed and allowed us to recover the known phase shifts in simple synthetic examples. 
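A synthetic validation of this kind can be sketched as follows. For simplicity, this sketch recovers the imposed velocity change with a stretching grid search rather than the wavelet phase method of Mao et al. used in the paper, and all waveform parameters are illustrative:

```python
import numpy as np

# Impose a known apparent velocity change on a synthetic coda waveform by
# stretching it in time, then recover it by grid search.
fs = 20.0
t = np.arange(0.0, 30.0, 1.0 / fs)                       # 30 s coda window
rng = np.random.default_rng(1)
coda = np.sum([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
               for f in (0.2, 0.22, 0.25)], axis=0)      # 4-5 s periods

# A slower medium stretches arrivals: u'(t) = u(t*(1 + dv/v)) to first order.
dv_true = -4e-4
stretched = np.interp(t * (1.0 + dv_true), t, coda)

# Grid search: find the candidate stretch that best matches the perturbed coda.
candidates = np.linspace(-1e-3, 1e-3, 2001)
cc = [np.corrcoef(np.interp(t * (1.0 + dv), t, coda), stretched)[0, 1]
      for dv in candidates]
dv_est = candidates[int(np.argmax(cc))]
print(dv_est)      # recovers the imposed -4e-4 to within the grid spacing
```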
In these synthetic examples, we took a real EGF coda waveform and manually introduced phase shifts of various magnitudes and signs by stretching or compressing the waveform in time. We then verified that we could recover the apparent velocity change associated with the imposed phase shift. This method can measure phase shifts, and the associated velocity changes, as a function of frequency and coda offset time. In general, the earlier part of the coda contains more scattered surface waves, whereas the later part contains more body waves, with the transition time governed by the scattering properties of the subsurface 45 . Scattered Rayleigh waves are sensitive to the upper 2–3 km for periods of 4–5 s. If the measurement window contained body waves (P and V sv ), then the waves would be sensitive over a greater depth range, depending on the velocity of the scattered waves and the scattering properties of the subsurface. Because we were only interested in average velocity changes for this analysis, we calculated the average phase shift, and associated velocity change, over the entire coda window and period range. Measurements resolved as a function of frequency and coda offset time likely contain depth information 38 , 45 , 69 but were beyond the scope of this study. Along with measuring phase shifts, we also calculated the coherence between EGFs and discarded any cases where the average coherence fell below 0.95. In our convention, a negative Δv v −1 value means that, according to the EGF coda on the vertical channel, the Earth was slower during extension than during compression. Azimuthal measurements We grouped and stacked the station pairs by azimuth so that we could examine any orientation dependence in the results. 
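The synthetic verification above (impose a stretch, recover the velocity change) can be reproduced with a simple grid-search stretching sketch; this is our illustration, not the authors' code, and the name stretching_dvv is an assumption.

```python
import numpy as np

def stretching_dvv(ref, cur, t, trial_dvv):
    """Grid-search stretching: find the dv/v that maximizes the correlation
    between the current coda and a time-stretched reference coda.
    An apparent velocity change dv/v = e maps ref(t) -> ref(t * (1 + e))."""
    best_e, best_cc = 0.0, -np.inf
    for e in trial_dvv:
        candidate = np.interp(t * (1.0 + e), t, ref)
        cc = np.corrcoef(cur, candidate)[0, 1]
        if cc > best_cc:
            best_e, best_cc = e, cc
    return best_e, best_cc
```

Running this on a waveform with a known imposed stretch recovers the imposed dv/v to within the grid spacing, which is the consistency check described in the text.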
The azimuth for each pair was determined from the more western station to the more eastern station so that values are always between 0 and 180 degrees, with 0 and 180 indicating south–north and 90 indicating west–east. For Oklahoma, we considered nine azimuths in 20-degree steps. For each azimuthal interval, we averaged the Δv v −1 values for all pairs whose azimuth was within ±20 degrees, with wrapping. For example, at an azimuth of 0 degrees, we averaged paths between 0 and 20 degrees plus those between 160 and 180 degrees (equivalent to between −20 and 0 degrees). For New Mexico, we considered all station pairs individually because there were not enough pairs to average in azimuthal bins. In addition to calculating the average Δv v −1 at different azimuths, we fit a sine function, periodic on 2θ, to the results. To obtain the predicted values for S Hmax reported in Figs. 4a–f and 5a–c , we generated 1000 realizations in which we drew a fractional velocity change for each station pair (New Mexico) or azimuthal bin (Oklahoma) from a normal distribution using the mean and standard deviation indicated by the red bars, and then solved for the best-fitting sine function. We report the mean and standard deviation of the 1000 realizations in Figs. 4a–f and 5a–c . Generating more realizations does not change the result. We also applied an F-test to evaluate whether a sine function is a statistically better model than a uniform function for describing the azimuthal observations. The sine function has three parameters (amplitude, phase, mean), whereas the uniform function has only one parameter (mean). After applying the appropriate penalty for having more model parameters, a low p value indicates that the sine function is the better model. The p value of the F-test is annotated in Figs. 4a–f and 5a–c . 
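The sinusoidal fit periodic on 2θ, and the 1000-realization resampling, can be sketched as below. The names fit_sine_2theta and bootstrap_az_max are ours; the F-test step is omitted, and the simple averaging of the recovered maximum azimuth ignores the 0/180-degree wrap, which is fine away from the wrap point.

```python
import numpy as np

def fit_sine_2theta(az_deg, dvv):
    """Least-squares fit of dvv(theta) = m + A*sin(2*theta + phi), i.e. a
    sinusoid periodic on 2*theta, via the linear form a*sin + b*cos + m."""
    th = np.radians(az_deg)
    G = np.column_stack([np.sin(2 * th), np.cos(2 * th), np.ones_like(th)])
    (a, b, m), *_ = np.linalg.lstsq(G, dvv, rcond=None)
    amp = np.hypot(a, b)
    phi = np.arctan2(b, a)
    # azimuth of the sinusoid's maximum, mapped to [0, 180)
    az_max = np.degrees((np.pi / 2 - phi) / 2.0) % 180.0
    return m, amp, az_max

def bootstrap_az_max(az_deg, mean_dvv, std_dvv, n=1000, seed=0):
    """Draw each bin's dv/v from N(mean, std) and refit, mirroring the
    1000-realization procedure described in the text."""
    rng = np.random.RandomState(seed)
    draws = [fit_sine_2theta(az_deg, rng.normal(mean_dvv, std_dvv))[2]
             for _ in range(n)]
    return float(np.mean(draws)), float(np.std(draws))
```

The mean and standard deviation of the recovered maximum azimuth over the realizations correspond to the predicted S Hmax values and uncertainties reported in the figures.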
Only the results for the eastern stations in New Mexico indicate that a sine function is not a statistically significantly better model than a uniform function. Uncertainties in our velocity measurements were difficult to estimate precisely. When calculating EGFs, there is an intrinsic assumption that noise sources are equipartitioned, white, azimuthally uniform, and stationary. This is never true on Earth, but we took steps to reduce the influence of recordings that violate these assumptions. We assumed that EGF coda velocities in our study area vary by no more than the amounts observed in other studies, <±1% 39 , 56 , 57 . Because we never directly compared data recorded more than 14 days apart, seasonal variations (in spectral content, azimuthal distribution, and the relative amounts of coherent and incoherent noise) were not expected to be important. By discarding EGFs that were not well correlated and coda that was not highly coherent, we avoided time segments that were badly contaminated with non-equipartitioned waveforms. In both study areas, we stacked over two years of data. The resulting uncertainty, based on the variance of the accepted measurements, suggested that uncertainties are no more than one-tenth (Oklahoma) and one-third (New Mexico) of the magnitude of the measurements for the full dataset. Uncertainties for individual stations (New Mexico) and azimuthal bins (Oklahoma) are indicated in Figs. 4 and 5 . Average uncertainties on the full dataset for the two study areas also reflect the fact that the Δv v −1 magnitudes differ at different azimuths; these values therefore overestimate the actual measurement error in this regard. Data availability All data used in this study are available through the Incorporated Research Institutions for Seismology (IRIS) Data Management Center ( ). Information regarding the Nanometrics Research Array can be found here (ds.iris.edu/mda/NX/) and EarthScope's Transportable Array here (ds.iris.edu/mda/TA/). 
Time series data can be requested through several tools provided here (ds.iris.edu/ds/nodes/dmc/tools/#data_types = time series). Code availability This study was performed using the Python packages ObsPy 65 and PyWavelets 70 and uses workflows provided in Ventosa et al. 67 and Mao et al. 69 .
Scientists at Los Alamos National Laboratory have developed a method to determine the orientation of mechanical stress in the earth's crust without relying on data from earthquakes or drilling. This method is less expensive than current approaches, could have broad applicability in geophysics, and could provide insight into continental regions lacking historical geologic information. "We utilized the nonlinear elastic behavior in rocks and applied a new technique to monitor the orientation of the maximum horizontal compressive stress in rocks in parts of Oklahoma and New Mexico," said Andrew Delorey of Los Alamos. "The orientation of that maximum horizontal compressive stress reveals which fractures in the rock will be active." North-central Oklahoma was selected because induced seismic activity has been ongoing in the region after decades of wastewater injection from oil and gas operations. That seismic activity occurs on faults optimally oriented in the regional stress field. North-central New Mexico was selected to compare the results to a geologic setting straddling a continental rift separating the Colorado Plateau from a stable section of the earth's crust. The scientists determined that the earth exhibits stress-induced anisotropy of nonlinear susceptibility that is aligned with the maximum horizontal compressive stress in these two different geologic settings. Rocks become stiffer when compressed and softer when extended, but this effect isn't instantaneous. The rate is faster in the orientation where the ambient stress field is most compressive. By measuring this rate in different orientations, scientists can determine the orientation in which the ambient stress is most compressive. The geophysical stress orientation, or the direction of maximum horizontal compressive stress, is usually determined by drilling narrow, deep boreholes. However, borehole drilling is expensive and only provides a single data point. 
Additionally, for vast regions the geophysical data simply haven't been collected because doing so is too expensive. This method provides an alternative. The approach could be essential for the oil and gas industry in avoiding hazards and optimizing production. In the case of hydraulic fracturing, the fractures will open in the direction of the minimum horizontal compressive stress, which scientists can now determine before any drilling. "Estimation of the orientation of stress in the Earth's crust without earthquake or borehole data," by Andrew A. Delorey, Christopher W. Johnson, and Paul A. Johnson, and Götz Bokelmann of the University of Vienna, was published in September in Nature's Communications Earth & Environment journal.
10.1038/s43247-021-00244-1
Physics
How can we stop the spread of false rumors about COVID-19? Better math
Jessica T. Davis et al. Phase transitions in information spreading on structured populations, Nature Physics (2020). DOI: 10.1038/s41567-020-0810-3 Journal information: Nature Physics
http://dx.doi.org/10.1038/s41567-020-0810-3
https://phys.org/news/2020-03-false-rumorsabout-covid-math.html
Abstract Mathematical models of social contagion that incorporate networks of human interactions have become increasingly popular; however, very few approaches have tackled the challenges of including complex and realistic properties of socio-technical systems. Here, we define a framework to characterize the dynamics of the Maki–Thompson rumour spreading model in structured populations, and analytically find a previously uncharacterized dynamical phase transition that separates the local and global contagion regimes. We validate our threshold prediction through extensive Monte Carlo simulations. Furthermore, we apply this framework in two real-world systems, the European commuting and transportation network and the Digital Bibliography and Library Project collaboration network. Our findings highlight the importance of the underlying population structure in understanding social contagion phenomena and have the potential to define new intervention strategies aimed at hindering or facilitating the diffusion of information in socio-technical systems. Main The mathematical modelling of contagion processes is crucial in gaining insight into a broad range of phenomena from the spreading of infectious diseases to social collective behaviour. While this avenue of research has a long tradition both in the biological and social sciences, in recent years there have been considerable advancements triggered by increasing computational power and data availability characterizing socio-technical systems. These advances are particularly evident in the area of infectious disease forecasting where current models now incorporate realistic mobility and interaction data of human populations 1 , 2 , 3 , 4 , 5 , 6 . 
Analogously, social contagion phenomena that were initially modelled using the same mathematical framework as epidemics 7 , 8 , 9 , 10 are now described by complex contagion models 11 , 12 , 13 aimed at specifically characterizing processes such as the establishment of shared social norms and beliefs 14 , 15 , 16 , the diffusion of knowledge and information 17 , 18 , and the emergence of political consensus 19 . These models consider complex factors such as reinforcement and threshold mechanisms 20 , 21 , 22 , 23 and the loss of interest mediated by social interactions 24 , 25 . Furthermore, many of these theoretical approaches have put networks at the centre of our understanding of social contagion phenomena and the information spreading process 8 , 12 , 17 , 26 , 27 , 28 , 29 , 30 , 31 , 32 . However, most theoretical and numerical work on the dynamics of social contagion focuses on highly stylized models, trading off the realistic features of human interactions for analytical transparency and computational efficiency. As a result, social contagion models able to integrate the effects of human mobility, community structure and time-varying behavioural patterns are largely unexplored. Here we consider the classic rumour spreading model 24 , 25 to study the effects of structured populations on the global diffusion of a rumour or piece of information. More specifically, we model the spatial structure of realistic populations and the behaviour of individuals in virtual social networks through a reaction–diffusion model in a metapopulation network and an activity-driven model with communities, respectively. We first identify analytically the necessary conditions for the social contagion to spread to a macroscopic fraction of the population. 
This analysis shows that although the rumour model lacks any critical threshold, the population structure introduces a dynamical phase transition (global invasion threshold 33 ) that is a function of the interactions between subpopulations. We validate the analytical results with large-scale numerical simulations on synthetic networks with different topological structures. Additionally, we recover the global threshold of the contagion process in data-driven models of the European transportation network and the Digital Bibliography and Library Project (DBLP) collaboration network. Understanding how the social structure in both the physical and virtual worlds affects the emergence of contagion phenomena has the potential to indicate novel ways to utilize the network connectivity to develop efficient network-based interventions. The framework developed here opens a path to study the effects of communities and spatial structures in other complex contagion processes that can incorporate agent memory 34 and social reinforcement 22 , or introduce other heterogeneous features such as age-dependent contact patterns and socio-economic conditions 35 . Model definition Here we use a variant of the original rumour model 24 , known as the Maki–Thompson model 25 , to describe the spread of information through a population based on interactions between agents 36 . Similar to epidemic models, individuals can be classified into three compartments: ignorants, those who do not know the rumour; spreaders, those who know and are actively sharing the rumour; and stiflers, those who know the rumour but are no longer spreading it. The contagion process evolves through interactions between individuals in a population. If a spreader contacts an ignorant individual, with a probability λ , the ignorant will transition into a spreader. However, when a spreader contacts either a stifler or another spreader, with a probability α , the spreader will transition into a stifler. 
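These interaction rules can be sketched with a minimal Monte Carlo simulation of one well-mixed population. The naming and the update scheme (one random contact per spreader per step) are our own illustrative choices, not the authors' code.

```python
import numpy as np

def maki_thompson(N, lam, alpha, seed=0):
    """Stochastic Maki-Thompson rumour model in one well-mixed population.

    States: 0 = ignorant, 1 = spreader, 2 = stifler.
    Returns the final fraction of stiflers, i.e. everyone who heard the
    rumour, once no spreaders remain."""
    rng = np.random.RandomState(seed)
    state = np.zeros(N, dtype=int)
    state[0] = 1                               # a single initial spreader
    while True:
        s_idx = np.flatnonzero(state == 1)
        if s_idx.size == 0:
            break
        for i in s_idx:
            j = rng.randint(N - 1)
            j = j if j < i else j + 1          # random contact, j != i
            if state[j] == 0:
                if rng.rand() < lam:           # ignorant -> spreader
                    state[j] = 1
            else:
                if rng.rand() < alpha:         # spreader -> stifler
                    state[i] = 2
        # loop continues until no spreaders remain
    return float(np.mean(state == 2))
```

Consistent with the absence of a rumour threshold in homogeneous mixing, such a run with any λ > 0 typically ends with a macroscopic fraction of the population having heard the rumour.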
The stifling mechanism describes an individual’s tendency to become uninterested in the rumour once the perceived novelty of the information is lost. In homogeneously mixed populations, this feature does not allow the presence of a rumour threshold 24 , 37 , meaning that for any λ > 0 the rumour will always spread to a macroscopic proportion of the population (see Methods). We investigate the behaviour of this model on two types of structured populations that incorporate the complexity observed in socio-technical systems (Fig. 1 ). Rumour model in spatially structured populations We first consider a population where spatially defined groups of individuals (subpopulations) are coupled together by a mobility rate (Fig. 1a ) 32 , 38 . This structure, also called a metapopulation network, is used to model species persistence in ecosystems 39 , the evolution of populations 40 and the global spreading of infectious diseases 41 . Specifically, we consider a metapopulation network with V subpopulations, each with an average population size of \(\overline{N}\) individuals. Reaction–diffusion processes are used to characterize both the local interaction and global mobility dynamics. Individuals first react within their current subpopulation according to the rumour model dynamics and then diffuse between subpopulations based on a Markovian diffusion process. The probability that an individual will leave their current subpopulation and travel to a specific neighbour is p ∕ k , where p is the mobility parameter and k is the number of neighbouring subpopulations. Rumour model in virtual structured populations In contrast to the reaction–diffusion scheme, rather than moving between communities, individuals may belong to a specific virtual community such as an interest or disciplinary group, online forum, or political affiliation, but interact occasionally with individuals in other virtual communities through collaborations, forum posts or direct messages (Fig. 1b ). 
We model these interaction dynamics using a modular activity-driven network scheme 42 , 43 . In particular, we consider a population with V communities whose sizes ( s ) follow a specific distribution, P ( s ). Every individual is assigned an activity a i that is sampled from a preset distribution F ( a ). The activity of an individual corresponds to the rate with which the individual becomes active during a given time step t (ref. 42 ). Each active individual will form a single connection to another individual in the population creating an instantaneous network. To induce a community structure, an activated individual will choose to form a link to a randomly selected individual outside of their home community with a probability μ , otherwise, an intra-community link will be formed. The parameter μ allows us to tune the interaction between communities. After each single iteration of the rumour model, the network resets and a new instance is generated in the same manner. Fig. 1: Types of structured population considered in the modelling framework. a , A schematic representation of a reaction–diffusion process on a metapopulation network, where individuals homogeneously interact within their current subpopulation, and then diffuse through the network constrained by the global structure. b , A schematic representation of a modular activity-driven network at two points in time, t , where individuals are confined to a single community, but when activated choose to form links to those outside their current community based on a probability of inter-community interaction. Each instantaneous network is generated independently of prior networks. Full size image Invasion threshold in structured populations Although the rumour model in a single homogeneous population does not exhibit a spreading threshold, the presence of a subpopulation structure fundamentally alters the contagion dynamics. 
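The instantaneous-network generation step of the modular activity-driven scheme described above can be sketched as follows; this is an illustration with our own function and variable names, not the authors' code.

```python
import numpy as np

def activity_driven_links(community, activity, mu, rng):
    """One time step of a modular activity-driven network.

    Each node activates with probability given by its activity rate and
    forms a single link: inter-community with probability mu, otherwise
    within its home community. Returns a list of (i, j) links."""
    community = np.asarray(community)
    n = len(community)
    links = []
    active = np.flatnonzero(rng.rand(n) < activity)
    for i in active:
        if rng.rand() < mu:
            # partner drawn from outside i's home community
            candidates = np.flatnonzero(community != community[i])
        else:
            candidates = np.flatnonzero(community == community[i])
            candidates = candidates[candidates != i]
        if candidates.size:
            links.append((i, rng.choice(candidates)))
    return links
```

Each call generates an independent instantaneous network, matching the reset-and-regenerate scheme in the text: the rumour dynamics run for one iteration on the returned links, then a fresh network instance is drawn.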
This can be clearly seen for the rumour model in virtual structured populations by examining two limits of the inter-community interaction term, μ . When μ = 0, individuals will only interact with others in their community. Thus, the rumour will never escape the seed community. However, in the limit where μ = 1, individuals effectively do not belong to any community and will always choose to form an external connection. Therefore, the rumour will certainly reach a macroscopic fraction of the population. The same reasoning can be applied to the limits of the mobility parameter, p , in the case of spatially structured populations. In both modelling frameworks, the population structure induces a transition point separating a dynamical regime where only local spreading is possible from a regime where the rumour spreads globally through the network. To characterize this transition point quantitatively, we use a branching process framework to describe the rumour spreading dynamics across subpopulations 41 , 44 . Let us consider a system that is structured into V subpopulations, each consisting of \(\overline{N}\) individuals, on average, at any given time. Within the homogeneous population structure, we assume that all nodes are statistically equivalent and the connections formed between pairs of nodes are uncorrelated. Let D n be the number of affected subpopulations where the rumour is known by at least one individual at generation n . 
We use a tree-like approximation to write an expression that captures the number of subpopulations that know the rumour at each generation of the spreading process, obtaining: $${D}_{n}={D}_{n-1}\left(1-\frac{\sum _{m=0}^{n-1}{D}_{m}}{V}\right)C\Phi$$ (1) The above equation assumes that every affected subpopulation in the ( n −1)th generation ( D n −1 ) may seed each one of its \((1-{\sum }_{m=0}^{n-1}{D}_{m}/V)C\) unaffected neighbours with a probability Φ , where C indicates the average number of neighbouring subpopulations and \(1-{\sum }_{m=0}^{n-1}{D}_{m}/V\) is the probability that the neighbouring subpopulation is not already aware of the rumour during the ( n −1)th generation. In a structured population model, a rumour epidemic occurs when each affected subpopulation, early in the contagion process, spreads the rumour on average to at least one fully ignorant subpopulation. Using equation ( 1 ), this global contagion condition reads as D n ∕ D n −1 ≥ 1. Given that we are interested in the early time dynamics of the process, we assume that \({\sum }_{m=0}^{n-1}{D}_{m}/V\ll 1\) , defining the global contagion threshold D n ∕ D n −1 ≃ C Φ ≥ 1. This effectively defines the subpopulation reproductive number R * = C Φ , that is, the average number of communities becoming aware of the rumour from a single subpopulation. Analogously to the reproductive number in biological epidemics, in order for information to spread globally, R * must be greater than or equal to one 41 , 44 . The terms C and Φ depend explicitly on the type of structured population model as well as on the contagion process. In the following sections we provide equations for these parameters and the rumour invasion thresholds for both spatially structured and virtual populations. 
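The recursion in equation (1) can be iterated numerically to see the threshold behaviour at R* = CΦ = 1. This is a sketch with our own names, seeded with a single affected subpopulation.

```python
def affected_subpopulations(V, C, Phi, n_gen=30):
    """Iterate eq. (1): D_n = D_{n-1} * (1 - sum(D_m)/V) * C * Phi,
    starting from a single seeded subpopulation (D_0 = 1).

    Returns the per-generation counts and the cumulative number of
    subpopulations that ever learn the rumour."""
    D = [1.0]
    total = 1.0
    for _ in range(n_gen):
        nxt = D[-1] * (1.0 - total / V) * C * Phi
        D.append(nxt)
        total += nxt
    return D, total
```

With R* = CΦ < 1 the cumulative count saturates at a handful of subpopulations (roughly the geometric sum 1/(1 − R*)), while with R* > 1 it grows until the depletion factor (1 − Σ D_m / V) halts the invasion at a macroscopic fraction of the network.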
Spatially structured populations In a homogeneous metapopulation network the number of possible subpopulations that could be seeded by each affected subpopulation is C = 〈 k 〉 − 1, that is, the average number of neighbouring subpopulations minus the one that originally seeded the contagion process. Owing to the lack of a local rumour threshold within a single subpopulation, the probability Φ is simply given by the probability that at least one spreader will decide to leave an affected community and travel to a neighbouring subpopulation where they will start spreading the rumour: $${\Phi} =1-{\left(1-\frac{p}{\langle k\rangle }\right)}^{\beta }$$ (2) where β is the number of spreaders in an affected subpopulation that can travel out of their current subpopulation during the rumour epidemic. This value Φ , is calculated by considering one minus the probability that none of the spreaders will travel to a new community, \({(1-\frac{p}{\langle k\rangle })}^{\beta }\) . Here, \(\beta =\frac{2(1+\lambda /\alpha )\overline{N}}{\alpha }\) , is the product of the total number of individual spreaders generated by the contagion process within a single population and the average amount of time they are actively spreading the rumour (see details in the Methods). Using equation ( 2 ) and considering a small mobility probability, such that p ∕ 〈 k 〉 ≪ 1, we can approximate the probability Φ ≃ β p ∕ 〈 k 〉. 
In this limit we obtain an explicit equation for the rumour invasion threshold as: $${R}^{* }=\frac{\langle k\rangle -1}{\langle k\rangle }2\left(1+\frac{\lambda }{\alpha }\right)\frac{p\overline{N}}{\alpha }\ge 1$$ (3) From equation ( 3 ) it is possible to rewrite the necessary threshold condition to find the critical mobility p c in the system required for a global spreading of the rumour as: $${p}_{{\mathrm{c}}}=\frac{\langle k\rangle }{(\langle k\rangle -1)}\frac{\alpha }{2(1+\frac{\lambda }{\alpha })\overline{N}}$$ (4) Below the critical value, p c , the amount of individual mobility restricts the global propagation of the rumour. In this subcritical regime, spreaders in affected communities are generally unable to travel to a new subpopulation before they transition into stiflers, which consequently causes the rumour to go extinct in the early stages. This critical mobility is a function of both the network structure and the rumour model parameters, λ and α . However, for homogeneous networks with sufficiently large average degrees 〈 k 〉, the effect of the network structure is relatively insignificant. In the Supplementary Information we derive the critical mobility for metapopulation networks with heavy-tailed degree distributions and find that the analytical expression depends not only on the average degree of the network 〈 k 〉, but also on the second moment 〈 k 2 〉 of the degree distribution. Heterogeneous networks are characterized by having degree distributions with high variance (large 〈 k 2 〉), thus considerably affecting the value of the mobility threshold. We also see that p c is linearly dependent on α . When λ is small relative to α , the critical mobility is controlled predominantly by the stifling probability. Recall that the stifling probability characterizes the tendency for an individual to become disinterested in the rumour (that is, transition into a stifler) when interacting with others that know the rumour. 
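Equation (4) is straightforward to evaluate. As an example with parameters of the same order as the simulations reported later (⟨k⟩ = 12, an average subpopulation size of 10³, λ = 0.1, α = 1; the specific combination is our illustrative choice), the critical mobility comes out near 5 × 10⁻⁴.

```python
def critical_mobility(k_avg, lam, alpha, N):
    """Eq. (4): critical mobility probability p_c on a homogeneous
    metapopulation network with average degree k_avg and mean
    subpopulation size N."""
    return (k_avg / (k_avg - 1.0)) * alpha / (2.0 * (1.0 + lam / alpha) * N)

# Example: <k> = 12, N = 1e3, lambda = 0.1, alpha = 1
p_c = critical_mobility(12, 0.1, 1.0, 1000)
```

Note that p_c shrinks as the subpopulation size grows: larger communities generate more spreaders, so less mobility is needed for the rumour to escape.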
It is worth noting that, for the global spread of a rumour in a spatially structured environment, this feature places the emphasis not on how appealing a rumour is, but rather on the rate at which people decide that the rumour is not worth spreading. Virtual structured populations Now let us consider a modular activity-driven network where we assume discrete time, Δ t = 1, a homogeneous activity rate ( a ) for all individuals in the network and a homogeneous distribution of community sizes. In this model, if an individual chooses to form an inter-community link, by construction it can choose any of the other C = V − 1 communities. The probability Φ that at least one spreader from an affected population will choose to connect with another individual outside their current community and successfully transmit the rumour can be written as $${\Phi} =1-{\left(1-\frac{\lambda \mu }{V-1}\right)}^{\beta }$$ (5) In this expression \(\frac{\lambda \mu }{V-1}\) is the probability that an inter-community link successfully transmits the rumour to one of the V − 1 specific subpopulations and β translates to the number of potential chances that a single, affected community has to spread the rumour to another community. As described in the Methods, β is not explicitly dependent on the activity assigned to each individual as long as a > 0 and still remains a function of the total number of spreaders in the community and the average amount of time they were active. Therefore, the same equation used for spatially structured populations, \(\beta =\frac{2(1+\lambda /\alpha )\overline{N}}{\alpha }\) , holds. 
We can thus calculate the rumour invasion threshold by assuming that \(\frac{\lambda \mu }{V-1}\ll 1\) in the limit of large V , obtaining: $${R}^{* }=2(1+\frac{\lambda }{\alpha })\frac{\lambda \overline{N}\mu }{\alpha }\ge 1$$ (6) The rumour invasion threshold in terms of the critical inter-community interaction rate, μ c , reads $${\mu }_{{\mathrm{c}}}=\frac{\alpha }{2\lambda \overline{N}(1+\frac{\lambda }{\alpha })}$$ (7) This equation resembles the mobility threshold of equation ( 4 ) except for the addition of the λ parameter in the denominator, which comes from the node interaction process. In the spatially structured model, the mobility of an individual was the only factor that controlled whether the rumour spread to a new community. However, in the activity-driven model, active individuals do not move to another community, but rather may form a single connection through which the rumour has to be successfully transmitted to another individual in order to start the contagion process. This introduces a linear dependence on α ∕ λ rather than α alone. When the spreading probability λ is high, an ignorant individual is more likely to transition into a spreader during a given interaction. Thus, the rumour spreads more readily and does not require a high amount of inter-community interaction to globally propagate. This result shows the inherent differences between equation ( 4 ) and equation ( 7 ) and brings attention to the importance of the type of structured population used when modelling a socio-technical system. In the Supplementary Information we derive the critical interaction probability for populations with a heterogeneous size distribution. Simulations on synthetic structured populations To validate the analytical findings, we performed an extensive set of stochastic simulations of the rumour model on synthetic, structured populations. 
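Equation (7) can likewise be evaluated directly; the parameter values below are our illustrative choice (α = 1, λ = 0.1, community size 10³), giving μ_c = 1/220 ≈ 4.5 × 10⁻³.

```python
def critical_interaction(lam, alpha, N):
    """Eq. (7): critical inter-community interaction probability mu_c in
    the modular activity-driven model. Note the extra factor of lam in
    the denominator relative to eq. (4): the inter-community link must
    also transmit the rumour, so a more contagious rumour (larger lam)
    needs less inter-community interaction to invade."""
    return alpha / (2.0 * lam * N * (1.0 + lam / alpha))
```
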
In spatially structured populations we examined the linear dependence of the critical mobility on α and for virtual populations we show the inverse dependence on λ . We generated homogeneous metapopulation networks as Erdös–Rényi random graphs with average degrees of 〈 k 〉 = 12 and an average population size of \(\overline{N}=1{0}^{3}\) individuals. To initiate the contagion process, one individual is made aware of the rumour in a single, randomly selected community. The microscopic reaction dynamics are mathematically defined by chain binomial and multinomial processes, which were used to update the stochastic transitions of individuals between compartments (details in the Supplementary Information ). Following the reaction process, individuals diffuse along a specific link to a neighbouring subpopulation with a probability p ∕ k , where k is the degree of the individual’s current community. Our simulation results show the final fraction of affected communities as a function of the mobility probability p for various α parameters (Fig. 2a ), recovering the critical transition that separates the non-spreading and global spreading dynamical regimes. The vertical lines represent the values predicted from equation ( 4 ) and are in good agreement with our numerical findings. Furthermore, we see a clear dependence of the transition point on the stifling rate α . In the Supplementary Information we show similar simulations for metapopulation networks with heavy-tailed degree distributions, p ( k ) ~ k −2.2 . In this scenario, the network structure significantly reduces the threshold since, as mentioned above, it now depends on the second moment of the degree distribution that diverges for heterogeneous networks when V → ∞. Fig. 2: Results from numerical simulations of the rumour spreading process in homogeneous structured populations. 
a , The final fraction of subpopulations where the rumour is known at the end of the rumour epidemic as a function of the mobility probability p for various α values averaged over 4,000 simulations. The networks have V = 10 3 subpopulations with an average of \(\overline{N}=1{0}^{3}\) individuals and an average degree 〈 k 〉 = 12. The vertical lines represent the predicted threshold values from equation ( 4 ) and the value λ = 0.1 was used for the spreading probability. b , The final fraction of communities where the rumour is known as a function of the inter-community interaction probability μ for various λ values averaged over 1,000 simulations. A total population had V = 10 3 communities, each containing 10 3 individuals. The vertical lines represent the predicted values from equation ( 7 ) and a value of α = 1 was used as the stifling probability. The shaded regions represent the 90% reference range and the averages in the supercritical regime were calculated on the simulations where at least 5% of the subpopulations experienced a rumour epidemic. Source data Full size image For the second type of structured population, we generated modular activity-driven networks with V = 10 3 total communities, each with the same number of individuals, \(\overline{N}=1{0}^{3}\) , and every individual assigned the same activity rate, a = 0.1. To start the contagion process, a single individual from a randomly selected community is made aware of the rumour. An instantaneous network is generated by the modular activity-driven network model on which the rumour dynamics unfold for a single iteration (Δ t = 1). After the reaction process, a new network instance is generated and the process repeats until all individuals are either still ignorants or stiflers (more model details are in the Supplementary Information ). In Fig. 
2b we show the final fraction of communities where a rumour epidemic occurred as a function of the inter-community interaction parameter μ , for multiple λ values. The phase diagram supports our theoretical findings, and confirms that for higher values of λ fewer inter-community interactions are required for the rumour to globally propagate. We also model this system using a heterogeneous size distribution and report the results in the Supplementary Information . Data-driven simulations To further support the theoretical results obtained in the previous section, we analyse the rumour model on two real-world networks. Specifically, we simulate a rumour spreading across a metapopulation network modelling the transportation patterns in Europe and across a modular activity-driven network modelling scholarly collaborations from the DBLP collaboration network. The mobility of individuals throughout Europe is modelled by dividing the continent into spatial regions that are coupled together using data about commuting patterns and long-range transportation fluxes such as airline traffic (details in the Supplementary Information ). This realistic, synthetic metapopulation network has been used in simulations of emerging infectious diseases as well as in the analysis and predictions of pandemic events 10 , 45 , 46 . In this framework, the mobility of individuals across subpopulations (analogous to the p parameter in spatially structured populations) is derived from actual transportation data. To study the effects of a reduction in mobility, we rescaled the proportion of individuals that travel at each time step by a factor ω . We show the results of the rescaled mobility on the spatial diffusion of a rumour simulated over the transportation network in Fig. 3a,c . Interestingly, we see a clear transition in the mobility required for the rumour to spread.
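The stochastic scheme used in these metapopulation simulations (chain-binomial reactions inside each subpopulation, followed by diffusion along the links) can be sketched compactly. The sketch below is an illustrative re-implementation, not the paper's code (which is available on request from the authors): the ring-plus-shortcuts network, the small sizes, and the multi-spreader seeding are simplifications chosen for speed and robustness.

```python
import numpy as np

rng = np.random.default_rng(1)

def metapop_network(V, k_avg):
    """Connected random metapopulation network as adjacency lists: a ring
    backbone (a simplification that guarantees connectivity) plus
    Erdős–Rényi shortcuts, giving an average degree near k_avg."""
    adj = [[] for _ in range(V)]
    for u in range(V):
        v = (u + 1) % V
        adj[u].append(v)
        adj[v].append(u)
    p_edge = (k_avg - 2) / (V - 1)
    for u in range(V):
        for v in range(u + 2, V):
            if (u, v) != (0, V - 1) and rng.random() < p_edge:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def invaded_fraction(V=50, N=200, k_avg=8, lam=0.1, alpha=1.0, p=0.05,
                     seeds=5, t_max=5000):
    """Fraction of subpopulations that know the rumour at the end."""
    adj = metapop_network(V, k_avg)
    I = np.full(V, N)            # ignorants per subpopulation
    S = np.zeros(V, dtype=int)   # spreaders
    R = np.zeros(V, dtype=int)   # stiflers
    I[0] -= seeds                # seed a few spreaders (avoids early
    S[0] += seeds                # stochastic extinction in this sketch)
    for _ in range(t_max):
        if S.sum() == 0:
            break
        n = np.maximum(I + S + R, 1)
        # chain-binomial reactions, homogeneous mixing inside each patch
        new_S = rng.binomial(I, 1.0 - (1.0 - lam / n) ** S)
        new_R = rng.binomial(S, 1.0 - (1.0 - alpha / n) ** (S + R))
        I -= new_S
        S += new_S - new_R
        R += new_R
        # diffusion: each individual moves with total probability p,
        # choosing uniformly among the links of its subpopulation
        for X in (I, S, R):
            moved = rng.binomial(X, p)
            X -= moved
            for u in range(V):
                if moved[u] == 0:
                    continue
                dest = rng.multinomial(moved[u],
                                       [1 / len(adj[u])] * len(adj[u]))
                for j, nb in enumerate(adj[u]):
                    X[nb] += dest[j]
    return np.mean((S + R) > 0)

print(invaded_fraction(p=0.0))   # no mobility: only the seeded patch
print(invaded_fraction(p=0.08))  # high mobility: near-global spreading
```

Sweeping p between these extremes reproduces the kind of invasion transition shown in Fig. 2a, with the crossing point shifting as α and λ are varied.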
The phase diagram reveals that the critical ω c in this system is strikingly small, implying that the current human mobility pattern across Europe is orders of magnitude above the rumour threshold. Fig. 3: Results from numerical simulations of a rumour spreading in real-world networks. a , b , The average final fraction of stiflers as a function of the rescaling mobility factor in the European commuting and transportation network ( a ) and the rescaling factor of the inter-community interaction probability within the DBLP collaboration network ( b ). Simulations used a spreading probability of λ = 0.1 and stifling probability of α = 1. Error bars represent the 90% reference range of the simulations where at least 5% of the subpopulations experienced a rumour epidemic. c , d , The temporal evolution of the rumour spreading process taken from individual simulations corresponding to the ω values highlighted in a , b with red circles. c , In the European commuting and transportation network, the rumour was initiated in Paris, France, by seeding one individual. d , In the DBLP collaboration network, nodes represent publication venues where node size corresponds to the population size and the line thickness corresponds to the amount of inter-community interaction between each pair of communities. The activity probability per individual is a = 0.1, while the inter- and intra-community probability is derived from the data. Map in c from GADM ( ). We also model the collaboration process of the DBLP co-authorship network using the modular activity-driven network scheme 47 . Nodes in the network are individual researchers that can form collaborations with others either within or outside their own communities. In particular, a link represents co-authorship on at least one paper and each community is a specific publication venue.
We measure the amount of interaction μ between communities by calculating the frequency of cross-community links relative to the total number of internal and external links. We simulated the rumour model to analyse how information would propagate in this system by rescaling the actual individuals’ tendency to link outside their current community (analogous to the μ parameter in the virtual structured population framework) by a factor ω to study the effects of lowering inter-community interaction rates. In Fig. 3b,d , we show the results of this rescaling on the final fraction of affected disciplinary communities and observe a transition point characterizing the amount of inter-community collaboration needed for a rumour or idea to spread globally. Similar to the transportation network, the critical rescaling value ω c is extremely small. Both data-driven network applications extend our modelling framework by incorporating heterogeneous and non-trivial subpopulation interactions. Consequently, the assumptions of statistical equivalence of nodes and an uncorrelated network structure made in our calculations are no longer valid. Therefore, in these realistic systems, the critical value ω c cannot be easily computed analytically. However, we do see a similar phenomenology between the synthetic and data-driven structured populations, in that a critical transition point exists in the amount of interaction between subpopulations or communities necessary for a rumour to propagate. In both cases, the critical transition point ( ω c ) is very small, highlighting the role of our interconnected world in facilitating the diffusion of information across geographical boundaries as well as through disciplinary communities. However, this result is not necessarily universal across all types of structured population.
Information spreading is fundamentally dependent on the strength of interactions among elements of the network, thus calling for specific case-by-case studies on the location of critical transition points in real-world situations. Discussion and conclusion In this work, by using a classic rumour spreading model lacking any critical threshold in a single homogeneous population, we show that the contagion process in structured populations exhibits a phase transition with a critical threshold dependent on the amount of interactions/coupling between subpopulations. The analytical and numerical results presented here emphasize the importance of accounting for the complex structure observed in socio-technical systems when studying social contagion processes. The features observed in real-world systems can potentially alter the theoretical picture and the understanding provided by studying only stylized models. Our results show that successful information or rumour spreading is the result of a complex interaction between the intrinsic properties of the contagion process and the dynamics of interactions between the subpopulations/communities that comprise social systems. The flexibility of the framework allows for further study of different types of emergent behaviour that may be more complex than the rumour model used here. For example, one could study complex contagions in which, in order for the contagion to spread, individuals must be contacted by multiple neighbours in their social network. Additional features can also be incorporated into the interaction process and network structure, such as age-dependent contact patterns, socio-economic conditions and data-driven human mobility. These features have the potential to provide not only unexpected results of a theoretical nature but also actionable insights crucial to understanding and controlling social contagion phenomena.
Methods Final rumour size in a single population The mean-field rate equations for the Maki–Thompson rumour model in a homogeneously mixed population are listed below. The densities of spreaders ( S ), ignorants ( I ) and stiflers ( R ) in a population are defined by s = S ∕ N , i = I ∕ N and r = R ∕ N , respectively, where N is the total number of individuals in the population, yielding: $$\begin{array}{ll}\frac{{\mathrm{d}}i}{{\mathrm{d}}t}&=-\lambda s(t)i(t)\\ \frac{{\mathrm{d}}s}{{\mathrm{d}}t}&=\lambda s(t)i(t)-\alpha s(t)(s(t)+r(t))\\ \frac{{\mathrm{d}}r}{{\mathrm{d}}t}&=\alpha s(t)(s(t)+r(t))\end{array}$$ (8) Using the initial conditions i (0) ≈ 1 and r (0) = 0, a solution to these differential equations can be obtained analytically in the infinite time limit where \(r_{\infty} = 1-{\rm{e}}^{{-(1+{\lambda}/{\alpha})}{r_{\infty}}}\) . This transcendental equation has a trivial solution when r ∞ = 0, but a non-trivial solution when λ ∕ α + 1 > 1, confirming that a rumour will propagate through a population and reach a macroscopic fraction of individuals 37 . Assuming that \((1+\frac{\lambda }{\alpha }){r}_{\infty }\ll 1\) , we can obtain the approximate solution: $${r}_{\infty }\simeq \frac{2\frac{\lambda }{\alpha }}{{(1+\frac{\lambda }{\alpha })}^{2}}\approx 2\frac{\lambda }{\alpha }$$ (9) We can see that the final density of stiflers scales with λ ∕ α . This relationship is verified through numerical simulations of the rumour model for a single homogeneously mixed population as detailed in the Supplementary Information . Average spreading time The average spreading time, 〈 τ 〉, is the time elapsed from when the individual was first told the rumour to the time the individual became a stifler. Supplementary Fig. 1b shows the average spreading time as a function of \(\frac{1}{\lambda }+\frac{1}{\alpha }\) from simulations done on a single population. 
A straight line fitted to the data produces the equation: $$\langle \tau \rangle =\frac{1}{\lambda }+\frac{1}{\alpha }+\frac{1}{2}$$ (10) Number of potential spreaders The number of potential spreaders β that could transmit the rumour to another population can be calculated as \(\beta =\langle \tau \rangle \overline{N}{r}_{\infty }\) , where 〈 τ 〉 is the average amount of time an individual remains a spreader and \(\overline{N}{r}_{\infty }\) is the final average number of individuals that know the rumour at the end of the spreading process. Using the approximated equations for the final stifler density as well as the average spreading time, one obtains: $$\beta ={r}_{\infty }\overline{N}\langle \tau \rangle =\frac{2(1+\frac{\lambda }{\alpha })\overline{N}}{\alpha }$$ (11) In the modular activity-driven network model, the expression for β is not altered by the fact that individuals are activated with probability a . The average spreading time 〈 τ 〉 should be measured by considering the duration of the contagion process, which should be on the order of 1 ∕ a (average number of time steps between activations), and the activity of the individual at each time step, which is a . It follows that these terms cancel each other out, so the effective number of interactions of each spreader will not depend on a , as also shown numerically in the Supplementary Information . Data availability The data represented in Fig. 3b are available through the Stanford Network Analysis Project (SNAP) 48 . All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Code availability Code is available upon request from the corresponding author.
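As a direct check of the Methods results, equations (8) can be integrated numerically and the final stifler density compared with both the transcendental fixed point and the approximation (9). A forward-Euler sketch with illustrative parameter values:

```python
import math

def final_stifler_density(lam, alpha, dt=1e-3, s0=1e-5, max_steps=2_000_000):
    """Forward-Euler integration of the mean-field equations (8)."""
    i, s, r = 1.0 - s0, s0, 0.0
    for _ in range(max_steps):
        if s < 1e-12:       # spreaders have died out
            break
        di = -lam * s * i
        ds = lam * s * i - alpha * s * (s + r)
        dr = alpha * s * (s + r)
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr
    return r

lam, alpha = 0.1, 1.0
r_inf = final_stifler_density(lam, alpha)
# transcendental fixed point: r = 1 - exp(-(1 + lam/alpha) * r)
residual = r_inf - (1.0 - math.exp(-(1.0 + lam / alpha) * r_inf))
approx = 2 * (lam / alpha) / (1 + lam / alpha) ** 2   # Eq. (9)
print(f"r_inf = {r_inf:.4f}, residual = {residual:.1e}, Eq. (9): {approx:.4f}")
```

For λ/α = 0.1 the integrated value sits close to both the fixed point and the 2λ/α scaling, illustrating why the final rumour size in a single population has no threshold in λ.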
Think of all the false rumors that went viral about COVID-19. It got so bad that the World Health Organization called it an "infodemic." Whether it's a hoax or a viral conspiracy theory, information travels fast these days. Just how fast and far information moves depends on who shares it, and where, from discussions on social media to conversations with fellow commuters on your way to work. So, how can our interactions and their infrastructures affect the spread of rumors and information? That's a question that researchers are beginning to answer with complex math models of social contagion, the concept that social behavior and ideas spread like a pathogen. "The thing with social contagion is that it is like the threading of some type of behavior, or an idea, or information," says Jessica Davis, a third-year doctoral student at Northeastern's Network Science Institute. Davis recently led a study that uses mathematical equations to model the way rumors and information spread in different types of environments. In a paper published Monday in Nature Physics, Davis's team outlined a new way to incorporate into their calculations aspects of the way information is shared in the physical world—such as people's commute to work and the online groups they interact with—that might influence how information spreads. The model lays the groundwork for more realistic ways to study how information travels, Davis says. "These models can be used to point out different structural, social, and other factors," she says, "that aren't normally taken into account when you're thinking about how information spreads." Alessandro Vespignani, Sternberg Family Distinguished University Professor of physics, computer science, and health sciences, says the inclusion of such realistic features is essential to accurately model the way information spreads in real time. Vespignani, a co-author of the study, has also modeled the spread of the COVID-19 outbreak.
"The study opens the path to more realistic modeling of the diffusion of information and misinformation that takes into account the geographical and social structure of social networks," he says. The team's approach to modeling the way information spreads among people is based on similar efforts by Vespignani and other scientists to model how infectious diseases spread, and takes advantage of data already available from epidemiological studies. "We have a lot more data in the world now, and we can use it to understand how things are spreading," Davis says. "We have people who are using transportation networks, people using Google, Twitter, and other social media, to get an understanding of how a disease is spreading." Davis and her team also used a classic rumor propagation model as the basis of their approach. That approach, known as the Maki-Thompson model, factors in people who spread, ignore, and refrain from spreading the rumor. All of those individuals mirror the function of infected, susceptible, and recovered people in models of disease and infection. In their study, the team tested how people's ability to move and travel in Europe could influence the spread of a rumor. Other tests included models constrained to online databases to simulate the way information permeates different academic disciplines. The idea is to calculate the tipping point at which rumors and information go viral. "We write down a set of equations, and we can solve for this threshold," Davis says. "It's a function of both the rumor model parameters, as well as the structure of this network." Those equations are what social contagion models need in order to be as insightful as they can be, Davis says. And, in the long run, it's what could set network scientists up to model the spread of information in the real world with more precision, including the roles that different groups of people play.
"Some types of information spreading in the teenage range might not affect the elderly population," Davis says. "If we could understand who is being affected by that information, that could help us or help maybe social media sites monitor or get a better understanding of who's been impacted by this information."
10.1038/s41567-020-0810-3
Physics
Time-reversal of an unknown quantum state
A. V. Lebedev et al. Time-reversal of an unknown quantum state, Communications Physics (2020). DOI: 10.1038/s42005-020-00396-0 Seth Lloyd et al. Quantum principal component analysis, Nature Physics (2014). DOI: 10.1038/nphys3029 Gonzalo Manzano et al. Quantum Fluctuation Theorems for Arbitrary Environments: Adiabatic and Nonadiabatic Entropy Production, Physical Review X (2018). DOI: 10.1103/PhysRevX.8.031037 Journal information: Communications Physics , Nature Physics , Physical Review X
http://dx.doi.org/10.1038/s42005-020-00396-0
https://phys.org/news/2020-08-time-reversal-unknown-quantum-state.html
Abstract For decades, researchers have sought to understand how the irreversibility of the surrounding world emerges from the seemingly time-symmetric, fundamental laws of physics. Quantum mechanics offered a clue: the final irreversibility is set by the measurement procedure, and time reversal requires complex conjugation of the wave function, an operation too complex to appear spontaneously in nature. Building on this Landau-Wigner conjecture, it became possible to demonstrate that spontaneous time reversal is exponentially improbable in nature and to design an algorithm artificially reversing the arrow of time for a given quantum state on the IBM quantum computer. However, the implemented arrow-of-time reversal embraced only known states initially disentangled from the thermodynamic reservoir. Here we develop a procedure for reversing the temporal evolution of an arbitrary unknown quantum state. This opens the route to general universal algorithms sending the temporal evolution of an arbitrary system backward in time. Introduction The origin of the arrow of time, the concept coined to express the one-way direction of time, is inextricably associated with the Second Law of Thermodynamics 1 , which declares that entropy growth stems from the system’s energy dissipation to the environment 2 , 3 , 4 , 5 , 6 . Thermodynamic considerations 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , combined with the quantum mechanical hypothesis that the irreversibility of the evolution of a physical system is related to the measurement procedure 18 , 19 , and to the necessity of the anti-unitary complex conjugation of the wave function of the system for time reversal 20 , led to the understanding that energy dissipation can be treated in terms of the system’s entanglement with the environment 1 , 21 , 22 , 23 , 24 .
The quantum mechanical approach to the origin of the entropy growth problem was crowned by the finding that in a quantum system initially not correlated with an environment, a local violation of the second law can occur 25 . Extending the solely quantum viewpoint on the arrow of time and elaborating on the implications of the Landau–Neumann–Wigner hypothesis 18 , 19 , 20 made it possible to quantify the complexity of reversing the evolution of a known quantum state and to realize the reversal of the arrow of time on the IBM quantum computer 26 . In all these past studies, a thermodynamic reservoir at finite temperature appeared as a high-entropy stochastic bath thermalizing a given quantum system and thus increasing its thermal disorder, hence its entropy. We find that, most unexpectedly, it is exactly the presence of the reservoir that makes it possible to prepare high-temperature thermal states of an auxiliary quantum system governed by the same Hamiltonian \(\hat{H}\) as the Hamiltonian of the given system. This enables us to devise the operator of the backward-time evolution \(\hat{U}=\exp (i\hat{H}t/\hslash )\) reversing the temporal dynamics of the given quantum system. The necessary requirement is that the dynamics of both the auxiliary and the given system are governed by the same Hamiltonian \(\hat{H}\) . The time-reversal protocol comprises a cyclic sequential process of quantum computation on the combined auxiliary and given systems and a thermalization process of the auxiliary system. A universal time-reversal procedure of an unknown quantum state defined through the density matrix \(\hat{\rho }(t)\) of a quantum system \({\mathcal{S}}\) will be described as a reversal of the temporal system evolution \(\hat{\rho }(t)\to \hat{\rho }(0)=\exp (i\hat{H}t/\hslash )\hat{\rho }(t)\exp (-i\hat{H}t/\hslash )\) returning it to the system’s original state \(\hat{\rho }(0)\) .
Importantly, we need not know the quantum state of this system in order to implement the arrow of time reversal. A dramatic qualitative advance of the new protocol is that it eliminates the need to keep an exponentially huge record of classical information about the values of the state amplitudes. Moreover, the crucial step compared with the protocol of time reversal of the known quantum state 26 is that we now lift the requirement that initially the evolving quantum system must be a pure uncorrelated state. Here, we develop a procedure where the initial state can be a mixed state and, therefore, include correlations due to the system’s past interaction with the environment. Results Universal procedure The calculations are organized as follows. First, we describe how the time reversal of an unknown state can be implemented in a universal manner and estimate its computational complexity. Next, we outline a somewhat more resource-demanding procedure, where, however, one can relax the need to know the Hamiltonian \(\hat{H}\) . Then we show that if in addition to the quantum system \({\mathcal{S}}\) one is provided with an auxiliary system \({\mathcal{A}}\) , so that \(\dim {\mathcal{S}}=\dim {\mathcal{A}}\) , whose dynamics is governed by the same Hamiltonian \(\hat{H}\) , one can devise \({\hat{U}}^{\dagger }(t)\) without knowing the exact form of \(\hat{H}\) . Finally, we discuss how partial knowledge of the state \(\hat{\rho }(t)\) can reduce and optimize the complexity of the time-reversal procedure. The starting point of the reversal procedure is drawn from the observation of S. Lloyd et al. 27 that having an ancilla system in a state \(\hat{\sigma }\) one can approximately construct a unitary operation \(\exp (-i\omega \hat{\sigma }\delta t)\) acting on a system \({\mathcal{S}}\) simulating its evolution under Hamiltonian \({\hat{H}}_{a}=\hslash \omega \hat{\sigma }\) during the infinitesimal time interval δ t .
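This ancilla construction of ref. 27 acts by applying a partial SWAP to the joint state \(\hat{\rho }\otimes \hat{\sigma }\) and tracing out the ancilla, which reproduces \(\exp (-i\omega \hat{\sigma }\delta t)\hat{\rho }\exp (+i\omega \hat{\sigma }\delta t)\) to first order in ωδt. A minimal numerical sketch of one such step (ℏ = 1; the dimension and the random states are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def random_density_matrix(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M)

rho = random_density_matrix(d)
sigma = random_density_matrix(d)

# SWAP operator on the joint d*d space: S |i>|j> = |j>|i>
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1.0

theta = 1e-3  # omega * delta_t, small
# S^2 = 1, so exp(-i theta S) = cos(theta) I - i sin(theta) S
U = np.cos(theta) * np.eye(d * d) - 1j * np.sin(theta) * S

joint = np.kron(rho, sigma)
out = U @ joint @ U.conj().T
# partial trace over the ancilla (second tensor factor)
rho_out = np.einsum('iaka->ik', out.reshape(d, d, d, d))

# first-order prediction: rho - i theta [sigma, rho]
pred = rho - 1j * theta * (sigma @ rho - rho @ sigma)
print(np.abs(rho_out - pred).max() < 10 * theta**2)  # True: residual is O(theta^2)
```

Repeating the step N times with fresh ancilla copies composes the finite-time evolution quoted below, with the stated (ωτ)²/N accuracy.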
Here, ω refers to some arbitrary rate which, for the moment, we leave unspecified. Having N identical copies of ancillas, one generates a finite time evolution \(\rho (t)\to \rho (t+\tau )={e}^{-i\omega \tau \hat{\sigma }}\hat{\rho }(t){e}^{+i\omega \tau \hat{\sigma }}\) over the time interval τ = N δ t with the accuracy ∝ (ωτ)²/ N (see “Methods”). The first step of the time-reversal procedure is then constructing the density matrix \(\hat{\sigma }\) . Consider the density operator defined by the given finite-dimensional Hamiltonian \(\hat{H}\) having the maximal eigenvalue \({\epsilon }_{\max }\) : $$\hat{\sigma }=\frac{1}{Z}\left({\mathbb{1}}{\epsilon }_{\max }-\hat{H}\right),$$ (1) where \(Z={\epsilon }_{\max }\dim {\mathcal{S}}-\,{\text{Tr}}\,\{\hat{H}\}\) is the normalization factor. Then the Lloyd–Mohseni–Rebentrost (LMR) procedure maps the initial density matrix \(\hat{\rho }(t)\) to $$\hat{\rho }(t)\to \exp \left(\frac{i\omega }{Z}\hat{H}\tau \right)\hat{\rho }(t)\exp \left(-\frac{i\omega }{Z}\hat{H}\tau \right).$$ (2) One sees that application of the LMR procedure with the specific density matrix σ approximately realizes the time-reversed evolution of the system $$\hat{\rho }(t)\to \hat{\rho }\left(t-\frac{\hslash \omega }{Z}\ \tau \right)+\delta \hat{\rho }(\tau )$$ (3) to a backward delay τ R = ( ℏ ω / Z ) τ . The accuracy \(\delta \hat{\rho }(\tau )\) of such a time-reversal procedure is given by (see “Methods”), $$| | \delta \hat{\rho }(\tau )| | \le \frac{{(\omega \tau )}^{2}}{N}\ \left(| | \hat{\sigma }| | +| | \hat{\rho }(t)| | +2| | \hat{\rho }(t)| | \ | | \hat{\sigma }| {| }^{2}\right),$$ (4) where \(| | \hat{A}| |\) is the operator norm: \(| | \hat{A}| | ={\sup }_{\left|\psi \right\rangle }\sqrt{\left\langle \psi \right|\hat{A}\left|\psi \right\rangle /\langle \psi | \psi \rangle }\) . From Eqs. ( 3 ) and ( 4 ), one draws two important conclusions. First, the above time-reversal procedure for a backward delay τ R requires time τ to be completed.
Therefore, while exercising the reversal, the system still maintains the forward evolution governed by its own Hamiltonian. Taking this into account, one has to modify Eq. ( 3 ) to $$\hat{\rho }(t)\to \hat{\rho }\left(t-\tau \frac{\hslash \omega }{Z}+\tau \right),$$ (5) which immediately poses the constraint on the operation rate ω of the LMR procedure: the actual time reversal occurs only for ℏ ω > Z . If this constraint is not satisfied, the time-reversal procedure only slows down the forward time evolution of the system. For a quantum system \({\mathcal{S}}\) , the threshold rate Z / ℏ is proportional to the Hilbert space dimension \(\dim {\mathcal{S}}\) : \(Z=\hslash \tilde{\omega }\dim {\mathcal{S}}\) with \(\hslash \tilde{\omega }=\left({\epsilon }_{\max }-\,{\text{Tr}}\,\{\hat{H}\}/\dim {\mathcal{S}}\right)\) , which is typically an exponentially large number. In particular, in order to perform the time reversal at the same rate as the forward time evolution, one has to demand ω > 2 Z / ℏ . This straightforwardly brings the second conclusion: since ω is large, the infinitesimal time step δ t of the procedure has to be small so that ω δ t ≪ 1, and therefore the number N has to be large. Indeed, fixing the backward delay τ R , the operation rate ω = 2 Z / ℏ , and setting the reversal accuracy ϵ : \(| | \delta \hat{\rho }(\tau ={\tau }_{R})| | \le \epsilon\) one finds from Eq. ( 4 ): $${N}_{\epsilon }=\frac{| | \hat{\rho }(t)| | }{\epsilon }\ {\left(\dim {\mathcal{S}}\frac{{\tau }_{R}}{\tilde{\tau }}\right)}^{2},$$ (6) where \(\tilde{\tau }={\tilde{\omega }}^{-1}\) is the typical timescale of the system dynamics, and \(| | \hat{\sigma }| | \propto {(\dim {\mathcal{S}})}^{-1}\ll | | \hat{\rho }(t)| |\) is assumed. Equation ( 6 ) implies that the computational complexity of the time-reversal procedure for an unknown quantum state is proportional to the square of the system’s Hilbert space dimension.
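In the N → ∞ limit the composed LMR steps implement the ideal unitary \({e}^{-i\omega \tau \hat{\sigma }}\), and since \(\hat{\sigma }\propto ({\epsilon }_{\max }{\mathbb{1}}-\hat{H})\), this equals \({e}^{+i\omega \hat{H}\tau /Z}\) up to a global phase, i.e. backward evolution. A small numpy sketch of this limit (ℏ = 1; random Ĥ and ρ̂ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# random Hermitian Hamiltonian (hbar = 1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2

def u_of(M, c):
    """exp(-1j * c * M) for Hermitian M, via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return (v * np.exp(-1j * c * w)) @ v.conj().T

# ancilla state of Eq. (1)
eps_max = np.linalg.eigvalsh(H).max()
Z = eps_max * d - np.trace(H).real
sigma = (eps_max * np.eye(d) - H) / Z

# forward-evolve a random mixed state for time t
t = 1.7
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = B @ B.conj().T
rho0 /= np.trace(rho0).real
U = u_of(H, t)
rho_t = U @ rho0 @ U.conj().T

# ideal LMR evolution exp(-i omega sigma tau) with omega*tau = Z*t,
# i.e. a backward delay tau_R = omega*tau/Z = t (cf. Eq. (3), hbar = 1)
V = u_of(sigma, Z * t)
rho_back = V @ rho_t @ V.conj().T
print(np.allclose(rho_back, rho0))  # True: the state returns to rho(0)
```

The finite-N procedure approximates this unitary at the (ωτ)²/N accuracy of Eq. (4), which is where the dim²-scaling cost of Eq. (6) comes from.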
In contrast, the complexity of time reversal of a known pure quantum state \(\hat{\rho }(t)=\left|\psi (t)\right\rangle \left\langle \psi (t)\right|\) is proportional to the dimension of the Hilbert space, which is swept by the system in the course of its forward time evolution \(\left|\psi (0)\right\rangle \to \left|\psi (t)\right\rangle\) 26 . As follows from Eq. ( 6 ), the time-reversal computational cost of an unknown pure state is maximal, since \(| | \hat{\rho }| | =1\) in this case. For a mixed high-entropy state \(\hat{\rho }\) , the reversal complexity is reduced: given a state \(\hat{\rho }\) with the entropy \({S}_{\rho }={\mathrm{ln}}\,(\dim {\mathcal{S}})-k \, {\mathrm{ln}}\,(2)\) , where only \(k\ll {\mathrm{log}\,}_{2}(\dim {\mathcal{S}})\) bits of information are known, the upper estimate for the reversal complexity is given by (see “Methods”) $${N}_{\epsilon }\le \frac{k}{\epsilon \,{\mathrm{log}\,}_{2}(\dim {\mathcal{S}})}\ {\left(\dim {\mathcal{S}}\frac{{\tau }_{R}}{\tilde{\tau }}\right)}^{2}.$$ (7) Having complete information about the Hamiltonian \(\hat{H}\) allows one to construct a corresponding quantum circuit realizing the forward time evolution operator \(\hat{U}=\exp (-i\hat{H}t/\hslash )\) through a specific fixed set \({\mathcal{G}}\) of universal quantum gates: \(\hat{U}={\hat{U}}_{1}\cdots {\hat{U}}_{N}\) , \({\hat{U}}_{i}\in {\mathcal{G}}\) . Since \({\mathcal{G}}\) is a universal set, for every \({\hat{U}}_{i}\in {\mathcal{G}}\) one can construct the inverse gate \({\hat{U}}_{i}^{\dagger }\) . Therefore, the time-reversed evolution operator \({\hat{U}}^{\dagger }\) can be constructed in a purely algorithmic way given the gate decomposition of \(\hat{U}\) . Thus, when the Hamiltonian is known, the above procedure may appear extremely ineffective for a practical time-reversal task.
However, the situation becomes completely different if we relax the requirement of exact knowledge of \(\hat{H}\) and assume that one, instead, is provided with an equivalent copy of the system \({\mathcal{S}}\) governed by the same Hamiltonian \(\hat{H}\) . Auxiliary system Let one be equipped with a thermodynamic bath at the temperature T = β⁻¹ in addition to the ancilla. One can then thermalize the ancilla and prepare it in the equilibrium state \({\sigma }_{\beta }={Z}_{\beta }^{-1}\exp (-\beta \hat{H})\) with \({Z}_{\beta }={\rm{Tr}}\{\exp (-\beta \hat{H})\}\) being a statistical sum. For high-enough temperature β → 0, one has \(\beta {\epsilon }_{\max }\ll 1\) and, therefore, \({\sigma }_{\beta }\approx {Z}_{\beta }^{-1}(1-\beta \hat{H})\) , which gives the desired state of the ancilla to implement the reverse evolution through the LMR procedure. In this case, $$\hat{\rho }(t)\to \hat{\rho }\left(t-\frac{\hslash \omega \beta }{{Z}_{\beta }}\ \tau +\tau \right)+\delta \hat{\rho }(\tau ).$$ (8) As can be seen from the above equation, the actual time reversal requires the operation rate of the LMR procedure to exceed the threshold $$\omega \, > \, {\omega }_{{\rm{th}}}=\frac{T}{\hslash }{Z}_{\beta }\approx \frac{T}{\hslash }\dim {\mathcal{S}}.$$ (9) The approximation error \(\delta \hat{\rho }\) now splits into two contributions, \(\delta \hat{\rho }=\delta {\hat{\rho }}_{1}+\delta {\hat{\rho }}_{2}\) , where \(\delta {\hat{\rho }}_{1}\) is the approximation error resulting from the LMR procedure, see Eq. ( 4 ) with \(\hat{\sigma }\to {\hat{\sigma }}_{\beta }\) , while \(\delta {\hat{\rho }}_{2}\) describes the error due to the β expansion of the thermal state \({\hat{\sigma }}_{\beta }\) . Assuming ω = 2 ω th , i.e.
the backward evolution proceeds at the same rate as the forward time evolution, one finds $$\delta {\hat{\rho }}_{2}(\tau )=-i\frac{\tau \beta }{\hslash }\ \left[{\hat{H}}^{2},\hat{\rho }\left(t-\tau \right)\right].$$ (10) Then for \(| | {\hat{\sigma }}_{\beta }| | \ll | | \hat{\rho }(t)| |\) one can estimate the net error as $$| | \delta \hat{\rho }(\tau )| | \le \left(4\frac{{Z}_{\beta }^{2}}{N}\ {\left(\frac{\tau }{{\tau }_{\beta }}\right)}^{2}+\frac{\tau }{{\tau }_{\beta }}{(\beta {\epsilon }_{\max })}^{2}\right)| | \hat{\rho }(t-\tau )| | .$$ (11) where τ β = ℏ β . The two error contributions in Eq. ( 11 ) depend oppositely on the inverse temperature: the error due to the thermal expansion (second term) decreases as β → 0, while the error due to the LMR dynamics (first term) increases with decreasing β . For a given reverse time delay τ and a number of LMR iterations \(N\gg {Z}_{\beta }^{2}\approx {(\dim {\mathcal{S}})}^{2}\) , one has an optimal temperature $$\beta {\epsilon }_{\max }={\left(8\frac{{Z}_{\beta }^{2}}{N}\frac{{\epsilon }_{\max }\tau }{\hslash }\right)}^{1/3},$$ (12) and the corresponding net accuracy of the reversal procedure is then given by $$| | \delta \hat{\rho }(\tau )| | =\epsilon =3{\left(\frac{{Z}_{\beta }^{2}}{N}\right)}^{1/3}{\left(\frac{{\epsilon }_{\max }\tau }{\hslash }\right)}^{4/3}| | \hat{\rho }(t-\tau )| | .$$ (13) Comparing with the case of the known-Hamiltonian time-reversal procedure, see Eq. ( 6 ), the reversal complexity here is again proportional to the square of the system’s Hilbert space dimension but, at the same time, has a more adverse scaling with the reversal duration and the net accuracy.
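The optimal temperature (12) is simply the minimizer of the bound (11), trading the 1/β² LMR term against the term linear in β from the thermal expansion (the common factor \(| | \hat{\rho }(t-\tau )| |\) does not affect the minimizer). A quick numerical check with illustrative parameter values (ℏ = 1):

```python
import numpy as np

# illustrative parameters, hbar = 1
Z, N, tau, eps_max = 10.0, 1e6, 2.0, 1.0

def error_bound(beta):
    """Eq. (11) without the common ||rho|| factor; tau_beta = beta."""
    return 4 * Z**2 / N * (tau / beta) ** 2 + (tau / beta) * (beta * eps_max) ** 2

betas = np.linspace(1e-3, 5.0, 200_000)
beta_numeric = betas[np.argmin(error_bound(betas))]
beta_eq12 = (8 * Z**2 / N * eps_max * tau) ** (1 / 3) / eps_max  # Eq. (12)
print(abs(beta_numeric - beta_eq12) / beta_eq12 < 1e-2)  # True
```

The grid minimum and the closed-form expression agree, confirming the β³ ∝ Z²τ/N structure of the optimum.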
The above analysis does not need any prior information about the state \(\hat{\rho }\) ; this generality, however, requires a very high temperature of the auxiliary thermostat in order to cover all the possible energy states of the system’s Hilbert space, which finally results in a tremendously high rate \(\sim \dim {\mathcal{S}}/\hslash \beta\) of the LMR procedure. If, however, some information about the energy content of the state \(\hat{\rho }\) is available, one can appreciably reduce the reversal cost. Indeed, let the state \(\hat{\rho }\) have the average energy \(\bar{E}={\rm{Tr}}\{\rho \hat{H}\}\) with an energy variance \({(\delta E)}^{2}={\rm{Tr}}\{\hat{\rho }((\hat{H}-\bar{E})^2)\}\) . Then one can present the density matrix as the sum of a low-energy contribution, \({\hat{\rho }}_{\,{<}\,}=\hat{P}\hat{\rho }\hat{P}/{\rm{Tr}}\{\hat{P}\hat{\rho }\}\) , and a high-energy remainder \({\hat{\rho }}_{ \,{> }\,}=(1-\hat{P})\hat{\rho }(1-\hat{P})/{\rm{Tr}}\{(1-\hat{P})\hat{\rho }\}\) , where \(\hat{P}={\sum }_{E{<}{E}_{\max }}\left|E\right\rangle \left\langle E\right|\) is a projection operator to the subspace with energies below some cutoff energy \({E}_{\max } \, > \, \bar{E}\) : \(\hat{\rho }=(1-{\epsilon }_{{\rm{E}}}){\hat{\rho }}_{\,{<}\,}+{\epsilon }_{{\rm{E}}}{\hat{\rho }}_{ \,{> }\,}\) . The additional error due to truncating the system’s Hilbert space to the low-energy subspace is given by the constant ϵ E , which is the probability for the system to be found in an energy state \(E \, > \, {E}_{\max }\) , and, according to the Chebyshev inequality, is bounded by $${\epsilon }_{{\rm{E}}}\le {\left(\frac{{E}_{\max }-\bar{E}}{\delta E}\right)}^{-2}.$$ (14) Single-particle wave packet Now, we consider an exemplary time-reversal procedure for a spreading single-particle wave packet with a quadratic spectrum.
Let the packet at the time t = 0 be localized at the origin and have the Lorentzian shape with the width ξ 0 : $$\Psi (x,0)=\sqrt{\frac{{\xi }_{0}}{2\pi }}\ \frac{2{\xi }_{0}}{{x}^{2}+{\xi }_{0}^{2}}\equiv \sum \limits_{p}\sqrt{2\pi {\xi }_{0}}{e}^{-| p| {\xi }_{0}}{e}^{ipx}.$$ (15) A subsequent free evolution with the quadratic Hamiltonian \(\hat{H}={\hslash }^{2}{\hat{p}}^{2}/2m\) during the time interval τ > 0 broadens the particle’s wave function into $$\Psi (x,\tau )\approx \frac{{e}^{-| x| m{\xi }_{0}/\hslash \tau }}{\sqrt{2\pi \hslash \tau /m}}\exp \left(\frac{im{x}^{2}}{2\hslash \tau }\right),$$ (16) having the typical size ξ τ = ℏ τ /( m ξ 0 ) or, equivalently, \({\xi }_{\tau }/{\xi }_{0}=4\bar{E}\tau /\hslash\) , where \(\bar{E}={\hslash }^{2}/4m{\xi }_{0}^{2}\) is the average energy carried by the wave packet. The statistical sum Z β within the volume ~ ξ τ is given by $${Z}_{\beta } \sim {\xi }_{\tau }\int\ dE{\nu }_{{\rm{1D}}}(E){e}^{-\beta E} \sim \frac{\tau }{\hslash }\sqrt{\frac{\bar{E}}{\beta }},$$ (17) where \({\nu }_{{\rm{1D}}}(E)={(m/2{\pi }^{2}{\hslash }^{2}E)}^{1/2}\) is the one-dimensional density of states. Assuming \({E}_{\max } \sim \bar{E}\) , the reversal complexity for the time-reversal procedure with the accuracy ϵ is given by (see Eqs. ( 12 ) and ( 13 )) $${N}_{\epsilon } \sim \frac{1}{{\epsilon }^{4}}{\left(\frac{\bar{E}\tau }{\hslash }\right)}^{7}=\frac{1}{{\epsilon }^{4}}{\left(\frac{{\xi }_{\tau }}{{\xi }_{0}}\right)}^{7}.$$ (18) The optimal inverse temperature of the thermostat is then given by $$\beta \sim \frac{1}{\bar{E}}\frac{\epsilon }{{\xi }_{\tau }/{\xi }_{0}}.$$ (19) Comparing this with the reversal complexity of a known state of the wave packet, \({N}_{\epsilon }^{\prime} \sim {\epsilon }^{-1}({\xi }_{\tau }/{\xi }_{0})\) , see ref. 26 , one finds that the reversal of an unknown wave-packet state is a far more laborious computational task. Discussion We have described the time-reversal procedure of an unknown mixed quantum state. 
The procedure relies on the ability to perform the LMR protocol and on the existence of an ancilla system whose dynamics is governed by the same Hamiltonian as that of the reversed system, which need not be known to us. The reversal procedure comprises N ≫ 1 sequential applications of the LMR procedure to the joint state of the system and ancilla prepared in a thermal state. In contrast to the known-state reversal procedure, the introduced algorithm does not require keeping information about all amplitudes of the reversed state. Yet the reversal complexity, given by N , typically scales as the squared dimension of the Hilbert space spanned by the unknown state. Moreover, the operation rate of the LMR procedure has to be sufficiently high to overrun the forward time evolution of the reversed system during the execution of the reversal protocol. The experimental realization of such a protocol is a feasible yet challenging task. As a first step, it will require an upgrade of the existing design of quantum chips. In particular, one needs a set of interacting qubits (denoted by \({{\mathcal{Q}}}_{{\rm{A}}}\) ) capable of being thermalized on demand by connecting them to a high-temperature environment. For superconducting qubits 28 , this can be implemented by coupling them to a transmission line into which high-temperature thermal radiation is fed whenever the qubits need to be set into a high-temperature state. Next, a second set of qubits \({{\mathcal{Q}}}_{{\rm{B}}}\) , \(\dim {{\mathcal{Q}}}_{{\rm{B}}}=\dim {{\mathcal{Q}}}_{{\rm{A}}}\) , is required, in which one can store a quantum state prepared in the set \({{\mathcal{Q}}}_{{\rm{A}}}\) . Then the time-reversal procedure goes as follows. 
First, one prepares some state ψ A (0) of the qubits \({{\mathcal{Q}}}_{{\rm{A}}}\) , and lets it evolve according to an intrinsic Hamiltonian of the qubits \({{\mathcal{Q}}}_{{\rm{A}}}\) : \({\psi }_{{\rm{A}}}(0)\to {\psi }_{{\rm{A}}}(\tau )={e}^{-i{\hat{H}}_{{\rm{A}}}\tau /\hslash }{\psi }_{{\rm{A}}}(0)\) . Second, at the end of the evolution, one swaps the states between \({{\mathcal{Q}}}_{{\rm{A}}}\) and \({{\mathcal{Q}}}_{{\rm{B}}}\) , ψ A ↔ ψ B . We assume that the set \({{\mathcal{Q}}}_{{\rm{B}}}\) can keep its quantum state untouched for a sufficiently long time. The procedure then continues as described above: one subsequently thermalizes the set \({{\mathcal{Q}}}_{{\rm{A}}}\) and implements the joint LMR evolution \({e}^{-i\omega \delta t{\hat{S}}_{{\rm{AB}}}}{\hat{\rho }}_{{\rm{A}}}\otimes {\hat{\rho }}_{{\rm{B}}}{e}^{i\omega \delta t{\hat{S}}_{{\rm{AB}}}}\) . As a result, the qubits \({{\mathcal{Q}}}_{{\rm{B}}}\) will undergo the time-reversed dynamics under the same Hamiltonian \({\hat{H}}_{{\rm{A}}}\) . This procedure is to be implemented on emergent quantum computers with on-demand thermalizable qubits. Methods LMR procedure The LMR procedure goes as follows: one considers the combined state of the system in question and an ancilla, \(\hat{\rho }\otimes \hat{\sigma }\) , and performs the joint unitary evolution over an infinitesimal time instant δ t under a Hamiltonian \(\hslash \omega \hat{S}\) , $$\hat{\rho }\otimes \hat{\sigma }\to \exp \left(-i\omega \delta t\hat{S}\right)\left[\hat{\rho }\otimes \hat{\sigma }\right]\exp \left(+i\omega \delta t\hat{S}\right),$$ (20) where \(\hat{S}\) is the unitary SWAP operator 29 acting on the system and ancilla: \(\hat{S}\left({\left|x\right\rangle }_{{\mathcal{S}}}\otimes {\left|y\right\rangle }_{a}\right)={\left|y\right\rangle }_{{\mathcal{S}}}\otimes {\left|x\right\rangle }_{a}\) . 
The operator \(\hat{S}\) is itself Hermitian and, therefore, the unitary operator \(\exp (-i\omega \delta t\hat{S})\) can be implemented. Making use of its property \({\hat{S}}^{2}=\hat{1}\) , one gets \(\exp (-i\omega \delta t\hat{S})=\hat{1}\cos (\omega \delta t)-i\hat{S}\sin (\omega \delta t)\) and, therefore, its computational complexity is equivalent to the complexity of the unitary swap operator acting on the direct product of Hilbert spaces with the dimensions \(\dim {\mathcal{S}}\) . Next, we trace out the ancilla and get the quantum channel for the system’s density matrix $$\begin{array}{c}\rho \to {\Phi }_{\delta t}[\hat{\rho }]={\cos }^{2}(\omega \delta t)\hat{\rho }+{\sin }^{2}(\omega \delta t)\hat{\sigma }\\ -i\sin (\omega \delta t)\cos (\omega \delta t)\left[\hat{\sigma },\hat{\rho }\right].\end{array}$$ (21) At the infinitesimal time instant ω δ t → 0, one gets the channel, \({\Phi }_{\delta t}\left[\hat{\rho }\right]=\hat{\rho }-i\omega \delta t\left[\hat{\sigma },\hat{\rho }\right]+{(\omega \delta t)}^{2}\left(\hat{\sigma }-\hat{\rho }\right)\) . In this expression, the term \(\hat{\rho }-i\omega \delta t\left[\hat{\sigma },\hat{\rho }\right]\) corresponds to the infinitesimal time evolution of the density matrix \(\hat{\rho }(t)\) under the Hamiltonian \({\hat{H}}_{\sigma }=\hslash \omega \hat{\sigma }\) : \(\hat{\rho }(t)\to \hat{\rho }(t+\delta t)={e}^{-i{\hat{H}}_{\sigma }\delta t/\hslash }\hat{\rho }(t){e}^{i{\hat{H}}_{\sigma }\delta t/\hslash }\approx \hat{\rho }(t)-i\omega \delta t\left[\hat{\sigma },\hat{\rho }\right]-\frac{1}{2}{(\omega \delta t)}^{2}\left[\hat{\sigma }\left[\hat{\sigma },\hat{\rho }\right]\right]\) . Therefore, up to the ( ω δ t ) 2 terms, Eq. 
( 21 ) can be transformed into the exponential form $${\Phi }_{\delta t}\left[\hat{\rho }(t)\right]\approx {e}^{-i\omega \delta t\hat{\sigma }}\hat{\rho }(t){e}^{+i\omega \delta t\hat{\sigma }}+{\left(\omega \delta t\right)}^{2}\left(\hat{\sigma }+\frac{1}{2}\left[\hat{\sigma }\left[\hat{\sigma },\hat{\rho }(t)\right]\right]-\hat{\rho }(t)\right).$$ (22) Repeating the above procedure N times, one can generate the forward time evolution \(\exp (-i\omega \tau \hat{\sigma })\) over a finite time interval τ = N δ t $${\Phi }_{\delta t}^{N}[\hat{\rho }(t)]\approx \exp (-i\omega \tau \hat{\sigma })\hat{\rho }\exp (+i\omega \tau \hat{\sigma })+\delta \hat{\rho }(\tau ),$$ (23) where the approximation error is given by $$\delta \hat{\rho }(\tau )=\frac{{\left(\omega \tau \right)}^{2}}{N}\left(\hat{\sigma }+\frac{1}{2}\left[\hat{\sigma }\left[\hat{\sigma },\hat{\rho }(t+\tau )\right]\right]-\hat{\rho }(t+\tau )\right),$$ (24) with \(\hat{\rho }(t+\tau )=\exp (-i\omega \tau \hat{\sigma })\hat{\rho }(t)\exp (+i\omega \tau \hat{\sigma })\) being the final state of the system. High-entropy-state-reversal complexity Here, we derive Eq. ( 7 ) for the time-reversal complexity of a state \(\hat{\rho }\) with the entropy \(S={\mathrm{ln}}\,N-k \, \mathrm{ln}\,(2)\) , where \(N=\dim ({\mathcal{S}})\) is the Hilbert space dimension of the system. The norm \(| | \hat{\rho }| |\) is given by the maximum eigenvalue of the density operator, \(| | \hat{\rho }| | ={p}_{1} \, > \, {p}_{i}\) , i = 2, … N . The von Neumann entropy can be decomposed into the sum $$S=H({p}_{1})-(1-{p}_{1})\mathop{\sum }\limits_{i = 2}^{N}{\tilde{p}}_{i} \, {\mathrm{ln}}\,({\tilde{p}}_{i}),$$ (25) where \({\tilde{p}}_{i}={p}_{i}/(1-{p}_{1})\) with \(\mathop{\sum }\nolimits_{i = 2}^{N}{\tilde{p}}_{i}=1\) , \(H(x)=-x{\mathrm{ln}}\,(x)-(1-x){\mathrm{ln}}\,(1-x)\le {\mathrm{ln}}\,(2)\) . Let us find the maximal possible p 1 for a given S . 
One sees straightforwardly that p 1 is maximal if all \({\tilde{p}}_{i}\) , i = 2, … N are uniform and Eq. ( 25 ) is reduced to $${\mathrm{ln}}\,(N)-k\,{\mathrm{ln}}\,(2)=H({p}_{1})+(1-{p}_{1}){\mathrm{ln}}\,(N-1).$$ (26) For N ≫ 1, one can assume p 1 ≪ 1 and get the approximate solution \({p}_{1}\approx k/{\mathrm{log}\,}_{2}(N)\) that results in Eq. ( 7 ). Data availability Data sharing is not applicable to this article, as no data sets were generated or analyzed during this study.
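The LMR channel of Eq. (21) and the 1/N error scaling of Eq. (24) are straightforward to check numerically. The sketch below is an illustration, not code from the paper: it iterates the channel on a random 4-dimensional mixed state and compares the result with the exact unitary evolution generated by Ĥ_σ = ℏωσ̂; the operator-norm deviation halves when N doubles.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(d):
    # A random full-rank density matrix: rho = A A† / Tr(A A†)
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def lmr_step(rho, sigma, phi):
    # One application of the reduced LMR channel of Eq. (21), with phi = omega*dt
    c, s = np.cos(phi), np.sin(phi)
    return c**2 * rho + s**2 * sigma - 1j * s * c * (sigma @ rho - rho @ sigma)

d, omega, tau = 4, 1.0, 1.0
rho0, sigma = random_density(d), random_density(d)

# Exact evolution under H_sigma = hbar*omega*sigma (sigma is Hermitian)
w, v = np.linalg.eigh(sigma)
u_exact = v @ np.diag(np.exp(-1j * omega * tau * w)) @ v.conj().T
target = u_exact @ rho0 @ u_exact.conj().T

def lmr_error(n_steps):
    rho = rho0
    for _ in range(n_steps):
        rho = lmr_step(rho, sigma, omega * tau / n_steps)
    return np.linalg.norm(rho - target, 2)  # spectral norm of the deviation

e100, e200 = lmr_error(100), lmr_error(200)
print(e100 / e200)  # close to 2: the error falls off as 1/N, as in Eq. (24)
```

Doubling the number of infinitesimal LMR steps halves the deviation from the exact unitary evolution, in line with the (ωτ)²/N prefactor of Eq. (24).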
Physicists have long sought to understand how the irreversibility of the surrounding world emerges from the time-symmetric, fundamental laws of physics. According to quantum mechanics, a conceptual reversal of time requires extremely intricate and implausible scenarios that are unlikely to occur spontaneously in nature. Physicists had previously shown that, while time reversibility is exponentially improbable in a natural environment, it is possible to design an algorithm that artificially reverses the arrow of time for a known or given state within an IBM quantum computer. However, that version of the reversed arrow of time only covered a known quantum state and was therefore likened to the quantum version of pressing rewind on a video to "reverse the flow of time." In a new report now published in Communications Physics, physicists A.V. Lebedev and V.M. Vinokur, together with colleagues in materials, physics and advanced engineering in the U.S. and Russia, built on their previous work to develop a technical method for reversing the temporal evolution of an arbitrary unknown quantum state. The technical work will open new routes for general universal algorithms to send the temporal evolution of an arbitrary system backward in time. The work only outlines the mathematical process of time reversal, without experimental implementation. The arrow of time and developing a time-reversal protocol The arrow of time expresses the one-way direction of time dictated by the second law of thermodynamics, which implies that entropy growth stems from the system's energy dissipation to the environment. Scientists can therefore consider energy dissipation relative to the system's entanglement with the environment. 
Previous research focused solely on the quantum viewpoint of the arrow of time and on understanding the effects of the Landau-Neumann-Wigner hypothesis in order to quantify the complexity of reversing the arrow of time on an IBM quantum computer. In the present work, the scientists propose using a thermodynamic reservoir at finite temperature to form a high-entropy stochastic bath that thermalizes a given quantum system and experimentally increases thermal disorder, or entropy, in the system. Experimentally, however, the IBM computers do not support thermalization, which forms the first step in the currently proposed cycle. In theory, the presence of the thermal reservoir unexpectedly made it possible to prepare high-temperature thermal states of an auxiliary (alternative) quantum system elsewhere, governed by the same Hamiltonian (an operator corresponding to the sum of the kinetic and potential energies of all particles in the system). This allowed Lebedev and Vinokur to mathematically devise an operator of backward-time evolution to reverse the chronological dynamics in a given quantum system. Universal procedure and the auxiliary system The team defined the universal time-reversal process of an unknown quantum state using the density matrix of a quantum system (a mixed state) to describe the reversal of the system's temporal evolution back to its original state. The quantum state of the new system could remain unknown while implementing the reversal of the arrow of time. In contrast to the previous protocol for time reversal of a known quantum state, the initial state did not have to be a pure, uncorrelated state either; it could remain a mixed state, correlated with past interactions with the environment. The team noted reduced time-reversal complexity for a mixed high-entropy state in the system. Lebedev et al. drew upon the reversal procedure previously detailed by S. Lloyd, Mohseni and Rebentrost (LMR procedure) to construct or map the initial density matrix. 
The LMR procedure considered the combined arrangement of the system in question and an ancilla to accomplish reversible computation. The experimental system will be equipped with a thermodynamic bath to thermalize the ancilla and provide the desired state for reverse evolution. The hotter the system, the more chaotic it would become. By using a heat reservoir to expose the auxiliary system to an extremely high temperature, Lebedev et al. paradoxically aim to experimentally observe the primary system's cold and ordered past using the LMR formula. The authors reason that a universal time reversal algorithm can run a computation in reverse, without a specific quantum state to rewind to, as long as the algorithm facilitates time reversal to its point of origin. Computational complexity of the time-reversal procedure The work only outlined the mathematical analysis of time reversal without specifying experimental implementations. While exercising time reversal, the proposed system continued to maintain the forward evolution governed by its own Hamiltonian. The computational complexity of time reversal for an unknown quantum state was proportional to the square of the system's Hilbert space dimension (an abstract vector space). To accomplish this in practice, the experimental system will require a natural system that evolves under an unknown Hamiltonian alongside thermalization, which quantum computers do not support, paired with universal quantum gates to achieve time reversal. As a result, practical implementation of this work will require an upgrade to existing quantum computers to meet the outlined requirements. A route to upgrade the existing design of quantum chips Lebedev et al. therefore aim to upgrade the existing design of quantum chips to achieve a set of interacting qubits (quantum bits) that can thermalize on-demand in a high-temperature environment. 
To accomplish this, superconducting qubits can be coupled to a transmission line into which high-temperature thermal radiation is fed to set the qubits into a high-temperature state. Thereafter, they will require a second set of qubits that can store the quantum state prepared in the original set of qubits. When the original set of qubits is then experimentally thermalized to implement the joint LMR evolution, the second set of qubits will undergo time-reversed dynamics under the same Hamiltonian, returning toward the original state. If accurately implemented, the proposed mechanism will also facilitate error correction in an upgraded quantum computer, confirming its correct function. Lebedev et al. envision implementing the procedure on emergent computers with on-demand thermalizable qubits. In this way, Lebedev and Vinokur demonstrated the time-reversal procedure for an unknown mixed quantum state. The process relies on the ability to perform the LMR protocol and on the existence of an ancilla system whose dynamics is governed by the same Hamiltonian as that of the reversed system. To accomplish the reversal procedure, the LMR protocol must be applied sequentially to the joint state of the system and the ancilla, prepared in a thermal state. The work developed a formula for the number of cycles that must be repeated to reverse the state of a given system toward earlier states in the past. This number will depend on the system's complexity and how far back in time it is supposed to go. When implementing the time-reversal protocol, the operation rate of the LMR procedure should be sufficiently high to overrun the forward time evolution of the reversed system.
10.1038/s42005-020-00396-0
Medicine
Motivation depends on how the brain processes fatigue
Tanja Müller et al, Neural and computational mechanisms of momentary fatigue and persistence in effort-based choice, Nature Communications (2021). DOI: 10.1038/s41467-021-24927-7 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-24927-7
https://medicalxpress.com/news/2021-07-brain-fatigue.html
Abstract From a gym workout, to deciding whether to persevere at work, many activities require us to persist in deciding that rewards are ‘worth the effort’ even as we become fatigued. However, studies examining effort-based decisions typically assume that the willingness to work is static. Here, we use computational modelling on two effort-based tasks, one behavioural and one during fMRI. We show that two hidden states of fatigue fluctuate on a moment-to-moment basis on different timescales but both reduce the willingness to exert effort for reward. The value of one state increases after effort but is ‘recoverable’ by rests, whereas a second ‘unrecoverable’ state gradually increases with work. The BOLD response in separate medial and lateral frontal sub-regions covaried with these states when making effort-based decisions, while a distinct fronto-striatal system integrated fatigue with value. These results provide a computational framework for understanding the brain mechanisms of persistence and momentary fatigue. Introduction Most daily tasks require the exertion of effort over an extended period of time. From a workout at the gym to deciding whether to persist with a task at work, many of our activities require us to keep deciding whether the effort is ‘worth it’. However, declines in our willingness to work often co-occur with sensations of ‘fatigue’. Such sensations are a common debilitating symptom across many psychiatric and neurological disorders and have dramatic impacts on levels of daily activity 1 , 2 , 3 . Research has begun to provide a richer understanding of the computational and neural mechanisms underlying how people value, and decide, whether a given amount of effort is ‘worth it’ for a certain magnitude of reward 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 . However, such studies have implicitly either not accounted for the effects of fatigue or have attempted to control for its potential effects. Yet, the willingness to work is not static 13 . 
Sometimes even though the objective difficulty of a task remains the same individuals give up or take a break 14 , 15 , 16 , 17 , 18 . What are the hidden internal states that change how we subjectively value effort over time and prevent us from persisting? Theories suggest that the willingness to work can be characterised by cost-benefit trade-offs, where the value of a reward is subjectively discounted by the effort required to obtain it. Theoretically, we are willing to work when we consider the value of a reward worth the effort we have to exert to obtain it 13 , 19 . Although a number of factors can influence such valuations, it has been argued that sensations of fatigue induced by the exertion of effort can lead to subsequent reductions in the willingness to work. Theoretically, as fatigue intensifies, it increases the devaluation of rewards by effort, leading to reductions in the willingness to persist with a task, with less rewarding and more effortful acts likely to be avoided 13 , 20 , 21 , 22 . In addition, time spent resting can have a restorative effect 18 , reducing fatigue and concomitantly increasing the willingness to exert effort to obtain rewards. Despite these claims being at the cornerstone of theoretical accounts, few studies have directly tested these tenets. Existing research has shown that higher levels of fatigue are related to a reduced willingness to exert effort for rewards 18 , 23 . But little work has examined the dynamic, moment-to-moment changes in how willing we are to decide that a reward is worth it for the effort 13 , 19 , 20 , 21 , 24 . Moreover, separate lines of evidence suggest that fatigue may be comprised of distinct components that operate on different timescales. There are short-term increases in fatigue during tasks, which can be reduced by short periods of rest ( recoverable fatigue ( RF )) 14 . 
In addition, there are also longer-term changes that occur after extended periods of exertion for which simply resting may not lead to restoration ( unrecoverable ) 25 . As they increase, both components putatively also increase the devaluation of rewards by effort. However, although these components have been examined separately, there has yet to be a formal framework that unifies them, quantifies dynamic changes in fatigue, and captures how fatigue shifts the value people attribute to exerting effort to obtain rewards on a moment-to-moment basis. Previous neurophysiological and neuroimaging accounts have highlighted a core system in the brain that processes the costs and benefits of engaging in effortful activities. Activity in sub-regions of the supplementary motor area (SMA)/anterior cingulate cortex (ACC), the middle frontal gyri (MFG), frontal pole (FP), and ventral striatum (VS) has been implicated in computing value and motivating the exertion of effort 4 , 5 , 7 , 26 , 27 , 28 , 29 , 30 , 31 , 32 . Evidence suggests that these regions also change their response with time on task 13 , 15 , 17 . However, it is unclear how this distributed system changes on a moment-to-moment basis, and how that might lead to changes in the value ascribed to exerting effort for reward. Do separate sub-regions within this network signal fatigue on different timescales and integrate this into computations of value? Here, we hypothesise that the brain regions outlined above, which have previously been linked to effort-based decision-making, covary differentially with RF, unrecoverable fatigue (UF), and the momentary value of working (fatigue-weighted value). To test this notion, we designed an effort-based decision-making task (Fig. 1 ) where participants had to exert physical effort (grip force) to obtain rewards (credits) whilst undergoing functional Magnetic Resonance Imaging (fMRI). 
On each trial of this task, they chose between a 5 s ‘rest’ (no effort, low reward [1 credit]) and 5 s of ‘work’ which varied in effort (30–48% of maximum grip strength) and reward (6–10 credits). Using a computational model combining previous cost-benefit valuation models with latent fatigue variables, we predicted how effort-based decisions would change across the task, as well as in a separate behavioural study where people rated their momentary fatigue ( Supplementary Materials ). Using this design, we were then able to identify and dissociate brain regions in which the blood oxygen level dependent (BOLD) signal varied with the hidden variables of RF, UF, and fatigue-weighted subjective value (SV) on a trial-by-trial basis at the time of making effort-based decisions. Fig. 1: Trial structure and experimental design. a Participants made choices between a work offer and a rest, over 216 trials. Rest was always worth a low reward (1 credit), with the work offer varying in reward (6, 8, 10 credits) and effort (30, 39, 48% of maximum voluntary contraction [MVC] on a hand-held dynamometer). Each participant’s MVC was obtained prior to the experiment, in order to set the force levels idiosyncratically for each participant based on their grip strength. Effort levels were depicted as the number of elements in a pie chart—more elements signified higher effort. Participants performed a training session to familiarise themselves with how much force was required at each level of effort and to associate those effort levels with the corresponding pie charts prior to entering the scanner. The location (left/right) of the work offer and rest was randomised across trials. After making a choice, it was highlighted by a green frame. Participants then either rested or exerted the effort that was offered for 5 s. 
To obtain the credits offered, participants had to exert the required force, indicated by a yellow line, for a sum total of 3 s out of the 5 s window, with a red colouring providing online visual feedback. If unsuccessful, they would receive 0 credits; if successful, they would receive the credits of the offer that was chosen. The offer period was jittered independently of the other events, allowing us to examine activity time-locked to effort-based decisions. In a pre-task, only a random subset of 10% of trials was selected on which participants were required to exert the effort (or rest) to obtain rewards. b Offers in the main and pre- tasks. Offers in the main task (dark orange) were restricted to those highest in value (higher reward, lower effort) to ensure that participants did not rest in any of the options in the main task simply because they would never value the options as worth working for. In a pre-task conducted outside the scanner, a wider range of offers (light orange) was included to ensure that each participant’s effort-discounting behaviour could be quantified when they were not fatigued. Full size image We find distinct contributions of the MFG, and two distinct sub-regions of the medial frontal cortex, in which activity covaries with the hidden ‘fatigue’ states that modulate the willingness to exert effort for reward. In addition, we distinguish these regions from a fronto-striatal system that integrates these hidden states with reward and effort to reflect the current value of working. The same computational model could also explain trial-to-trial ratings of fatigue, highlighting that the effects on effort-based decisions may be directly linked to sensations of fatigue. Our approach dissects hidden variables that underlie moment-to-moment changes in fatigue and effort-based decisions, providing a framework for examining how fatigue impacts behaviour in health and disease. 
Results The aim of this study was to examine how the value attributed to exerting effort to obtain rewards changes on a moment-to-moment basis, due to fluctuations in internal states putatively linked to fatigue. We designed an effort-based decision-making task in which participants would choose between 5 s of ‘work’ or 5 s of ‘rest’ whilst undergoing fMRI (Fig. 1 ; Supplementary Fig. 1 ). Rest required no exertion, but only resulted in accruing a small reward (1 credit). The work offer varied on every trial in both reward magnitude (6, 8, 10 credits), as well as the amount of physical effort (grip force) required to obtain it. These effort levels were calibrated to participants’ own maximum grip strength (30, 39, 48% of maximum voluntary contraction [MVC]). Participants had to exert force at the required level for a total of 3 s out of the 5 s window in order to receive the reward. Failure to do so resulted in 0 credits. Although these effort levels were demanding, they were easily achievable, and participants were successful at executing the required levels of force on over M > 96.8% (SD < 4.6) of trials at all levels of grip force. In addition, we required participants to rate their level of ‘tiredness’—a more commonly used synonym of fatigue—between 0 and 10 before the start of the main task, and then again after completion of the experiment. Although participants could freely choose to rest, and thus prevent a significant build-up of fatigue, a repeated measures t -test revealed that ratings of fatigue were higher at the end of the experiment than at the beginning ( t (34) = 4.27, two-tailed p < 0.001, Cohen’s d = 0.72, 95% CI = [0.54, 1.52]; Supplementary Fig. 2 ). Effort discounting and persistence depend on the history of effort exerted We hypothesised that people’s willingness to exert effort for reward would change across trials. 
More specifically, our computational model built around theories of fatigue suggests that exerting effort increases levels of fatigue, and when fatigue is higher, the same levels of effort and reward would have a reduced value compared to when fatigue is low. Such an account would predict that (i) participants would be more likely to work in situations where fatigue is low and (ii) people would shift their valuations, such that while higher effort/lower reward offers would be worth working for at some times, they would be avoided in favour of a rest at other times. To test the first of these predictions, we compared choices in the main part of the experiment, where every choice of work resulted in the requirement to exert force for reward, with a pre-task in which only a random 10% of trials resulted in the requirement to exert force (Fig. 1 ). The pre-task also contained a wider range of offers, including options that were lower in reward (2, 4, 6, 8, 10 credits), but higher in effort (30, 39, 48, 57, 66% MVC), to ensure we could capture the full range of variability in people’s tendency to discount rewards by effort. This pre-task therefore allowed us to measure people’s tendency to discount rewards by effort in a task where little fatigue would be accrued. First, we wanted to show that in this pre-task, participants were discounting rewards by effort. A logistic regression (with a Wilcoxon test across participants) on choices to work or rest showed evidence of this, with participants more likely to choose ‘work’ at higher rewards ( Z = 4.43, p < 0.001, 95% CI = [1.15, 2.17]), but less likely to work at higher efforts ( Z = −5.11, p < 0.001, 95% CI = [−2.55, −1.35]). However, despite showing effort-discounting effects, participants chose to work on almost all of the trials ( M ≥ 95.4%) for each combination of the higher reward (6–10 credits) and lower effort levels (30–48% MVC) that were included in the main task (Fig. 2 ). 
Thus, when little fatigue could accumulate, participants valued these offers consistently higher than the value of rest. Fig. 2: The shifting of subjective value of effort and reward. a Proportions (means) of choices to work in the pre-task (where only 10% of trials resulted in subsequent work or rest). Participants were more likely to choose to work at higher reward and lower effort. The ‘high value’ work options (inside the dotted line) were almost always chosen as worth working for. b Logistic regression on the pre-task choice data shows significant positive effects of reward and negative effects of effort. The asterisks show significant t -scores (two-tailed p < 0.001) and the error bars represent SEM; n = 36. c Proportions (means) of choices to work in the main task, where all choices resulted in subsequent work or rest. Some of the high value options previously worth working for are now sometimes avoided. The lowest value of the higher value options (48% effort, 6 credits) is often avoided in favour of a rest in the main task. d Percentage of participants who accepted the work offer, illustrated separately for each trial in the main task. Values reflect the consistency with which trials were accepted or rejected across the experiment. This shows considerable variability in choices, but high levels of choices to work in early trials, rather than late. e Proportion (mean) of accepting the work offers in first and second half of the main task. Gradually increasing effort discounting reflects a shift in valuation of previously high valued offers across time in the main task. f Logistic regression predicting choices to work or rest in the main task. The effort level in the current trial’s work offer interacts with the sum total of effort accumulated up to that trial (cumulative effort) to predict choice. The asterisks show significant t -scores (two-tailed p < 0.001) and the error bars represent SEM; n = 36. 
The second claim predicted by our model is that the value of exerting effort for rewards declines as fatigue accumulates, i.e. that participants would shift the value they ascribed to work offers across the main task. To test this, we performed a logistic regression on choices to work or rest (with a Wilcoxon test across participants), using effort and reward (offered on each trial), cumulative effort (the sum total of effort exerted on all previous trials) and the corresponding interactions as predictors (Fig. 2). Consistent with a dynamic change in the value of working, there was a significant interaction between effort and cumulative effort, as well as main effects of effort, reward and cumulative effort (cumulative effort × effort: Z = −4.19, p < 0.001, 95% CI = [−1.39, −0.47]; effort: Z = −4.35, p < 0.001, 95% CI = [−1.83, −0.96]; reward: Z = 3.96, p < 0.001, 95% CI = [0.69, 1.71]; cumulative effort: Z = −3.83, p < 0.001, 95% CI = [−2.45, −0.84]). The three-way interaction (Z = −0.27, p = 0.79, 95% CI = [−0.25, 0.37]) and the cumulative effort × reward interaction (Z = 1.40, p = 0.16, 95% CI = [−0.06, 0.42]) were not significant. Importantly, offers that were considered higher in value and were chosen at a high rate in the pre-task became gradually less likely to be selected across trials in the main task, with effort and reward having strong effects even in the last 27 trials of the experiment (Fig. 2; Supplementary Fig. 2). Such findings are inconsistent with boredom or other factors leading to generally noisier or more random behaviour. Instead, these results indicate that participants changed their subjective valuation of working across trials, with accumulated effort increasing the discounting effect of effort and reducing the value of working across the experiment.
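The trial-wise logistic regression described above can be sketched as follows. This is a minimal illustration on simulated choices, not the paper's data or analysis code: the generative weights, the predictor construction and the pure-numpy gradient-ascent fitter are all assumptions, chosen only to mirror the reported sign pattern (effort negative, reward positive, cumulative effort × effort negative).

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit a logistic regression of choice (work = 1, rest = 0) by gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted P(work)
        w += lr * Xb.T @ (y - p) / len(y)        # average log-likelihood gradient
    return w   # [intercept, effort, reward, cumulative effort x effort]

rng = np.random.default_rng(0)
n = 1000
effort = rng.choice([30.0, 39.0, 48.0], n)
reward = rng.choice([6.0, 8.0, 10.0], n)
cum_effort = np.cumsum(effort) - effort          # effort accumulated before each trial
z = lambda v: (v - v.mean()) / v.std()
X = np.column_stack([z(effort), z(reward), z(cum_effort) * z(effort)])

# Hypothetical generative weights mirroring the reported sign pattern
true_w = np.array([1.0, -1.5, 1.0, -0.8])
p_work = 1.0 / (1.0 + np.exp(-(true_w[0] + X @ true_w[1:])))
choice = (rng.random(n) < p_work).astype(float)

w_hat = fit_logistic(X, choice)
```

In the paper this regression was fitted per participant and the coefficients tested across participants with a Wilcoxon test; the sketch fits a single pooled dataset for brevity.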
Computational modelling: unrecoverable and recoverable fatigue states impact the subjective value of work

To more precisely quantify changes in the valuation of effort across the task, we developed a computational model of fatigue and its effect on effort-based decisions (Fig. 2; Supplementary Fig. 3). We hypothesised that fatigue has several components that impact on the willingness to exert effort, each operating on a different timescale. We predicted that the value of working fluctuates on a short-term basis due to a build-up of recoverable fatigue (RF) after exerting effort that is recovered by rest 14 . A separate line of evidence suggests that demanding tasks also cause unrecoverable 'executive' fatigue (UF) that cannot easily be restored just by taking some time resting 25 . Here, we formalised a computational model of how these two sources of fatigue fluctuate trial-to-trial during the task (Fig. 3; see Methods). This model contained three free parameters estimated on choices to work or rest in the main task. One parameter ( \(\alpha\) ) scaled the amount that RF increased with the exertion of effort, and a second ( \(\delta\) ) scaled the amount that RF was reduced by time spent resting. The third parameter ( \(\theta\) ) scaled the amount that UF increased with exerted effort. The fluctuating recoverable, and gradually increasing unrecoverable, fatigue values were integrated into a parabolic effort-discounting model 4 , 33 , 34 , in which rewards were devalued more steeply by effort as levels of UF and RF increased. To account for individual differences in effort-discounting when participants were not fatigued, we also included a free parameter ( \(k\) ) that scaled how much participants devalued rewards by effort; this was fitted to choices in the pre-task and carried over as a fixed parameter when modelling main-task choices. Fig. 3: Modelling the fatigue-weighted subjective value of effort and reward. a List of main models compared.
All models assume that rewards ( R ) increase subjective value ( SV ), effort ( E ) decreases SV, and people discount the rewards by effort idiosyncratically—modelled with a discount parameter ( k ) fitted to the pre-task. Two null models assume that the willingness to exert effort for reward remains static across the trials of the main task, either with the same discounting parameter as for the pre-task ( k ; model 1) or with a new discounting parameter ( \(\gamma\) ; model 2). Models 3–5 capture changes in effort discounting due to fatigue. The full model (model 5) assumes that exerting effort increases recoverable fatigue ( RF ), but time ( T rest ) spent resting decreases it. Both of these are scaled for each participant by two corresponding free parameters that define a person’s short-term fatigability ( \(\alpha\) , \(\delta\) ). Unrecoverable fatigue also increases through exerted efforts, but never declines. This is also weighted by an idiosyncratic free parameter ( \(\theta\) ), which defines long-term fatigability. These are summed to create F , which then serves to increase the discounting of rewards by effort as they increase. Models 3 and 4 include only the effects of UF or RF . b Schematic representation for how F (both UF and RF combined) affect value and choices to work or rest, with greater discounting of rewards by effort as fatigue increases. c Model comparison highlights that the full model is a better predictor of choice data than the simpler models in terms of AIC (left) and in exceedance probabilities (right), highlighting that both RF and UF are necessary to understand the willingness to exert effort. Star indicates winning model. d Furthermore, the model-estimated RF and UF —from model 5—significantly interacted with effort to predict choice behaviour in a logistic regression. The asterisks show significant t -scores (two-tailed p < 0.001) and the error bars represent SEM; n = 36. 
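The dynamics described above can be written as simple update rules. The following is an illustrative sketch only: the linear updates, the additive combination of RF and UF, the parabolic discount term SV = R − (k + RF + UF)·E², and the softmax comparison against the 1-credit rest option are assumptions consistent with the verbal description; the paper's exact equations are given in its Methods.

```python
import math

def simulate_full_model(trials, k=0.5, alpha=1.0, delta=0.1, theta=0.2, beta=1.0):
    """Sketch of the full model (model 5) under assumed functional forms:
    RF rises by alpha * E after work and falls by delta * T_rest after rest
    (floored at zero); UF rises by theta * E and never declines; the offer is
    discounted parabolically, SV = R - (k + RF + UF) * E**2, and a softmax
    compares it with the 1-credit rest option.
    trials: list of (effort_fraction_of_MVC, reward, worked, t_rest) tuples.
    Returns per-trial (RF, UF, SV_work, P(work)), taken before that trial's update."""
    RF, UF = 0.0, 0.0
    history = []
    for effort, reward, worked, t_rest in trials:
        sv_work = reward - (k + RF + UF) * effort ** 2
        p_work = 1.0 / (1.0 + math.exp(-beta * (sv_work - 1.0)))
        history.append((RF, UF, sv_work, p_work))
        if worked:
            RF += alpha * effort      # short-term fatigue builds with work...
            UF += theta * effort      # ...and so does long-term fatigue
        else:
            RF = max(0.0, RF - delta * t_rest)  # rest restores RF, never UF
    return history

# Five work trials, one rest (8 s), then one more work trial at the same offer
offers = [(0.48, 6, True, 0)] * 5 + [(0.48, 6, False, 8), (0.48, 6, True, 0)]
history = simulate_full_model(offers)
```

Because UF never declines, the same offer is worth progressively less over the session, while rest trials transiently restore value through RF.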
To test whether shifts in people's valuations of effort across the main task were related to hidden recoverable and unrecoverable states, and whether three parameters were necessary to explain changes in the willingness to work, we performed model comparison. The results showed that the full model fitted participants' decisions to work better than four other models in a factorial design (Fig. 3), as well as two further mathematically plausible models in which fewer parameters scaled, or which removed, the unrecoverable and recoverable components (Supplementary Table 5). Findings were comparable when using BIC or AIC, suggesting that the extra parameters in the full model increased explanatory power. In addition, to test whether the full model was the most frequent in the population, we calculated exceedance probabilities for each model 35 . The full model had the highest probability of being the most frequently best-fitting model to participants' choice data (Fig. 3 and Supplementary Table 5). This suggests that participants made decisions to work based on a fatigue-weighted value, where fatigue depended on recoverable and unrecoverable states. Notably, the UF parameter ( \(\theta\) ) also correlated with the change in ratings of fatigue taken before and after the main task ( r s = 0.361, two-tailed p = 0.033, 95% CI = [0.032, 0.620]), suggesting that choice behaviour may have been linked to sensations of fatigue (Supplementary Results). To test that this model not only outperformed the alternatives but also significantly predicted choice behaviour, we performed a logistic regression on work versus rest decisions, including z-scored reward and z-scored interactions of effort with trial-by-trial model-estimated RF and UF as predictors.
As in the previous logistic regressions (with a Wilcoxon test across participants), reward significantly predicted choice ( p < 0.001), but crucially there were also significant negative interactions of effort with both fatigue components (RF × effort: Z = −4.98, p < 0.001, 95% CI = [−4.93, −2.98]; UF × effort: Z = −5.17, p < 0.001, 95% CI = [−5.06, −3.39]). This was the case whether using the average estimated fatigue across participants or the model's idiosyncratic estimate of fatigue for each participant (all p s < 0.001). Thus, when the levels of fatigue in the model were higher, there was a greater tendency to rest, particularly when higher effort levels were on offer. Therefore, the willingness to exert effort for reward is not static but fluctuates moment-to-moment, and when the fatigue states in our model are higher, rewards are discounted more steeply by effort. To further examine whether the computational model was able to capture sensations of fatigue, we performed an additional, similar behavioural experiment (Supplementary Methods; Supplementary Fig. 5). In this study, participants ( n = 40) performed a task with effort (0, 30, 39, 48% MVC) and reward levels (6, 8 or 10 credits) identical to those used in the main task of the fMRI experiment. However, rather than freely choosing whether to work or rest on each trial, they were required to exert a given level of effort (or take a rest) and then rate their level of 'tiredness' (a synonym for fatigue) on each trial. The computational model predicts that fatigue ratings would (i) increase as a function of effort exerted, (ii) decrease after a trial of rest, (iii) build up in a manner best characterised by both RF and UF components, and (iv) change independently of reward.
In line with the predictions of our model, we found a significant effect of effort on trial-by-trial changes in fatigue ratings and a significant reduction in ratings after a trial of rest, but no significant effect of reward on ratings (Supplementary Fig. 6). To directly test these claims, we fitted the three models designed to capture changes in fatigue, previously fitted to effort-based choices in the fMRI experiment, to the trial-by-trial ratings in the behavioural study (Supplementary Fig. 7 and Supplementary Table 5). The full model, containing separate RF and UF parameters, explained the ratings better than the other models. These results support the notion that our model captures trial-by-trial changes in fatigue induced by effort, as well as its effects on the value ascribed to exerting effort for reward.

fMRI results

We hypothesised that regions of the brain previously linked to effort-based decision-making would be dissociable in terms of signalling different hidden states within the computational model. That is, the BOLD signal in some regions would fluctuate with levels of recoverable and unrecoverable fatigue. To test this notion, we fitted parametric trial-by-trial regressors of the model-estimated UF and RF, time-locked to the moment when participants were presented with the work and rest offers. Although this was the moment when people were evaluating the work offer, these regressors carried information only about levels of fatigue (the history of exerted effort) and thus were not correlated with the effort level and reward of the work offer on the current trial. In addition, we examined activity covarying trial-by-trial with the model-estimated, fatigue-weighted SV of work, time-locked to the same event. These parametric regressors were not strongly correlated ( r < 0.4, see Methods) and thus activity independently covarying with each could be identified.
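A separability check of the kind mentioned above (pairwise r < 0.4 between parametric regressors) can be computed directly from the trial-wise vectors. The regressor values below are simulated placeholders, not the empirical RF, UF and SV time courses:

```python
import numpy as np

def max_pairwise_r(regressors):
    """Largest absolute pairwise Pearson correlation among a dict of
    trial-wise parametric regressors (each a 1-D array of equal length)."""
    rows = np.vstack(list(regressors.values()))
    R = np.corrcoef(rows)                                  # variables as rows
    off_diag = np.abs(R[np.triu_indices(len(regressors), k=1)])
    return float(off_diag.max())

# Illustrative trial-wise vectors standing in for the empirical time courses
rng = np.random.default_rng(1)
n_trials = 144
UF = np.cumsum(rng.uniform(0.2, 0.5, n_trials))                    # slow accumulation
RF = np.abs(np.sin(np.linspace(0, 20, n_trials))) + rng.normal(0, 0.2, n_trials)
SV = rng.uniform(1, 10, n_trials) - 0.05 * (UF + RF)               # value minus fatigue cost

r_max = max_pairwise_r({"RF": RF, "UF": UF, "SV": SV})
```

A design would pass the criterion stated in the text whenever `r_max` stays below 0.4.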
Distinct regions signal hidden recoverable and unrecoverable fatigue states during choice

To test our hypotheses, we first wanted to examine whether distinct regions signalled fatigue states on different timescales. We therefore examined voxels in which activity at the whole-brain level, and within our hypothesised regions of interest (ROI; see Methods), significantly covaried with parametric regressors reflecting the trial-by-trial values of RF, UF and SV. We then tested whether the same voxels significantly covaried with one parametric regressor and did not significantly covary with the others. Such an approach of examining overlap avoids the problems of double-dipping in ROI-based analyses. Thus, the results we report reflect responses exclusive to RF or UF. A t-contrast on RF (used to extract beta values corresponding to that regressor) revealed a significant negative relationship with the BOLD signal in a cluster extending across the posterior rostral cingulate zone (RCZp: 9, 5, 50; Z = 3.57, p = 0.038 small-volume FWE correction [svc]; Fig. 4). T-contrasts on UF and SV did not reveal voxels in this region for either contrast ( p > 0.001 uncorrected). T-contrasts between RF and UF, as well as between RF and SV, revealed significant clusters in the RCZp ( p < 0.05 svc) overlapping with the cluster showing a significant effect in the contrast on RF. Thus, at the time of evaluating and making effort-based decisions, activity in a region extending across the RCZp covaried negatively with a hidden recoverable state of fatigue that is increased through effort but recovered through rest. Fig. 4: Hidden states of fatigue during effort-based decision-making. a The BOLD signal in two distinct sub-regions of the ACC covaried trial-to-trial with unrecoverable (UF) and recoverable fatigue (RF) states estimated by the model.
Overlay of clusters in the anterior Rostral Cingulate Zone (RCZa; dark blue) with activity covarying with UF, and the posterior Rostral Cingulate Zone (RCZp; cyan) with activity covarying with RF. Inset shows non-overlapping clusters. RCZ regions defined with respect to the parcellation of Neubert et al. 43 . b Activity in the middle frontal gyri (MFG) covaried with UF. Images displayed at p < 0.001 uncorrected. Parameter estimates (arbitrary units) at peak coordinates from RCZa ( c ), RCZp ( d ) and MFG ( e ) for UF, RF and fatigue-weighted subjective value (SV). Each dot represents one subject. Error bars reflect SEM. All n = 36.

A t-contrast on UF revealed a significant negative relationship with the BOLD signal in clusters in the left MFG of the DLPFC (Fig. 4 and Supplementary Table 2) as well as in a cluster spanning the anterior rostral cingulate zone (RCZa) and pre-SMA (−6, 20, 47; Z = 3.67; p = 0.030 svc; Fig. 4). T-contrasts on RF and SV did not reveal voxels in the left MFG or RCZa for either contrast ( p > 0.001 uncorrected). Contrasts between UF and RF, as well as between UF and SV, revealed significant clusters in both the MFG and RCZa, albeit at a reduced threshold ( p < 0.05), overlapping with those showing a significant effect in the contrast on UF above. Thus, a portion of the ACC distinct from that which signalled RF showed an effect of UF, as did a portion of the MFG. Moreover, in both of these regions activity did not covary with the other components of the model. Thus, at the time of evaluating and choosing whether to exert effort for reward, RCZa and MFG activity negatively covaried with a gradually increasing state of fatigue that was associated with reductions in the willingness to work. In addition, a similar level of specificity was identified in a region of the insula in which activity positively correlated with UF (30, 17, 14; Z = 4.67; p = 0.026 FWE).
It has been suggested that signals in some regions linked to effort-based decision-making may reflect the difficulty of the decision. Although different metrics of decision difficulty and conflict exist, we used the choice probabilities calculated by the softmax function as a metric of difficulty. Notably, activity in none of the regions outlined above covaried with decision difficulty ( p > 0.001 uncorrected). Moreover, we did not find that these regions specifically signalled variation in trial-by-trial reaction times ( p > 0.001 uncorrected; Supplementary Table 4).

A fronto-striatal system integrating value and fatigue

Next we examined activity that covaried with the fatigue-weighted value of the work offer estimated by the model (Fig. 5 and Supplementary Table 3), using the same approach outlined above to identify activity that scaled only with SV. A t-contrast on SV revealed a significant positive relationship with the BOLD signal in the superior frontal gyrus (SFG) extending into the frontal pole (FP; −12, 68, 17; Z = 4.67, p = 0.025 FWE) as well as in the ventral striatum (VS), with the peak voxel within the nucleus accumbens of the Harvard-Oxford atlas (9, 11, −10; Z = 4.31, p = 0.001 svc). T-contrasts on RF and UF did not reveal voxels in this region for either contrast ( p > 0.001 uncorrected). Contrasts between SV and RF, as well as between SV and UF, revealed significant clusters in both FP and VS ( p < 0.05 svc) overlapping with those showing a significant effect in the contrast on SV. Consistent with the idea that activity in the VS signalled fatigue-weighted value in a subjective manner, we also found significant correlations between the strength of signalling in the VS for UF and RF and each participant's corresponding parameter weights ( \(\alpha\) , \(\theta\) , \(\delta\) ) from the model. To avoid double-dipping, we performed an independent analysis to identify voxels in which SV-related activity correlated with parameters from the computational model.
We found that the degree to which the VS signalled UF covaried with the UF parameter ( \(\theta\) ; 6, 5, −7; Z = 4.55; p = 0.043 FWE). In addition, the degree to which the VS signalled RF correlated with the effect of rest on RF ( \(\delta\) ; 9, 17, −10; Z = 3.36; p = 0.028 svc) and, albeit at a reduced threshold, with the effect of effort on RF ( \(\alpha\) ; 9, 17, −10; Z = 2.93; p < 0.005 uncorrected). No such effects were found in voxels in the other ROIs, with individual differences in fatigue and value reflected only in the VS response. Thus, a distinct fronto-striatal system processed the current value of exerting effort to obtain rewards, integrating the momentary levels of fatigue that modulate the value ascribed to working. In the VS, variability between people in the degree to which the fatigue variables covaried with activity correlated with the parameters from the model that dictated how much someone's willingness to work was under the influence of fatigue. Fig. 5: Fatigue-weighted subjective value in a fronto-striatal circuit. a The BOLD signal in the ventral striatum (VS) covaried with model-estimated subjective value (SV), which is weighted by momentary levels of fatigue. Parameter estimates (arbitrary units) from peak coordinates for left and right VS (below) for the responses to SV, recoverable fatigue (RF) and unrecoverable fatigue (UF). b BOLD signal in the superior frontal gyrus (SFG), extending across SFG/frontal pole areas 9 and 10, signals SV. Parameter estimates (arbitrary units) from peak coordinates in SFG/frontal pole (below) for the responses to SV, RF and UF. Each dot represents one subject. Error bars reflect SEM. Images displayed at p < 0.001 uncorrected. All n = 36.

Discussion

Many of our daily activities require us to persevere and keep exerting effort to obtain rewards.
Here we show that two hidden states, one longer-term unrecoverable and one short-term recoverable, impact on people’s decisions to work and exert effort for reward on a trial-by-trial basis. When these levels of fatigue are higher, it leads to a decrease in the value of working, resulting in choices to rest, particularly when working will be higher in effort and lower in reward. The BOLD response in distinct portions of the frontal cortex covaried separately with these two hidden states, with the MFG and the RCZa signalling the unrecoverable component, and a distinct RCZp region signalling the recoverable component of fatigue. These regions carried no information about the SV of working. Instead, activity in a distinct fronto-striatal system comprising the VS and the FP integrated the latent states to signal the current value of working weighted by levels of fatigue. Moreover, the same computational model captured trial-by-trial ratings of fatigue, such that sensations of fatigue appear to similarly occur on two timescales, suggesting reductions in the willingness to exert effort may directly follow increases in feelings of exhaustion on a moment-to-moment basis. These results highlight that the willingness to exert effort is not static, and changes in fatigue states shift how much value we ascribe to working on a momentary basis. Moreover, different brain regions are involved in dynamically signalling different components of fatigue. In this study, people’s subjective valuations of exerting effort for reward shifted constantly. In particular, choices to work were avoided at reward and effort levels in the main task that participants had readily chosen to undertake in a pre-task where fatigue would not have accumulated. The results suggest that these changes in the willingness to work were reactive, as a result of changes in internal states, rather than related to pre-emptive shifts in valuation. 
Such a conclusion is supported by the fact that planning was prevented by the experimental design, with offers of effort and reward presented in a pseudorandom order. As such, participants could not plan for the offers to-be-presented in upcoming trials. In addition, our results showed that a model containing fluctuating fatigue components better explained choices than a model in which participants shifted their valuations before the main task, but then held them constant across it. Thus, participants’ willingness to work fluctuated and gradually declined across the main task. Offers that participants considered as worth working for at one moment would be rejected in favour of a rest at another. This study unifies separate lines of research that have theorised that the effects of fatigue may occur on more than one timescale 3 , 13 , and provides a formalised account of their effects on effort-based decisions. One line of research had suggested that extended periods of work lead to exhaustion that has consequences for tasks performed after the one which caused the fatigue 16 , 25 , 36 , 37 , 38 . This executive fatigue influences activity in the MFG in tasks performed after having been exhausted, an effect exacerbated in athletes who are over-trained 25 , 39 . This form of fatigue appears to be unrecoverable in the sense that simply taking short rests does not have a restorative effect. Although in this study we cannot fully rule out the possibility that this effect is simply due to time-on-task or boredom effects, we are able to show that it affects both effort-based decisions and trial-by-trial self-reported fatigue ratings, supporting the notion that it is a component of fatigue that changes, and not only other subjective effects. 
Moreover, we show that this longer-term effect is independent from a short-term recoverable component and that it covaries with activity during effort-based choice in the left MFG, largely overlapping with an area which has previously been associated with subjective aversion to cognitive effort 40 . Our results shed new light on this long-term, unrecoverable state. We show that this component is indeed processed in the MFG during effort-based choice but builds slowly during demanding tasks, reducing the willingness to exert effort for reward over an extended time period. Moreover, this effect is localised not only to the MFG but also to a connected sub-region of the cingulate lying in the RCZa 41 , 42 , 43 . Lesions to this region reduce the willingness to exert effort in rodents 44 and neurophysiological recordings here have revealed neurons that respond to effort costs 45 . Taken together, these findings would suggest that the MFG processes a longer-term accumulating fatigue that impacts both effort-based decision-making, performance, and choice behaviour in other tasks 39 . In contrast, the RCZa processes similar information, but perhaps more specifically when deciding whether it is worth exerting effort. Moreover, the fact that activity negatively correlated with UF in RCZa with higher activity when fatigue was lower, and previous evidence that stimulating the RCZa causes a sensation of a willingness to persist through oncoming challenges 46 , suggest the RCZa may play a key role in sustaining motivation and persisting during effortful tasks. In addition, our results highlight a short-term fatigue effect that crucially is recovered by taking rests. Such a component had long been theorised in accounts of physical fatigue 9 , 13 , 47 . However, to date no study had directly examined the changes in neural activity that covary with changes in RF when people are choosing whether a reward and effort are worth it. 
A previous study had examined how people gave up and returned to work during continuous grip force 9 but did not examine how this influenced effort-based decisions, nor neural activity when ascribing value to work and making an effort-based choice. Here, at the time of making effort-based decisions we found computations of RF in the RCZp. This region has also previously been linked to persistence in decision-making tasks 48 , 49 , 50 , 51 , but here we show its role in signalling a short-term momentary fatigue state that influences decisions and the value of working. The findings presented here provide empirical evidence for theories which suggest that the willingness to exert effort fluctuates on a moment-to-moment basis, and they highlight the need for examining such momentary fluctuations to understand variability in cognitive processes over time. The notion of time-on-task effects in cognitively and physically demanding tasks is well known 13 , 19 , 24 , 52 , 53 , 54 . Accuracy and speed decline over time in many effortful cognitive tasks. However, our results suggest that such changes in behaviour may be at least partially driven by a reduction in the value ascribed to persisting with exerting the effort required by the task demands. Considerable research has shown that task performance depends on the balance between the costs and benefits of acts. Rewards can increase the speed and accuracy of both movements and cognitive processes, by paying off the effort costs 48 , 55 . If the same difficulty of task is treated as more costly over time, as our model predicts, it will devalue rewards to a greater degree, and reduce task performance. Whilst this study evaluated the willingness to work, rather than task performance, the results suggest that there could be moment-to-moment fluctuations in performance due to fluctuations in fatigue happening on multiple timescales. 
Moreover, they point to a role of the VS for integrating current levels of fatigue with the value of persisting with a demanding task, and variability between people in such tendencies. A limitation of the experiment is that comparing value, without any influence of fatigue, to fatigue-weighted value signals is challenging, as they are necessarily correlated within the design. However, importantly, we found that variability in signalling in the VS between people correlated with the parameters of the computational model. Such a finding is consistent with activity in the VS signalling a dynamically changing estimate of value, which is weighted by each participant’s tendency to persist in the face of momentary fatigue. Such effects would be missed or confounded by typical analysis approaches, e.g. when examining changes correlated with trial number or behaviour pre vs post exhaustion, but they can be examined using the formal framework outlined here. Future work will need to understand how the VS integrates fatigue and value-related information leading to fluctuations in the willingness to persist with ongoing behaviour. A striking aspect of the results was that the different hidden internal variables of fatigue and value mapped on to activity in discrete brain regions, with each area processing information about a distinct component of the model. All of these regions—ACC, MFG, FP and VS—have previously been linked to the processing of effort and reward 4 , 5 , 9 , 10 , 27 , 28 , 30 , 32 , 44 , 51 , 56 , 57 , 58 , but a considerable amount of this work, particularly that focused on the cingulate cortex, had assumed that valuations are static. Our results suggest that dynamic shifts in the valuation of effort correspond to changes in response across two sub-regions of the cingulate cortex, the RCZp and RCZa, and this information is integrated in the VS. 
Importantly, activity relating to fatigue does not explicitly represent the objective magnitude of the task difficulty. Indeed, responses in these regions covaried with the fatigue variables, which carried no information about the reward and effort of the work offer. Rather, when activity in the RCZa and RCZp was reduced, it was at a time when levels of fatigue were higher, irrespective of the value of the offer. However, although both regions negatively correlated with levels of fatigue according to the model, they were dissociable, processing fatigue levels on different timescales. Such findings parallel recent evidence from studies examining how different parts of cingulate cortex are activated by learning at different timescales in reinforcement learning tasks 59 . Our results suggest that this may be a wider-ranging principle of organisation that extends to other types of decision variables in the cingulate cortex, with different aspects of fatigue shifting how effort and reward are valued in extended tasks. It was beyond the scope of this investigation to examine whether the different components of fatigue map onto purely psychological changes, physiological or metabolic changes in the state of the body, or fluctuations in neuromodulatory systems 13 , 17 , 60 . However, the computational approach taken here best explained changes both in decisions about whether to exert effort and in self-reported sensations of fatigue; these changes are thus unlikely to be explained simply by time-on-task or accumulated reward effects. Although accumulated reward and accumulated effort were correlated in the fMRI study, rewards did not influence fatigue ratings trial by trial in the behavioural study.
Such findings, that sensations of fatigue were fluctuating in the experiment and could be quantified using the same computational model in which effort exerted causes changes in fatigue, suggest that changes to choice behaviour in the fMRI experiment are more likely to be due to exerted effort than accrued reward. In line with this, the UF and RF components fluctuated in regions that have previously been linked to effort processing, rather than in regions that have been found to signal accumulated reward 61 , 62 . Future work will need to identify the source of these fluctuating, putative fatigue states, and fully disentangle them from other processes, such as opportunity cost processing, boredom, task switching and time-on-task. Thus our model may be fruitful for examining such core questions relating to fatigue, including whether similar principles hold when using a cognitively effortful task 8 , 37 , 63 , 64 . By using a model that can quantify, idiosyncratically, each individual’s sensitivity to the efforts they have exerted, and to their recovery through rest, variables underlying fatigue can be probed more precisely. Such an approach is also ideally suited for probing fatigue in the multitude of clinical disorders in which fatigue is present 1 , 65 . Future research may begin to examine how fatigue accumulates and subsides in clinical disorders, in order to develop more appropriate treatments for such a poorly understood symptom of disease. Overall, this study provides insights into the neural and computational basis of the dynamics of fatigue. The willingness to exert effort fluctuates on a moment-to-moment basis, with shifts in the value of exerting effort for reward depending on a recoverable and an unrecoverable state of fatigue. 
These states covaried with neural activity in distinct brain regions previously linked to effort-based decision-making, namely in the RCZa and MFG, as well as in the RCZp, when making choices about whether exerting effort is worth the reward. However, persistence also depended on a fronto-striatal circuit, which integrates fatigue and value, with variability in people's VS response predictive of the influence fatigue had on effort-based choice. These results reveal the hidden determinants of fatigue that underlie persistence in the face of effort.

Methods

Participants

Thirty-nine young, right-handed participants with normal or corrected-to-normal vision and no history of neurological or psychiatric illness were recruited through the Oxford Psychology Research participant recruitment scheme and online bulletin boards. Written informed consent was obtained from all participants prior to the experiment. The study was approved by the University of Oxford Central University Research Ethics Committee (MSD-IDREC-C1-2014-037). One participant did not fully complete the experiment because of feeling uncomfortable in the MRI scanner and was therefore excluded from the analyses, and a further two participants were excluded due to excessive head motion (more than 6 mm of translation). This sample size was selected based on previous fMRI studies in which similar samples evoked responses in hypothesised regions 4 , 25 . The final sample of 36 participants (16 females) had a mean age of 25.31 years (SD = 4.90; range 18–40). Participants were remunerated with £25 for taking part in the study, plus up to a further £10 as a bonus payment. The bonus depended on the credits accrued on all successfully executed trials of the main task, as well as trials executed during the training and pre-task phases. Thus, an increase in the bonus was an incentive on every trial.
Experimental design The aim of this experiment was to examine the hidden states that lead to changes in the willingness to exert effort over time and their neural correlates. We developed a physical effort-based decision-making task, in which effort was operationalised as the amount of grip force that needed to be exerted for a sum total of 3 s in order to obtain reward (credits). The main task of the experiment was performed inside the MRI scanner, but in total the task consisted of four different phases: i. Calibration Phase - To calibrate the levels of effort (grip force), accounting for individual differences in grip strength, each participant’s MVC was measured at the beginning of the experiment by squeezing a hand-held dynamometer on three consecutive trials with their dominant (right) hand. Participants were required to apply as much force as possible on each trial, and they received strong verbal encouragement while squeezing. During each attempt, a bar presented on the screen provided feedback of the force being generated. In the second and third attempts, a benchmark representing 105% and 110% respectively, of the previous best attempt was used to encourage participants to improve on their score. The maximum level of force generated throughout the three attempts was used as the participant’s MVC to calculate levels of force required for each participant at each effort level. ii. Training Phase – Participants were able to familiarise themselves with the effort levels used across the experiment and learnt how many segments of a pie chart represented each level of force. Participants practiced reaching each of six effort levels (0, 30, 39, 48, 57, and 66% of each participant’s MVC). A successful trial occurred only when the force generated by the participant exceeded the required level for a sum total of at least 3 s in a five-second window. Practice of the effort levels was repeated three times, resulting in 18 practice trials in total. 
Each trial commenced with a pie chart, with the number of red segments indicating the upcoming effort level. During the exertion period, participants were presented with a vertical bar, providing them with real-time feedback on their force and indicating the target effort level by a yellow line superimposed on the bar. When it was a rest trial, indicated by one segment in the pie chart, the bar was presented for the same amount of time but with the yellow line displayed at the bottom of the bar. To make sure that participants carefully and successfully completed this training, they were awarded one credit for each successful squeeze, but 0 credits for a failure. iii. Pre-task – Participants performed 75 trials of an effort-based decision-making task before entering the scanner, aimed at measuring participants’ devaluation of rewards (credits) by effort in a situation where they would not be experiencing higher levels of fatigue. Participants were required to decide whether the rewards on offer were worth the required effort by making a series of choices between two alternatives: a rest (baseline) option for a low reward (1 credit), and an effort (work) option for a higher reward. Work options consisted of one of five different effort levels, represented by two to six filled segments in a pie chart cue that corresponded to 30, 39, 48, 57, and 66% of each participant’s MVC, and one of five different reward levels (2, 4, 6, 8, 10 credits) numerically displayed below the pie chart. The rest option was represented by one filled segment in a pie chart and “1 credit” numerically displayed below it. The presentation side (left versus right side of the screen) for ‘work’ versus rest options was counterbalanced across offers and trials. Effort and reward levels for the ‘work’ option were varied independently and presented in a pseudorandom order to ensure that each effort/reward combination was distributed evenly across the task. 
Responses had to be made within 3.5 s, using the left hand on a button box. Otherwise, “0 credits—Make your choices faster” was written on the screen for the time that participants would have spent working or resting. If participants chose to ‘work’, they were required to exert the chosen force on the dynamometer for at least 3 out of 5 s in order to receive the credits associated with the work offer. In this pre-task, only a random 10% of trials actually resulted in the requirement to work or rest if chosen. Otherwise, a blank screen was presented instead of the work or rest screen and the outcome screen for the equivalent duration. Participants were informed of this before the beginning of the task, but were not told which choices would count and which would not; they were instructed to always make their decisions as if they would have to squeeze if they chose the work option. Furthermore, this part was split into three blocks by two breaks, and participants were free to decide when to continue with the task. Thus, levels of fatigue would not be induced in the pre-task. Following this, participants received feedback on each trial regarding their success or failure. If all the required force was exerted for 3 s, they received all of the credits; if they failed to meet this requirement, they received 0 credits. The choice period in the following trial was separated from the outcome period in the preceding trial and the successive work/rest period by a random jitter of 2–4 s. iv. Main task – This task was performed inside the MRI scanner, aimed at measuring how participants’ valuations of effort and reward change across time. Participants performed 216 trials, in the same pseudorandom sequence, of a task similar to the pre-task that differed in three ways. 
Firstly, the range of ‘work’ offers was lowered to include only those of high value, the three lower effort levels (30, 39, 48% MVC) and the three higher reward levels (6, 8, 10 credits). This ensured that if participants chose to rest, they were doing so for options that they would choose to work for in the pre-task. This would indicate a shift in the value ascribed to working, which was indeed the case in the behavioural data. Secondly, in the main task every choice counted. Thus, every choice to work resulted in the requirement to exert the required level of effort to obtain the offered reward. Thirdly, there were no breaks included. If participants wanted to take a rest, they were instructed they had to choose to do so. The duration of all trials (except for jittering) was the same regardless of choices to rest or work, and across the pre and main task. This ensured that participants’ choices and associated neural correlates were not due to temporal discounting 66 , 67 , 68 . Participants were very successful across the whole experiment at reaching the required force levels, successfully obtaining credits on M > 96.8% (SD < 4.6) of the respective trials, suggesting that participants’ choices were unlikely to be confounded by outcome uncertainty. The effort levels used in this experiment were chosen as they have been shown not to cause significant build-up of lactate and muscle pain, and because the stopping of exertion at these levels is driven more by the perception of effort than by pain 22 , 69 . This ensured that our results are unlikely to be due to muscle pain, which can be incurred at higher levels of grip force. In addition, prior to the main task and at the end of the experiment, participants were required to rate their level of ‘tiredness’ between 0 and 10. One participant was excluded from analyses of these ratings as data were not appropriately captured. 
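For illustration, the success criterion for a squeeze (force at or above the target for a cumulative 3 s within the 5-s response window) could be checked as follows. This is a Python sketch, not the authors' task code; the MVC value and sampling rate are hypothetical.

```python
import numpy as np

def squeeze_successful(force_trace, target, dt=0.01, required_s=3.0):
    """True if the force meets or exceeds the target for a cumulative total
    of `required_s` seconds across the sampled response window."""
    above = np.asarray(force_trace) >= target
    return above.sum() * dt >= required_s

# Hypothetical numbers: a 300 N MVC and the 30% effort level
mvc = 300.0
target = 0.30 * mvc
t = np.arange(0.0, 5.0, 0.01)                                # 5-s window, 100 Hz
trace = np.where((t >= 0.5) & (t < 4.0), 0.35 * mvc, 0.10 * mvc)  # ~3.5 s above target
ok = squeeze_successful(trace, target)
```

Because the criterion is a cumulative sum rather than a continuous hold, brief dips below the target do not fail the trial as long as 3 s above target are accrued in total.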
Behavioural analysis To determine whether the willingness to exert effort changes as a function of effort, reward, and the history of effortful exertion, logistic regressions on choices (using Matlab’s glmfit function) were conducted for each participant, with the offered effort and reward levels of the work option as predictor variables. To analyse choices in the main task, cumulative effort (the sum total of effort exerted during the task prior to the current trial), as well as interactions of effort and reward levels ( z -scored) with cumulative effort, were added as additional predictor variables. All regressors were z -scored for each participant, i.e. mean centred and divided by the standard deviation. Regression models were fitted to each participant’s choice data, and statistical inference was made at the group level by comparing t -scores across participants against zero. Beta values for each participant’s regression coefficients were normalised to t -scores as β/SE(β) in order to compensate for the possibility of poor estimates of βs in participants with low levels of variance. Because the t -scores were not normally distributed, they were tested for significant deviation from zero using two-tailed non-parametric Wilcoxon signed-rank tests. Confidence intervals (CIs) for Wilcoxon tests are based on the Hodges–Lehmann estimate (median). Computational modelling Modelling subjective value Theoretical accounts and existing empirical data have suggested, but largely not formalised, the notion that fatigue can influence motivation on multiple timescales. Here, we developed a computational model of fatigue based on theoretical accounts and integrated it into a parabolic effort-discounting model to explain how effort-based decisions change over time due to hidden recoverable and unrecoverable components. This model could be fitted separately to each participant’s behaviour. 
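As an illustrative sketch of the choice regressions described above, the per-participant logistic fit can be reproduced in Python (the study itself used Matlab's glmfit with a binomial link; the synthetic data, learning rate and iteration count below are invented for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

def zscore(x):
    """Mean-centre and divide by the standard deviation, as in the paper."""
    return (x - x.mean()) / x.std()

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Maximum-likelihood logistic regression (intercept plus slopes) via
    plain gradient ascent; a minimal stand-in for glmfit('binomial')."""
    Xd = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(Xd @ w)))
        w += lr * Xd.T @ (y - p) / len(y)   # gradient of the mean log-likelihood
    return w  # [intercept, beta_effort, beta_reward]

# Synthetic choices: work accepted more often for high reward and low effort
n = 5000
effort = zscore(rng.integers(2, 7, n).astype(float))
reward = zscore(rng.integers(2, 7, n).astype(float))
p_true = 1.0 / (1.0 + np.exp(-(1.0 - 1.2 * effort + 1.5 * reward)))
choices = (rng.random(n) < p_true).astype(float)
w = fit_logistic(np.column_stack([effort, reward]), choices)
```

With z-scored predictors, the recovered slopes are directly comparable across participants, which is what makes the group-level Wilcoxon tests on normalised coefficients meaningful.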
In line with previous work on how rewards are parabolically discounted by physical effort 4 , 33 , 34 , we fitted a simple discounting model to the pre-task choice behaviour. The model assumed that the value of the work offer depends on how rewarding it is, how much effort is required and how participants subjectively weigh these to guide their choices to work or rest. That is: $${{SV}}_{(t)}={R}_{(t)}-(k\ast {E}_{\left(t\right)}^{2})$$ (1) where \({{SV}}_{(t)}\) represents the SV of the work option on trial \(t\) , and \(k\) the subject-specific discount parameter, scaling the devaluation of a reward ( \(R\) , reward level 2, 3, 4, 5, or 6) by the effort ( \(E\) , effort level 2, 3, 4, 5, or 6) required to obtain the reward. The higher an individual’s \(k\) parameter, the steeper an individual’s discount function, i.e. the more this individual’s valuation of rewards is discounted by the effort required to obtain the rewards. To fit the model to the data, we used a softmax function, which estimates the probability \({P}_{\left(i,t\right)}\) that a participant will choose the work option \(i\) that has a \({SV}\) over the rest option that has a value of 1 (1 credit, no effort), defined as: $${P}_{\left(i,t\right)}=\frac{{e}^{{{SV}}_{(i,t)}\ast \beta }}{{e}^{1\ast \beta }+{e}^{{{SV}}_{(i,t)}\ast \beta }}$$ (2) Since the baseline \({SV}\) was fixed at 1 (one credit, no effort), when the baseline was chosen \({P}_{\left(i,t\right)}\) was calculated according to \({P}_{\left(i,t\right)}\) = 1− \({P}_{\left(i,t\right)}\) . Maximum likelihood estimation, using fminsearch function in Matlab, was used to minimise the difference between each participant’s actual choices and the model estimates for each trial, i.e. to minimise the negative log-likelihood ( LL ). This fitting procedure was used to fit choices in both the pre-task and main task. 
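The discounting model (Eq. 1), the softmax rule (Eq. 2) and the negative log-likelihood minimised by the fitting procedure can be sketched as follows. This is a minimal Python stand-in for the Matlab fminsearch routine; the simulated offers and the parameter values are chosen purely for illustration.

```python
import numpy as np

def subjective_value(R, E, k):
    """Eq. 1: reward parabolically discounted by effort."""
    return R - k * E**2

def p_work(sv_work, beta, sv_rest=1.0):
    """Eq. 2: softmax probability of choosing the work offer over rest (SV = 1)."""
    return np.exp(beta * sv_work) / (np.exp(beta * sv_rest) + np.exp(beta * sv_work))

def neg_log_likelihood(params, R, E, choices):
    """Negative log-likelihood of observed work (1) / rest (0) choices,
    the quantity minimised when fitting k and beta."""
    k, beta = params
    p = np.clip(p_work(subjective_value(R, E, k), beta), 1e-9, 1 - 1e-9)
    return -np.sum(choices * np.log(p) + (1 - choices) * np.log(1 - p))

# Simulate choices at known parameters; the NLL should be lower at the truth
rng = np.random.default_rng(0)
R = rng.choice([2, 4, 6, 8, 10], 500).astype(float)   # reward levels
E = rng.choice([2, 3, 4, 5, 6], 500).astype(float)    # effort levels
true_k, true_beta = 0.15, 1.0
choices = (rng.random(500) < p_work(subjective_value(R, E, true_k), true_beta)).astype(float)
```

In practice the minimisation would be run from many random starting points, as the paper describes, to avoid local minima.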
The estimates of the discounting parameter \(k\) and the level of stochasticity in the choices ( \(\beta\) ) were restricted not to go below 0.0276 (below which even the combination of the lowest reward and highest effort would always be accepted) and 0, respectively. The model was fitted 50 times using different random starting values (using rand ) to ensure that the optimisation function had not settled on a local minimum. By fitting this model to the pre-task, we were able to quantify a participant’s typical willingness to exert effort for reward, and the noisiness in such choices, during a task that would not evoke fatigue. The \(k\) and \(\beta\) parameters obtained for each participant in the pre-task were used as fixed parameters in the models fitted to choices in the main task. Modelling fatigue-weighted subjective value (full model) Based on theoretical accounts we hypothesised that fatigue would increase with exerted effort, would be partially recoverable and decrease with time spent resting, but would also have a gradually increasing unrecoverable component which would not recover with rest 9 , 13 , 20 , 25 . This fatigue impacts value, such that when levels of fatigue are higher, participants are less willing to work. Thus, we developed a model including recoverable and unrecoverable components of fatigue that would fluctuate over the experiment and integrated them into the value-based model in Eq. 1 : $$S{V}_{(t)}={R}_{(t)}-(({{RF}}_{\left(t\right)}+{{UF}}_{\left(t\right)})\ast k\ast {E}_{\left(t\right)}^{2})$$ (3) In this full model, rewards ( \(R\) ) are devalued by effort ( \(E\) ), subjectively weighted by the discount parameter \(k\) from the pre-task. In addition, this discounting effect fluctuates trial-to-trial by levels of recoverable ( \({RF}\) ) and unrecoverable ( \({UF}\) ) fatigue. \({RF}\) subjectively increases if a person exerts effort, i.e. accepts the work offer (Eq. 
4 ), with the work parameter \(\alpha\) scaling the amount that effort increases \({RF}\) , and subjectively recovers by time resting ( \({T}_{{{{{{\rm{rest}}}}}}}\) ), as captured by the rest parameter \(\delta\) (Eq. 5 ). These individual parameters scale how fatigable a person is. \({UF}\) subjectively accumulates depending on the effort exerted across the whole task, scaled by parameter \(\theta\) , and is not restored by resting (Eq. 6 ): $${{RF}}_{(t)}={{RF}}_{(t-1)}+(\alpha \ast {E}_{\left(t-1\right)})$$ (4) $${{RF}}_{(t)}={{RF}}_{(t-1)}-(\delta \ast {T}_{{{{{{\rm{rest}}}}}}\left(t-1\right)})$$ (5) $${{UF}}_{(t)}={{UF}}_{(t-1)}+(\theta \ast {E}_{(t-1)})$$ (6) The SV \({SV}\) and the fatigue levels \({RF}\) and \({UF}\) were updated for each trial (initial \({RF}\) and \({UF}\) = 0.5) and fed into the softmax (Eq. 2 ) as above, to estimate \(P\) in each trial. Based on theoretical considerations, only parameter values ≥ 0 and \({RF}\) estimates ≥ initial \({RF}\) were allowed. Missed trials, which were very rare ( M = 0.57% of all trials, SD = 1.71), were treated as rest trials. To maximise the chances of finding global rather than local minima, parameter estimation for the full model and for all alternative models (see below) was repeated over a grid of initialisation values, with 12 initialisations (ranging from 0 to 1.1) per parameter. The optimal set of parameters for each model was used for model comparison and for further analyses. Model comparison To verify whether the three parameters used to quantify the effects of fatigue were necessary, alternative models were also fitted to participants’ choices in the main task. These models fit within a factorial structure of models containing no effect of fatigue (two null models), an effect of \({UF}\) only (i.e. \(\theta\) being fitted), an effect of \({RF}\) only (i.e. \(\alpha\) and \(\delta\) being fitted) or the full model with both \({RF}\) and \({UF}\) . 
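A minimal Python sketch of the fatigue-weighted value computation (Eqs. 3–6) is given below, together with the AIC and BIC used for model comparison (Eqs. 7 and 8). The original implementation was in Matlab; this version assumes that exerted effort counts as zero on rest trials and floors RF at its initial value, in line with the constraints described above, and all parameter values are illustrative.

```python
import numpy as np

def fatigue_sv(worked, efforts, rewards, rests, k, alpha, delta, theta, init=0.5):
    """Trial-by-trial fatigue-weighted subjective value (Eqs. 3-6).
    RF rises with exerted effort (alpha) and recovers with rest time (delta),
    floored at its initial value; UF only accumulates (theta)."""
    RF, UF = init, init
    sv = []
    for w, E, R, rest in zip(worked, efforts, rewards, rests):
        sv.append(R - (RF + UF) * k * E**2)        # Eq. 3
        if w:                                      # work trial: effort exerted
            RF += alpha * E                        # Eq. 4
            UF += theta * E                        # Eq. 6
        else:                                      # rest trial: no effort exerted
            RF = max(init, RF - delta * rest)      # Eq. 5, RF >= initial RF
    return np.array(sv)

def aic(nll, d):
    """Eq. 7, with LL = -nll."""
    return 2 * nll + 2 * d

def bic(nll, d, n):
    """Eq. 8."""
    return 2 * nll + d * np.log(n)

# Ten identical work offers: value declines as RF and UF accumulate
sv = fatigue_sv([1] * 10, [3] * 10, [8] * 10, [0] * 10,
                k=0.1, alpha=0.02, delta=0.05, theta=0.01)
```

Feeding these trial-by-trial SV estimates into the softmax of Eq. 2 then yields the choice probabilities used for fitting.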
The two null models predicted no effect of fatigue in the main task: one used the original pre-task discounting parameter ( \(k\) ) and thus assumed that the willingness to exert effort stayed the same across the whole experiment, and a second, in which a new discounting parameter ( \(\gamma\) ) was calculated across all trials in the main task, assumed a fixed change in the willingness to work between the pre and main tasks. In addition, two further mathematically plausible, but theoretically unlikely, models were included which used only one parameter to scale the effect of effort and rest on RF (i.e. only \(\alpha\) being fitted across both work and rest trials). In one of these models, fatigue consisted solely of this one-parameter \({RF}\) , while in the second, fatigue comprised \({UF}\) plus the one-parameter \({RF}\) . These two models had higher AIC values, and thus worse fits, than the versions of the \({RF}\) model including separate parameters and are therefore not shown in figures. In the models including a fatigue term, initial \({RF}\) and \({UF}\) values were defined such that the initial total fatigue was always equal to 1. In order to investigate the models’ relative ability to predict the behavioural data, model fits were compared using the Akaike Information Criterion (AIC) 70 and Bayesian Information Criterion (BIC) 71 with lower values indicating better fit. Model fit to a given data pattern can be improved by simply adding additional parameters, and thereby models with more parameters may be overfitted. AIC and BIC penalise models with more free parameters and favour the most parsimonious solutions by adding a penalty term to the LL which depends on the number of parameters ( \(d\) ) and in the case of BIC also on the number of observations, i.e. 
the number of trials ( \(n\) ): $${{{{{\rm{AIC}}}}}}=-2* {{{{{\rm{LL}}}}}}+2* d$$ (7) $${{{{{\rm{BIC}}}}}}=-2* {{{{{\rm{LL}}}}}}+d* {{{{{\rm{ln}}}}}}(n)$$ (8) Functional imaging and analysis fMRI scan acquisition For anatomical localisation, a high-resolution, three-dimensional structural T1-weighted image was acquired using a magnetisation-prepared rapid gradient echo sequence with 192 slices [slice thickness = 1 mm, repetition time (TR) = 1900 ms, echo time (TE) = 3.97 ms, flip angle = 8°, field of view = 192 mm × 192 mm, voxel size = 1 × 1× 1 mm]. A total of 2355 whole-brain functional T2*-weighted echo planar images (EPIs) were acquired with a tilted-plane sequence with a pitch of 30°, in order to reduce potential image distortions and signal losses caused by susceptibility gradients near air/tissue interfaces 72 , using multiband factor acceleration interleaved slice acquisition [72 slices, slice thickness = 2 mm, TR = 1570 ms, TE = 30 ms, flip angle = 70°, field of view = 216 mm × 216 mm, voxel size = 2 × 2 × 2 mm]. Subsequent to the functional sequence, a gradient echo field map sequence was used to collect phase and magnitude maps (TE 1 = 4.92 ms, TE 2 = 7.38 ms) in order to correct for geometric distortions caused by magnetic field inhomogeneities. Image preprocessing Imaging data were preprocessed and analysed using Statistical Parametric Mapping (SPM12, Wellcome Department of Imaging Neuroscience, University College London, ). First, to correct for head motion within participants, each EPI in a participant’s time-series was realigned to the mean image using a least squares approach and a six parameter, rigid body spatial transformation 73 with B-spline interpolation. In addition, the acquired field maps were used to estimate the amount of non-linear distortion from magnetic field inhomogeneities for each functional image and to correct for the movement-by-distortion interactions 74 , 75 . 
Following this, the mean of the realigned and unwarped functional images was coregistered to each participant’s own structural image, based on Collignon et al. 76 , to ensure better anatomical localisation and greater precision in spatial normalisation. Next, the coregistered structural image was segmented into tissue probability maps based on standard stereotaxic space (Montreal Neurological Institute, MNI), bias-corrected and normalised to the MNI template 77 . The same normalisation parameters were used to convert the realigned and unwarped functional images into standard space. Functional images were then resampled to a voxel size of 3 × 3 × 3 mm and spatially smoothed using an 8 mm full-width-half-maximum Gaussian kernel in order to improve the signal-to-noise ratio. First-level statistical analysis First level whole-brain statistical analyses were performed using general linear models. To examine whether BOLD activity in any voxel parametrically varied with the trial-by-trial estimates of SV, RF and UF from the winning computational model (full model) during decision-making, the averaged and z -scored trial-by-trial estimates for SV, RF, and UF were used as parametric modulators for the offer cue event-related regressor, i.e. the onset of the options screen. To improve overall model fit and account for potential confounds, the design matrix included the following additional regressors which were not analysed: Four regressors that modelled the onset of the work/rest screen and the onset of the outcome screen, both separately for work trials and rest trials, as well as a regressor that included all events onsets from trials with a missed response. Regressors were modelled with a stick (delta) function with 0 s duration, convolved with a canonical hemodynamic response function (HRF). For the parametric modulators, a 1st order modulation was selected, i.e. 
it was assumed that the stick function heights will change linearly over different values of each parametric modulator. Parametric modulators were not orthogonalised with respect to each other. To ensure that the model could be estimated and that respective inferences could be made, the regressors of interest were checked for rank deficiency and statistical independence. Correlations between parametric regressors were all below 0.4 ( r = −0.08 between RF and UF; r = −0.09 between RF and SV; r = −0.36 between UF and SV). The six rigid body motion parameters estimated during the realignment step (three translations and three rotations) were added as separate regressors that were not convolved with the HRF to control for nuisance effects resulting from head motion. The high-pass filter cut-off was set to 128 s in order to remove low-frequency noise. Regression coefficients were estimated using a restricted maximum likelihood algorithm, using an autoregressive AR(1) model to account for autocorrelations intrinsic to the fMRI time series. Contrasts for each of the three parametric modulators as well as contrasts between them were conducted to identify regions in which the BOLD signal covaried with each regressor independently and in comparison to each other. In addition, we ran a control analysis in which we added an index of trial-by-trial decision difficulty as a parametric modulator to the design matrix, time-locked to the onset of the offer cue. Decision difficulty for each participant was calculated as -| P −0.5 | , with P representing the choice probabilities derived from the softmax function from the full model. That is, more difficult decisions should be reflected as probabilities closer to 0.5 (or difficulty = 0), while easier decisions should be reflected as probabilities closer to 0 or 1 (difficulty < 0). Trial-by-trial estimates were then averaged across participants and z -scored. 
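The difficulty index just described can be sketched as follows; this is a Python illustration with toy probabilities, whereas the study derived P from the full model's softmax.

```python
import numpy as np

def difficulty(P):
    """Decision difficulty from softmax choice probabilities: -|P - 0.5|.
    Hardest decisions (P near 0.5) score 0; easy ones (P near 0 or 1)
    score closer to -0.5."""
    return -np.abs(np.asarray(P, dtype=float) - 0.5)

# Toy choice probabilities for two participants over four trials
P = np.array([[0.10, 0.45, 0.90, 0.55],
              [0.20, 0.50, 0.85, 0.60]])
group = difficulty(P).mean(axis=0)              # average across participants
group_z = (group - group.mean()) / group.std()  # z-score for the design matrix
```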
Although this approach is limited by averaging across participants, it ensures that variance is scaled similarly across participants. In addition, to examine activity that covaried with reaction times, we also ran a separate GLM in which only individual trial-by-trial reaction times were included as a parametric regressor (Supplementary Table 4 ). Second-level statistical analysis In order to be able to make inferences about the sample population, a random effects second-level statistical analysis was conducted. Therefore, the contrast images from the first level were analysed using one-sample t -tests at each voxel for each contrast of interest. T-contrasts were then applied to identify areas in which activity varied statistically with the parametric modulators. To correct for multiple comparisons, we used a statistical threshold of p < 0.05 with voxel-level family-wise error (FWE) correction across the entire brain volume. Because previous studies have emphasised the importance of the VS and the dACC/pre-SMA region in processing effort-based decisions, and in order to be able to specifically localise activity to anatomically and functionally distinct regions, we also probed these areas using a priori ROIs. Therefore, t-contrasts were conducted at the whole-brain level at an uncorrected statistical threshold of p < 0.001, and then an FWE small-volume correction was applied using a combined mask taken from appropriate atlases (bilateral VS: from the Harvard-Oxford Atlas; bilateral dACC and pre-SMA: areas RCZa, RCZp and pre-SMA defined through resting-state parcellations of the frontal cortex by Neubert et al. 43 ). By combining these masks we provide a more conservative statistical threshold than individual ROI analyses, balancing possible false negatives that can occur with whole-brain correction. Full tables of results are reported at an uncorrected threshold of p < 0.001 in Supplementary Tables 1 – 3 . 
At this reduced threshold clusters in the VS and RCZp lie within a larger cluster and thus do not show in the list of peak results only. In addition, to examine whether activity was modulated by how strongly a participant’s choices were affected by RF and UF, each participant’s UF parameter, RF work parameter and RF rest parameter from the full model were used as a covariate for the respective UF and RF t-contrasts in separate second-level group analyses. To avoid double-dipping when correlating parameters with voxels which are already known to show a significant result in a non-independent analysis, we performed these analyses by examining whether any voxels showed a significant effect within the ROI masks. Such an approach does have the limitation that the significance between correlations cannot be determined formally. For these analyses, we excluded one participant who had excessively high RF work and rest parameters. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Source data for Figs. 2 b, 2 f, 3 d, 4 c-e, 5a-b and Supplementary Figs. 2a , 3 , 4c , 6a-c are provided with this paper, and are also available on the Open Science Framework (OSF; ; Digital Object Identifier: DOI 10.17605/OSF.IO/XR84W). Unthresholded statistical maps underlying Figs. 4 and 5 are available at the same link. Further fully anonymised behavioural and fMRI data that support the findings of this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper. Code availability Main experimental code is available at this link ( ; Digital Object Identifier: DOI 10.17605/OSF.IO/XR84W). Custom Matlab code to implement the computational models is available from the corresponding authors upon reasonable request.
How do we decide whether or not an activity which requires work is 'worth the effort'? Researchers at the University of Birmingham and the University of Oxford have shown that the willingness to work is not static, and depends upon the fluctuating rhythms of fatigue. Fatigue, the feeling of exhaustion from doing effortful tasks, is something we all experience daily. It makes us lose motivation and want to take a break. Although scientists understand the mechanisms the brain uses to decide whether a given task is worth the effort, the influence of fatigue on this process is not yet well understood. The research team conducted a study to investigate the impact of fatigue on a person's decision to exert effort. They found that people were less likely to work and exert effort, even for a reward, if they were fatigued. The results are published in Nature Communications. Intriguingly, the researchers found that there were two different types of fatigue that were detected in distinct parts of the brain. In the first, fatigue is experienced as a short-term feeling, which can be overcome after a short rest. Over time, however, a second, longer term feeling builds up, stops people from wanting to work, and doesn't go away with short rests. "We found that people's willingness to exert effort fluctuated moment by moment, but gradually declined as they repeated a task over time," says Tanja Müller, first author of the study, based at the University of Oxford. "Such changes in the motivation to work seem to be related to fatigue, and sometimes make us decide not to persist." The team tested 36 young, healthy people on a computer-based task, where they were asked to exert physical effort to obtain differing amounts of monetary rewards. The participants completed more than 200 trials and in each, they were asked if they would prefer to 'work' (squeezing a grip force device) and gain the higher rewards offered, or to rest and earn only a small reward. 
The team built a mathematical model to predict how much fatigue a person would be feeling at any point in the experiment, and how much that fatigue was influencing their decisions about whether to work or rest. While performing the task, the participants also underwent an MRI scan, which enabled the researchers to look for activity in the brain that matched the predictions of the model. They found that areas of the brain's frontal cortex had activity that fluctuated in line with the predictions, while an area called the ventral striatum signalled how much fatigue was influencing people's motivation to keep working. "This work provides new ways of studying and understanding fatigue, its effects on the brain, and why it can change some people's motivation more than others," says Dr. Matthew Apps, senior author of the study, based at the University of Birmingham's Centre for Human Brain Health. "This helps begin to get to grips with something that affects many patients' lives, as well as people at work and school, and even elite athletes."
10.1038/s41467-021-24927-7
Medicine
Drug's impact on amino acid transporter may offer non-small cell lung cancer patients new hope
Xiangming Ji et al. xCT (SLC7A11)-mediated metabolic reprogramming promotes non-small cell lung cancer progression, Oncogene (2018). DOI: 10.1038/s41388-018-0307-z Journal information: Oncogene
http://dx.doi.org/10.1038/s41388-018-0307-z
https://medicalxpress.com/news/2018-07-drug-impact-amino-acid-non-small.html
Abstract Many tumors increase their uptake of, and dependence on, glucose, cystine or glutamine. These basic observations on cancer cell metabolism have opened multiple new diagnostic and therapeutic avenues in cancer research. Recent studies demonstrated that smoking could induce the expression of xCT (SLC7A11) in oral cancer cells, suggesting that overexpression of xCT may support lung tumor progression. We hypothesized that overexpression of xCT occurs in lung cancer cells to satisfy the metabolic requirements for growth and survival. Our results demonstrated that 1) xCT was highly expressed at the cytoplasmic membrane in non-small cell lung cancer (NSCLC), 2) the expression of xCT was correlated with advanced stage and predicted a worse 5-year survival, 3) targeting xCT transport activity in xCT overexpressing NSCLC cells with sulfasalazine decreased cell proliferation and invasion in vitro and in vivo and 4) increased dependence on glutamine was observed in xCT overexpressed normal airway epithelial cells. These results suggest that xCT regulates metabolic requirements during lung cancer progression and may be a potential therapeutic target in NSCLC. Introduction Although many molecular targets have been identified to improve the treatment strategies in non-small cell lung cancer (NSCLC), the 5-year overall survival rate for patients with NSCLC is still 16% [ 1 ]. A subgroup of tumors has been found to be driven by genetic alterations in NSCLC, such as EGFR mutations and ALK rearrangements. Tumors with these targetable oncogenic alterations tend to respond to EGFR or ALK inhibitors [ 2 , 3 , 4 ]. However, most responders ultimately develop drug resistance and tumor progression. The determinants of tumor progression, complicated by the tremendous heterogeneity of molecular alterations in lung cancer, are only partially understood. 
Thus, there is a pressing need to further our understanding of the molecular mechanisms of progression and to pursue innovative therapeutic targets to improve the quality of care and survival of patients with NSCLC. Recent evidence suggests that metabolic changes, caused by oncogenic activation of signal transduction pathways and transcription factors such as MYC, satisfy the large biosynthetic requirements associated with cancer cell proliferation [ 5 , 6 , 7 , 8 ]. These metabolic changes include increased glucose consumption, lactate production, and glutamine dependency. xCT (SLC7A11) is a cystine/glutamate antiporter that imports cystine into the cells while exporting glutamate [ 9 , 10 ]. One molecule of cystine can then be converted into two molecules of cysteine, which is a committed step for glutathione (GSH) biosynthesis. GSH plays a necessary role in maintaining cancer cell function [ 11 ]. To quench reactive oxygen species (ROS), GSH is oxidized to GSH disulfide (GSSG), and its regeneration from GSSG requires nicotinamide adenine dinucleotide phosphate (NADPH). GSH is therefore an attractive therapeutic target due to its role in ROS neutralization and in the detoxification of xenobiotics such as chemotherapeutics. Sulfasalazine (SASP), an FDA-approved drug, has been shown to be effective in the treatment of rheumatic diseases and inflammatory bowel diseases such as Crohn’s disease and ulcerative colitis [ 12 ]. SASP inhibits xCT function by decreasing the supply of cystine, an essential substrate for GSH production [ 13 ]. Although high levels of ROS induce cell death and cellular damage, cancer cells tend to maintain a high concentration of GSH to optimize the appropriate redox balance [ 14 ]. Targeting xCT may therefore compromise the cellular redox defense balance and prevent tumor growth [ 15 ]. To maintain the intracellular glutamate pool, cells overexpressing xCT consume more glutamine for glutamate synthesis, a process termed glutamine addiction [ 16 ]. 
The dependency on glutamine for cell function is considered a hallmark of cancer metabolism [17]. Different isoforms of glutaminase (GLS), such as GAC and KGA, play major roles in modulating the intracellular glutamine/glutamate concentration [18]. The major function of GLS is to convert glutamine to glutamate, producing ammonia. GLS, especially GLS1, is commonly considered not only a biomarker of glutamine dependence but also a therapeutic target in many types of cancer [19, 20, 21]. Recently, higher xCT activity along with elevated intracellular levels of cystine has been shown to promote tumor survival [22] and to contribute to breast cancer progression [16]. Investigators have established the expression pattern of xCT in the NCI-60 cancer cell lines, which suggests that the expression of xCT could act as a predictor of cellular response to chemotherapy [23, 24]. However, the role of this protein has not been studied in detail in lung cancer. We therefore conducted functional studies to determine whether xCT causes significant metabolic changes that reprogram cells for cancer development. The specific goals of this study were: first, to evaluate the expression pattern of xCT protein in different lung cancer subtypes; second, to assess its relevance to clinical outcomes in NSCLC; and finally, to establish the metabolic contribution of xCT to the growth and viability of lung cancer cells in vitro and in vivo.

Results

xCT is overexpressed and is correlated with worse survival in NSCLC patients

To identify the abundance of xCT expression in NSCLC, we first examined mRNA expression of xCT in tumors, including adenocarcinomas (ADC; N = 511) and squamous cell carcinomas (SCC; N = 502), and in normal lung tissues (N = 109) from The Cancer Genome Atlas (TCGA) database.
Our results showed that xCT was significantly overexpressed in SCC and ADC compared with normal lung tissues (p < 0.0001; Fig. 1a). A cut-off value of mean + 2 SD for xCT in the control group was used to classify xCT mRNA levels in the TCGA lung cancer dataset. With these criteria, we found that 51% of ADCs (260/511) and 61% of SCCs (306/502) overexpressed xCT transcripts.

Fig. 1 xCT is overexpressed in human NSCLC tumors and predicts worse overall survival. a The mRNA expression of xCT is significantly overexpressed in SCC and ADC compared with normal lung tissues in TCGA (p < 0.001). b Log-rank test demonstrated that elevated xCT protein expression in the TMA is correlated with shorter five-year survival (p = 0.003). c Representative images of IHC staining for xCT protein, with hematoxylin counterstain for nuclei. Top, two representative photomicrographs at ×100 magnification of tumor sections with stage III SCC and stage III ADC; a zoomed-in ×200 magnification of a small area of the same sections (top right corner) shows the membranous staining pattern of the cancer cells. Bottom, representative photomicrographs at ×100 magnification of normal lung tissue from patients with lung cancer, which stained negative for xCT. d Western blots of xCT protein expression from normal human lung (N) and paired tumor samples (T).

We then tested the association between xCT protein expression, assessed by immunohistochemistry, and clinical outcomes in tissue microarrays built in the laboratory and representing a total of 254 NSCLCs (Table 1). Survival analysis using Kaplan-Meier (KM) estimates indicated that elevated xCT expression was correlated with a shorter five-year survival rate (p = 0.003, Fig. 1b). The pattern of xCT staining in both ADCs and SCCs in the tissue microarrays (TMA) was concentrated along the cytoplasmic membrane, with less intense cytoplasmic staining (Fig. 1c).
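The mean + 2 SD cut-off used to call xCT overexpression in the TCGA cohorts can be sketched as follows; the expression values below are hypothetical placeholders, not the actual RNA-seq data.

```python
# Sketch of the mean + 2 SD overexpression cut-off described above.
# All numeric values are hypothetical, for illustration only.
from statistics import mean, stdev

def overexpression_cutoff(normal_values):
    """Cut-off = mean + 2 * SD of expression in normal lung tissue."""
    return mean(normal_values) + 2 * stdev(normal_values)

def fraction_overexpressing(tumor_values, cutoff):
    """Fraction of tumors whose expression exceeds the cut-off."""
    return sum(v > cutoff for v in tumor_values) / len(tumor_values)

normal = [5.0, 5.2, 4.8, 5.1, 4.9, 5.0]            # hypothetical normal-lung values
tumors = [5.1, 6.4, 7.0, 5.9, 4.9, 6.8, 7.2, 5.2]  # hypothetical tumor values

cutoff = overexpression_cutoff(normal)
print(round(cutoff, 2))
print(fraction_overexpressing(tumors, cutoff))
```

With the hypothetical values above, five of the eight tumors fall above the cut-off, analogous to the 51% of ADCs and 61% of SCCs reported from TCGA.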
Further survival analysis using a multivariable Cox regression model showed that elevated expression of xCT protein was significantly associated with worse 5-year overall survival after adjusting for gender, age, smoking history, and stage (p = 0.02, Table 2). In addition, we demonstrated overexpression of xCT in seven out of ten primary lung ADCs and SCCs compared to matched normal lung tissues by Western blotting (Fig. 1d and Supplementary Fig. 10A). These data suggest that xCT is overexpressed in NSCLC and that its expression is a potential candidate biomarker in NSCLC.

Table 1 Characteristics of patients (n = 254)

Table 2 Multivariable Cox proportional hazards analysis for 5-year survival in 254 NSCLC patients

xCT regulates lung cancer cell growth in vitro and in vivo

To explore the role of xCT in lung cancer development, we silenced its expression by shRNA in four xCT-overexpressing NSCLC cell lines (H520, A549, HCC15, and HCC95; Supplementary Fig. 1A). Consistent with previous results [25], we observed that silencing of xCT inhibited cell growth in all four xCT-overexpressing cell lines compared with their controls (Fig. 2a and Supplementary Fig. 1B, C and D). Given that xCT is the major cystine transporter, we asked whether knockdown of xCT can reduce the consumption of cystine and other amino acids. As shown in Fig. 2b, knockdown of xCT reduced the consumption of cystine and the release of glutamate in H520 cells. Interestingly, we found that knockdown of xCT also inhibited the consumption of several essential amino acids such as leucine, lysine, and valine, as well as non-essential amino acids such as glutamine and serine (Fig. 2b). Consistent with the hypothesis that xCT mediates the coupled transport of cystine and glutamate, we observed that knockdown of xCT lowered the secretion of glutamate in H520 cells (Fig. 2b and Supplementary Fig. 5A).
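A minimal sketch of how net amino-acid consumption and excretion might be derived from spent-media measurements such as those in Fig. 2b; the concentrations, time window, and sign convention here are hypothetical illustrations, not the paper's exact HPLC pipeline.

```python
# Hypothetical sketch: net amino-acid flux from fresh vs. spent media.
# Positive = consumed from the medium, negative = secreted into it.
# Concentrations (µM) are made up for illustration.

def net_flux(fresh_uM, spent_uM, hours):
    """Net consumption rate in µM per hour (negative means secretion)."""
    return (fresh_uM - spent_uM) / hours

media = {                 # (fresh, spent after 24 h), hypothetical µM values
    "cystine":   (200.0, 120.0),
    "glutamine": (2000.0, 1400.0),
    "glutamate": (50.0, 230.0),   # net secretion, as expected for xCT+ cells
}

for aa, (fresh, spent) in media.items():
    rate = net_flux(fresh, spent, hours=24)
    label = "consumed" if rate > 0 else "secreted"
    print(f"{aa}: {abs(rate):.2f} µM/h {label}")
```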
We also demonstrated that knockdown of xCT significantly reduced glucose consumption and lactate production in H520 (Fig. 2c, d), A549, and HCC95 cell lines (Supplementary Fig. 3A and Supplementary Fig. 3B) compared with their respective controls. Given that xCT is known to maintain the cellular redox balance by controlling the intracellular glutathione concentration, we asked whether knockdown of xCT could affect glutathione levels. As shown in Fig. 2e and Supplementary Fig. 3C, knockdown of xCT lowered the GSH/GSSG ratio, creating a more oxidative intracellular microenvironment in A549, H520, HCC15, and HCC95 cells. Suppression of xCT expression also significantly reduced glutamine dependency in H520 cells (Fig. 2f), HCC15 cells (Supplementary Fig. 2C) and HCC95 cells (Supplementary Fig. 2D). In addition, xCT_KD cells exhibited impaired invasion in H520, A549, HCC15, and HCC95 (Fig. 2g and Supplementary Fig. 1E, 1F, 1G). We also found that suppression of xCT expression caused significant inhibition of anchorage-independent colony formation in H520 and A549 cells compared with their controls (Supplementary Fig. 1H and Supplementary Fig. 1I).

Fig. 2 xCT suppression by shRNA inhibits proliferation and tumorigenicity of H520 in vitro and in vivo. a Cell proliferation assays reveal significant growth inhibition induced by xCT knockdown in H520 (n = 4). b Evidence that xCT knockdown in H520 cells affects amino acid consumption and excretion in media (n = 4). c The knockdown of xCT significantly decreases glucose consumption in H520 cells. d The knockdown of xCT significantly decreases lactate production in H520 cells (n = 4). e The knockdown of xCT promotes a more oxidative intracellular condition in H520 cells (n = 6). f The knockdown of xCT significantly decreases glutamine dependency in H520 cells (n = 4).
g Knockdown of xCT significantly reduces H520 cell invasion compared with H520_Ctrl by Matrigel cell invasion assay (n = 3). h The effect of xCT knockdown on tumorigenicity in nude mice. H520_Ctrl (left flank) and H520_xCT_KD (right flank) cells were injected subcutaneously into nude mice (n = 10). Tumor burden was monitored twice a week by caliper. Tumor size was calculated as 3.14 × (Min)² × (Max)/6, where Min and Max are the minimum and maximum of the length, width, and depth measurements of the tumor. All tumors (mm³) were measured in both flanks of 10 mice.

Based on the results obtained in vitro, we sought to understand the role of xCT in tumor progression in vivo by injecting H520_Ctrl and H520_xCT_KD cells subcutaneously into the flanks of nude mice. After 24 days, the tumors of all H520_xCT_KD-injected mice were significantly smaller than those of the control group (p < 0.001; Fig. 2h). Immunostaining analyses revealed downregulated expression of xCT and Ki67 protein in H520_xCT_KD cells (Supplementary Figs. 2A and 2B). Taken together, these results suggest that xCT is essential for the growth and development of NSCLC cells in vitro and in vivo, and that knockdown of xCT reduces cell invasion and glutamine dependency in NSCLC.

Targeting xCT function attenuates tumor growth in vitro and in vivo

Next, we examined the expression of xCT in a subset of NSCLC cell lines (Fig. 3a). The expression of xCT in A549, H520, and H1869 cells was confirmed in our cell line microarray (Fig. 3b). We found that SASP inhibited the proliferation of xCT-positive cell lines (A549 and H520) at 72 h after exposure in a dose-dependent manner. In contrast, SASP had little effect on the proliferation of cells expressing low levels of xCT (Fig. 3c). On the basis of these in vitro data, we next determined whether targeting xCT with SASP had an anti-tumor effect on xCT-positive cells in vivo.
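The caliper-based tumor-volume formula from the xenograft experiments, 3.14 × (Min)² × (Max)/6, is an ellipsoid approximation and can be sketched as:

```python
# Tumor volume from caliper measurements, as in the xenograft methods:
# volume = 3.14 * Min^2 * Max / 6, where Min and Max are the smallest and
# largest of the three measured dimensions (length, width, depth).

def tumor_volume_mm3(length, width, depth):
    dims = sorted([length, width, depth])
    mn, mx = dims[0], dims[-1]
    return 3.14 * mn ** 2 * mx / 6

# Example: a tumor measuring 8 x 6 x 5 mm
print(round(tumor_volume_mm3(8.0, 6.0, 5.0), 1))
```

Note that the middle dimension is not used; the formula depends only on the smallest and largest caliper readings, so the result is the same regardless of the order in which the three measurements are supplied.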
We injected H520 cells into ten nude mice and randomly assigned the mice to two groups treated with either PBS or SASP for three weeks (Supplementary Fig. 2E). As shown in Fig. 3d, mice receiving SASP (200 mg/kg) had significantly smaller tumors than the PBS-injected group. Immunostaining analysis revealed that SASP lowered the expression of xCT and decreased Ki67 staining compared with the control group (Fig. 3e). In addition, we observed markedly elevated expression of cleaved Caspase 3 (Fig. 3e and Supplementary Fig. 2F) in the SASP-treated group compared with the PBS group. These results provide in vivo evidence for targeting xCT as a potential therapeutic strategy in NSCLC.

Fig. 3 Targeting xCT in NSCLC with SASP. a xCT is highly expressed among different types of lung cancer cell lines. b Representative images of IHC staining for xCT protein in the NSCLC cell lines A549, H520, and H1869. c Cell proliferation assays reveal dose-dependent growth inhibition induced by SASP in A549 and H520 but not in H1869 NSCLC cells (n = 4). d Treatment with SASP reduces tumor size in nude mice. Five million H520 cells were implanted into the flanks of nude mice (n = 10). After two weeks, mice were randomly assigned to two groups. Tumor burden was monitored twice a week by caliper. Tumor size was calculated as 3.14 × (Min)² × (Max)/6, where Min and Max are the minimum and maximum of the length, width, and depth measurements of the tumor. e H&E staining and immunohistochemical staining for Ki67, cleaved Caspase 3, and xCT in tumors formed by H520 cells in SASP-treated or control mice.

xCT overexpression promotes proliferation and glutamine dependency in normal human airway epithelial cells

To elucidate the mechanisms by which xCT regulates cell proliferation, we first examined its expression in normal airway epithelial cells. As shown in Fig.
4a, b, the baseline protein expression of xCT was lower in normal airway epithelial cells (16HBE and BEAS2B) than in cancer cells (Fig. 3b). Overexpression of xCT significantly promoted the proliferation of 16HBE (Fig. 4c) and BEAS2B cells (Fig. 4d) compared with their controls.

Fig. 4 Overexpression of xCT promotes the proliferation and glutamine dependency of normal human airway epithelial cells. a, b Representative immunohistochemical staining for xCT protein expression in sections of the normal airway epithelial cells 16HBE (a) and BEAS2B (b). Overexpression of xCT promotes cell proliferation in 16HBE (c) and BEAS2B (d) (n = 4). Glutamine deprivation assays show that xCT overexpression promotes glutamine dependency in 16HBE (e) and BEAS2B (f) over 72 h (n = 4). g Western blot demonstrates that overexpression of xCT is associated with overexpression of MYC and its downstream effectors such as GLS1, BCL-xL, and Cyclin D1 in 16HBE and BEAS2B cells (n = 3). h Effects of xCT overexpression on BEAS2B cell cycle distribution: representative flow cytometry profiles and the corresponding proportions of cells in G1, S, and G2 phase at serum-starvation baseline and after 48 h (n = 3).

Glutamine dependency is a hallmark of cancer development [26], and xCT contributes to glutamine metabolism by depleting the glutamate pool. We therefore tested whether cells overexpressing xCT have an enhanced dependency on glutamine. We performed cell proliferation assays under glutamine deprivation and found that cells overexpressing xCT were more sensitive to glutamine deprivation; this was observed in 16HBE_xCT (Fig. 4e) and BEAS2B_xCT (Fig. 4f) compared with their matched controls. Cancer cells rewire their metabolic pathways, with elevated glucose consumption, lactate production, and glutaminolysis.
Glutaminase 1 isoforms (KGA and GAC) are the major enzymes that catalyze glutaminolysis by converting glutamine to glutamate. As shown in Fig. 4g, overexpression of xCT promoted the expression of KGA in 16HBE cells and of GAC in BEAS2B cells. Previous studies have suggested that glutaminases are targets of the MYC pathway, which leads to glutamine dependency [27, 28]. Consistent with these data, we observed increased MYC expression in 16HBE_xCT and BEAS2B_xCT cells compared with their controls. In addition, flow cytometry data demonstrated that overexpression of xCT reduced the proportion of cells in G1 phase and increased the proportion in G2 phase (Fig. 4h). Consistent with a previous report [29], cells arrested in G2 phase become aneuploid as a result of MYC overexpression, a process known as endoreduplication [30]. Collectively, these results demonstrate that xCT promotes proliferation and glutamine dependency in normal airway epithelial cells.

xCT overexpression in normal airway epithelial cells induces metabolic reprogramming

Metabolic reprogramming, characterized by elevated glucose consumption, lactate production, and glutamine addiction, is one of the hallmarks of cancer biology [31]. We therefore asked whether increased xCT expression in normal airway epithelial cells would be sufficient to cause metabolic reprogramming and increase sensitivity to xCT inhibition. Because xCT is the major cystine/glutamate antiporter, we speculated that overexpression of xCT might induce profound metabolic alterations detectable in the culture media of BEAS2B_EV and BEAS2B_xCT cells. As shown in Fig. 5a, the consumption of glutamine and cystine was significantly increased in BEAS2B_xCT cells compared with BEAS2B_EV cells. By measuring the intracellular and extracellular concentrations of glutamate, we found that overexpression of xCT promotes glutamate secretion (Fig. 5a and Supplementary Fig. 5B).
In addition, we found that SASP suppressed cell migration more effectively in BEAS2B_xCT cells than in controls (Supplementary Fig. 5C). We also found that overexpression of xCT promoted glucose consumption (Fig. 5b) and lactate production (Fig. 5c) in BEAS2B cells. Uptake of a fluorescently labeled glucose analog (2-NBDG) confirmed that overexpression of xCT promoted glucose uptake in 16HBE (Supplementary Fig. 4C) and BEAS2B cells (Supplementary Fig. 4D). Because cystine is reduced to cysteine for glutathione (GSH) biosynthesis in the cytoplasm, we tested GSH production in our system and found that overexpression of xCT increased the GSH/GSSG ratio in BEAS2B cells (Fig. 5d), as well as in 16HBE cells (Supplementary Fig. 4A). Because GSH serves as a reactive oxygen species (ROS) scavenger, we also investigated changes in ROS concentrations. We found that overexpression of xCT significantly reduced the total amount of ROS in BEAS2B_xCT compared with BEAS2B_EV cells (Fig. 5e); similar results were confirmed in 16HBE cells (Supplementary Fig. 4B). In addition, we found that overexpression of xCT substantially increased the sensitivity of both normal airway epithelial cell lines to SASP (Supplementary Fig. 4G). Given that cigarette smoking is one of the major risk factors for lung cancer development, we surmised that cigarette smoke might transform normal cells through the induction of xCT expression. After 72 h of treatment, we found that cigarette smoke condensate slightly increased the expression of xCT in BEAS2B cells (Supplementary Fig. 4E). By co-culturing primary bronchial epithelial cells with irradiated feeder cells, however, we did not find significantly elevated expression of xCT after 14 days of treatment (Supplementary Fig. 4F).

Fig. 5 xCT overexpression induces metabolic reprogramming in normal airway epithelial cells. a Evidence that xCT overexpression in BEAS2B cells modifies amino acid consumption and excretion in media (n = 4).
The overexpression of xCT increases glucose consumption (b) and lactate production (c) (n = 4). d The overexpression of xCT increases the GSH/GSSG ratio and promotes more reductive intracellular conditions in BEAS2B cells (n = 6). e The overexpression of xCT reduces ROS in BEAS2B cells (n = 6). f Overexpression of xCT elevates the oxygen consumption rate (OCR) in BEAS2B cells. OCR is plotted over time with the addition of oligomycin (1 µM), the mitochondrial uncoupler FCCP (1 µM), and the electron transport inhibitors antimycin (0.5 mM) plus rotenone (0.5 mM). Maximal respiration, proton leak, and coupled respiration were determined as indicated (n = 6).

To assess the effect of xCT overexpression on mitochondrial activity in normal airway epithelial cells, we examined mitochondrial respiration in BEAS2B_EV and BEAS2B_xCT cells by Seahorse analysis. After normalization to cell number, both the basal oxygen consumption rate (OCR) and the maximal respiratory capacity were significantly elevated after xCT overexpression (Fig. 5f). Maximal respiratory capacity was measured by treating cells with oligomycin (1 µM) to block ATP production; the uncoupling agent carbonyl cyanide p-trifluoromethoxyphenylhydrazone (FCCP, 1 µM) was then added to dissipate the proton gradient, allowing electron transport and oxygen consumption to operate at the maximal rate. This elevated OCR was suppressed by the electron transport inhibitors antimycin A and rotenone (0.5 mM each), showing that the respiration was mitochondrial (Fig. 5f). Thus, overexpression of xCT caused metabolic reprogramming in normal airway epithelial cells that was coupled with oxidative phosphorylation.

Discussion

Accumulating evidence demonstrates that xCT plays an important role in the development and survival of different cancer cell types, including breast cancer, glioma, and lymphoma [16, 32, 33, 34].
Here we describe the functional significance of xCT overexpression in NSCLC. Specifically, we report that xCT is overexpressed at the cytoplasmic membrane and that this overexpression is correlated with poor five-year survival in patients with NSCLC (Fig. 1). Our results show that xCT regulates glutamine dependency, as measured by cell proliferation assays. To understand the efficacy of targeting xCT in NSCLC, we chose a panel of cell lines with different expression levels of xCT for functional analysis. Our results reveal that a subgroup of cells with higher xCT expression is particularly sensitive to xCT knockdown, which inhibits cell growth, colony formation in soft agar, and cell invasion (Fig. 2). SASP has been shown to be a potent pharmacological inhibitor of xCT transport activity in different cancer cell types, including those from the esophagus, breast, and bladder [16, 33, 35]. In addition, SASP is known as a potent inhibitor of xCT in small cell lung cancer cells, acting by depleting glutathione [34]. Consistent with these studies, our results suggest that targeting xCT with SASP significantly inhibits the proliferation of the xCT-expressing NSCLC cells A549 and H520 (Fig. 3). Our in vitro data were confirmed in vivo by showing that intraperitoneal administration of SASP twice daily for three weeks significantly reduced tumor burden in nude mice. The growth-inhibitory effect of SASP on nude mouse xenografts is attributed to the induction of apoptosis in xCT-overexpressing cells. Thus, the results presented in Fig. 3, together with the correlation between xCT expression and 5-year survival, provide the rationale for targeting xCT in NSCLC. The intensity of immunohistochemical staining for xCT in NSCLC has already been correlated with (4S)-4-(3-[18F]fluoropropyl)-L-glutamate uptake [36], making xCT expression a promising candidate biomarker predictive of cystine uptake.
Our results show that xCT expression is not only a good candidate biomarker for the diagnosis of lung cancer but is also associated with response to sulfasalazine, suggesting the potential of xCT expression as a novel companion biomarker in NSCLC. xCT has known metabolic functions in normal and cancer cells as an antiporter of cystine and glutamate [16, 37]. Yet the specific function of xCT in lung cancer development had not been explored. We therefore investigated the contribution of xCT to metabolic reprogramming in airway epithelial cells. Previous in vitro studies reported that xCT expression is elevated at the mRNA level in oral cancer cells upon exposure to cigarette smoke condensate [38]. Given that smoking is the top risk factor for lung cancer development, we hypothesize that smoking-induced expression of xCT in normal airway epithelial cells could be an adaptation mechanism that enables lung cancer development. Our results indicate that overexpression of xCT increases the consumption of many amino acids, such as glutamine, cystine, valine, leucine, and lysine. In addition, we observed that cells overexpressing xCT consume more glucose and glutamine and produce more lactate, with upregulation of oxidative phosphorylation. Interestingly, overexpression of xCT is associated with glutamine dependency in normal airway epithelial cells. Our results also demonstrate that overexpression of xCT induces the expression of MYC and decreases ROS production in normal airway epithelial cells. Consistent with previous data, the overexpression of MYC induced by xCT causes cell cycle arrest in G2 [29]. This relaxation of the G2 checkpoint may be an essential early step in tumor initiation owing to genomic instability [29]. Previous data demonstrated that overexpression of MYC induces the expression of mitochondrial genes and increases ROS production in cancer cells [39].
To maintain cellular integrity, most cancer cells increase their antioxidant levels to defend against ROS-induced damage. It will be interesting to elucidate the relationship between the overexpression of xCT and the role of ROS production during tumor development. Finally, our results suggest that overexpression of xCT plays a significant role in reprogramming glutamine metabolism and proliferation in lung cancer cells. This proliferative effect induced by xCT can be attributed to a MYC-dependent glutaminolysis pathway [27]. Furthermore, our results suggest that xCT significantly contributes to glutamine catabolism and induces MYC expression in normal airway epithelial cells. G2 cell cycle arrest may be an early step in tumor development. In conclusion, our results demonstrate that xCT is a major regulator of metabolic reprogramming, with overarching effects on glucose metabolism, glutamine dependency, and the intracellular GSH/GSSG redox balance. All of these metabolic effects contribute to lung cancer development. The expression of xCT is correlated with poor prognosis in NSCLC and represents a novel companion biomarker for a therapeutic target in molecularly stratified NSCLC patients. Further studies should be directed toward understanding the cross-talk between xCT and other lung oncogenic pathways (MYC, KRAS, and NOTCH) in tumorigenesis.

Materials and Methods

Patients and construction of tissue microarray

Tissue microarrays (TMAs) were constructed using fixed, paraffin wax-embedded tissues from 254 patients according to previously described protocols [40]. The TMAs comprised 53 adenocarcinomas, 178 squamous cell carcinomas, and 23 other types of NSCLC. Archived tissue blocks were obtained from, stained at, and reviewed by the pathology department of Vanderbilt University Medical Center.
To compare the mRNA level of xCT in the TCGA lung cancer dataset, a cut-off value was set at the mean + 2 SD of xCT expression in the control group.

Cell culture

A total of fourteen human NSCLC cell lines were purchased from the American Type Culture Collection (ATCC), as follows: six lung ADC cell lines (A549, Calu-3, H1435, H2009, H1395, and H23), six SCC cell lines (HCC95, HCC15, H1588, H520, H1869, and H226), one large-cell carcinoma cell line (H460), and one carcinoid cell line (H727). In addition, two immortalized lung epithelial cell lines were used in our study: BEAS2B was purchased from ATCC, and 16HBE was a gift kindly provided by Dr. Dieter Gruenert, UCSF. All cancer cell lines were maintained in RPMI-1640 or DMEM media (Life Technologies, Grand Island, NY); 16HBE and BEAS2B were cultured in DMEM. All cells were grown with 1% penicillin/streptomycin and 10% fetal bovine serum at 5% CO2. Lentiviral-based stable transfection was performed as previously reported [41] (Dharmacon, Lafayette, CO). We used the CCSB-Broad lentiviral expression system to overexpress either human xCT (ccsbBroad304-02826) or its control vector (Ctrl) in the two airway epithelial cell lines (BEAS2B and 16HBE). After antibiotic selection (10 µg/ml blasticidin), the expression of xCT was confirmed in 16HBE and BEAS2B cells by Western blotting. Conversely, we used a pool of shRNAs (V2LHS-251161, V3LHS-392254, and V3LHS-392256) (Dharmacon, Lafayette, CO) for xCT knockdown in the A549, H520, HCC15, and HCC95 cell lines, with 1.5 μg/mL puromycin for selection. After selection, pooled cells were tested for xCT expression by Western blotting.

Proliferation assays

Proliferation assays were performed as previously described [41]. Briefly, cells were plated in 24-well tissue culture plates at 4 × 10⁴ cells/well. Cultures were grown for up to 6 days.
The direct CyQUANT assay (Life Technologies, Grand Island, NY) was then performed according to the manufacturer's instructions at the indicated times over up to 6 days. We used a fluorescence microplate reader to record emission at 520 nm after excitation at 480 nm. A representative viability experiment is shown with mean and standard deviation (SD).

Soft agar colony formation assay

To test anchorage-independent growth in vitro, we performed the soft agar colony formation assay as previously described [41]. The bottom agar contained cell culture growth medium with 0.8% agarose in 6-well plates. H520 cells with empty vector (H520_Ctrl) and H520 xCT-knockdown (H520_xCT_KD) cells were plated on top of the agar layer at 20,000 cells/well in DMEM medium containing 10% fetal bovine serum and 0.4% agarose. The cells were incubated at 37 °C in 5% CO2 for 30 days. Colonies were stained with MTT and quantified using a dissecting microscope (Olympus, PA).

Matrigel cell invasion assay

Matrigel-coated transwell inserts (BD Biosciences, San Jose, CA) with 8-µm-pore membranes were used to test the invasive ability of xCT-knockdown A549, H520, HCC15, and HCC95 cells. A total of 5 × 10⁴ Ctrl or xCT_KD cells in basal media were transferred into the upper chamber, and the lower chambers were filled with 400 µL of DMEM with 10% FBS. After 24 h of incubation, the cells in the upper chamber were scraped away; adherent cells attached to the lower surface of the insert were incubated with the cell-permeant dye calcein, which live-cell esterases convert to a green-fluorescent product, and counted.

Metabolic assays

High-performance liquid chromatography (HPLC, Agilent 1200 series) with a gradient elution method on a reverse-phase column was used to measure amino acid concentrations in the medium [42]. Briefly, media samples were derivatized using orthophthalaldehyde (OPA).
Amino acid concentrations were then determined on a ZORBAX Eclipse PLUS C18 column (Agilent Technologies, 4.6 × 150 mm, 3.5 μm). The mobile phases were: buffer A, 10 mM Na2HPO4, 10 mM Na2B4O7, and 8 ppm NaN3 in water; buffer B, a 9:9:2 mixture of methanol:acetonitrile:water. The gradient elution was: 0–0.5 min, 2% B; 0.5–15.5 min, linear from 2% to 47% B; 15.5–15.6 min, linear from 47% to 100% B; 15.6–19 min, hold at 100% B; 19–19.1 min, linear to 2% B; and 19.1–21 min, hold at 2% B. The flow rate was kept at 1.5 mL/min, and the column was maintained at 40 °C during the run. Seahorse mitochondrial function assays were normalized to cell number according to the commercial protocol (Agilent Technologies, Santa Clara, CA). The oxygen consumption rate (OCR) was determined with an XF96 extracellular flux analyzer (Seahorse Bioscience) using manufacturer-recommended protocols. For our experiments, OCR was measured over time following injection of oligomycin (1 µM final), FCCP (1 μM final), and rotenone/antimycin A (0.5 µM final). Relative concentrations of glutathione (GSH) and oxidized glutathione (GSSG) were measured with the GSH-Glo Glutathione assay according to the manufacturer's protocol (Promega, WI). Briefly, 10,000 cells were seeded and cultured in 96-well plates for 6 h; GSH or GSSG lysis buffer was then added directly to the culture wells, and luciferin was added for detection. Luminescence was read and recorded after background subtraction. Six wells per experimental group were used. Glucose consumption and lactate production were measured using a 2300 STAT Plus dual glucose/lactate analyzer (YSI Life Sciences, Yellow Springs, OH). These data were normalized to the total protein concentration.
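The respiration parameters read off a Seahorse mito stress test trace can be sketched as follows. The paper says these were "determined as indicated"; the definitions below are the conventional ones (an assumption on our part), and the OCR values are hypothetical, in pmol O2/min.

```python
# Conventional mito-stress-test parameter definitions (assumed, not the
# paper's exact analysis); OCR values are hypothetical.

def stress_test_parameters(basal, post_oligomycin, post_fccp, post_rot_aa):
    non_mito = post_rot_aa                    # rotenone/antimycin A floor
    return {
        "basal_respiration": basal - non_mito,
        "proton_leak": post_oligomycin - non_mito,
        "coupled_respiration": basal - post_oligomycin,  # ATP-linked
        "maximal_respiration": post_fccp - non_mito,
        "spare_capacity": post_fccp - basal,
    }

params = stress_test_parameters(basal=120.0, post_oligomycin=40.0,
                                post_fccp=200.0, post_rot_aa=15.0)
for name, value in params.items():
    print(f"{name}: {value:.1f}")
```

In this scheme, oligomycin isolates proton leak from ATP-linked respiration, FCCP reveals maximal capacity, and the rotenone/antimycin A plateau defines the non-mitochondrial baseline subtracted from every parameter.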
Glucose uptake was analyzed after treatment with the fluorescent glucose analog 2-[N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino]-2-deoxy-D-glucose (2-NBDG) as described previously [43]. Cells were cultured in glucose-free media and incubated with 50 µM 2-NBDG for 45 min. Subsequently, cells were washed, and fluorescence was measured by flow cytometry.

Flow cytometry analysis

Cancer cells were stained with propidium iodide (PI; Sigma-Aldrich, St. Louis, MO) and subjected to flow cytometry analysis as previously described [44]. All cells were starved of FBS overnight as baseline and then released into normal DMEM culture medium for 48 h. A total of 10,000–20,000 stained nuclei were collected for cell cycle analysis using ModFit LT software (Verity Software House, Topsham, ME).

Co-culture of bronchial epithelial cells with a feeder layer

Swiss 3T3-J2 mouse fibroblasts were grown in DMEM medium containing 1% penicillin/streptomycin and 10% fetal bovine serum. When the fibroblasts reached 70% confluency, the cells were irradiated at 30 Gy (3,000 rad) to serve as a feeder layer. Bronchial epithelial cells were obtained by brushing (bronchoscopy cytology brush, Cook Medical) and collected in normal saline on ice following an established protocol [45]. Bronchial brushing cells were centrifuged at 300 × g for 5 min, applied on top of the feeder layer, and cultured in the presence of a ROCK inhibitor at a final concentration of 5 mM (Enzo Life Sciences, Farmingdale, NY).

Western blotting

Total protein extracts and Western blot (WB) analyses were performed as in previous studies [41].
Primary antibody dilutions were as follows: GLS1 at 1:1,000 (#7485-1, Epitomics, Burlingame, CA), xCT at 1:1,000 (#12691, Cell Signaling Technology, Danvers, MA), Cyclin D1 at 1:1,000 (#2978, Cell Signaling Technology), BCL-xL at 1:1,000 (#2762, Cell Signaling Technology), MYC at 1:1,000 (#13987, Cell Signaling Technology), and Actin at 1:5,000 (#3700, Cell Signaling Technology, MA). Immunohistochemistry Immunohistochemical staining for xCT protein using anti-xCT (#12691, Cell Signaling Technology, Danvers, MA) and scoring were conducted as previously described [ 41 ]. Briefly, the final staining score was calculated by multiplying the staining distribution score (no staining = 0; staining of 1%–9% of cells = 0.1; 10%–49% = 0.5; >50% = 1) by the staining intensity score (no staining = 0; weak = 1; moderate = 2; strong = 3). The median of all final staining scores was used to distinguish xCT_high tumors from xCT_low tumors. Mouse xenograft study Stably transfected H520 cells (5 × 10^6) carrying either the xCT_KD or control construct were injected into the flanks of ten female nude (Foxn1nu) mice (Harlan Laboratories, Indianapolis, IN, USA). Tumor growth was monitored for up to 4 weeks and measured with calipers. Immunohistochemistry for xCT, cleaved caspase 3, and Ki-67 was performed according to a previous protocol [ 41 ]. Images were taken and analyzed with an Olympus BX41TF microscope and Olympus cellSens software. Statistics The UCSC Cancer Genomics Browser [ 46 ] was used to download the TCGA lung cancer mRNA expression dataset (IlluminaHiSeq_RNASeqV2). The Student's t-test was used to compare xCT mRNA expression in lung tumors vs. normal tissues. The association between xCT protein expression and overall survival, after adjusting for age, gender, smoking status, and stage, was assessed using a Cox proportional hazards regression model. 
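The staining-score arithmetic above (distribution score × intensity score, then a median split of the cohort) can be sketched as follows; the strict-inequality convention for assigning scores exactly at the median to xCT_low is our assumption, since the paper does not state how ties are handled:

```python
def ihc_score(pct_positive_cells: float, intensity: int) -> float:
    """Final xCT staining score = distribution score x intensity score,
    following the scoring scheme described in the text."""
    assert intensity in (0, 1, 2, 3)  # none / weak / moderate / strong
    if pct_positive_cells <= 0:
        dist = 0.0
    elif pct_positive_cells < 10:
        dist = 0.1
    elif pct_positive_cells < 50:
        dist = 0.5
    else:
        dist = 1.0
    return dist * intensity

def split_by_median(scores):
    """Label each tumor xCT_high / xCT_low relative to the cohort median
    (ties at the median assigned to xCT_low -- our assumption)."""
    s = sorted(scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return ["xCT_high" if x > median else "xCT_low" for x in scores]
```

For example, a tumor with strong staining in 60% of cells scores 1.0 × 3 = 3.0, while weak staining in 5% of cells scores only 0.1 × 2 = 0.2.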
The survival difference was analyzed by the log-rank test based on median xCT expression using the Kaplan-Meier method. Statistical analyses for the glutamine uptake kinetics and cell growth assays were conducted with GraphPad Prism (GraphPad Software, La Jolla, CA). All results are presented as the mean ± SD of at least three independent measurements (as indicated in figure legends). Individual p-values are reported in the figures, with p < 0.05 considered statistically significant.
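The log-rank comparison described above was run in standard statistics software. As an illustration of what the test computes (observed versus expected events in each group at every event time), here is a plain-Python two-group log-rank giving a chi-square statistic with 1 degree of freedom; this is a didactic stand-in, not the authors' analysis code, and it ignores censoring nuances beyond at-risk counting:

```python
import math

def logrank(times_a, events_a, times_b, events_b):
    """Two-group log-rank test: returns (chi-square statistic, p-value).
    events_* are 1 for an observed event, 0 for a censored observation."""
    data = [(t, e, 0) for t, e in zip(times_a, events_a)] + \
           [(t, e, 1) for t, e in zip(times_b, events_b)]
    event_times = sorted({t for t, e, _ in data if e})
    obs_minus_exp, var = 0.0, 0.0
    for t in event_times:
        n = sum(1 for ti, _, _ in data if ti >= t)               # at risk overall
        n1 = sum(1 for ti, _, g in data if ti >= t and g == 0)   # at risk, group A
        d = sum(1 for ti, e, _ in data if ti == t and e)         # events at t
        d1 = sum(1 for ti, e, g in data if ti == t and e and g == 0)
        obs_minus_exp += d1 - d * n1 / n                         # observed - expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = obs_minus_exp ** 2 / var if var > 0 else 0.0
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square, 1 df
    return chi2, p
```

Two identical groups give a statistic of 0 (p = 1), while clearly separated survival times give a large statistic and a small p-value.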
An amino acid transporter named xCT may affect the growth and progression of non-small cell lung cancer and may help predict survival for patients with this cancer, whose five-year survival rate currently stands at 16 percent, researchers at Georgia State University and Vanderbilt University Medical Center have concluded. The team, led by Xiangming Ji of Georgia State and Pierre Massion of Vanderbilt University Medical Center, published their findings in the current issue of Oncogene. xCT is an amino acid transporter that carries the amino acid cystine into cells and exports glutamate, a chemical that nerve cells use to send signals to other cells. It provides the key building blocks for glutathione (GSH) synthesis, which feeds cancer cell function and growth. The researchers used sulfasalazine, an anti-inflammatory drug often used to treat Crohn's disease, rheumatoid arthritis and related diseases, to reduce tumor formation by inhibiting the function of xCT. Previous studies published in cancer research journals show sulfasalazine's ability to affect xCT in other forms of cancer, including breast, bladder and small cell lung cancer. The researchers first examined xCT protein expression in non-small cell lung cancer cell lines and found higher levels in the cancer cells than in normal lung tissue. By analyzing protein expression in patients from the Vanderbilt-Ingram Cancer Center, the researchers found that patients with higher xCT expression have a lower five-year cancer survival rate. On the positive side, the data identify xCT as a candidate therapeutic target. Ji and Massion tested the cancer cells in the laboratory and in mice, discovering that targeting xCT genetically or therapeutically could reduce tumor formation in vitro (in cell culture) and in vivo (in living organisms). They also found that cells with elevated xCT expression were more sensitive to glutamine withdrawal. 
The results show strong evidence that lowering xCT may improve survival rates for individuals with non-small cell lung cancer. "In conclusion, our results demonstrate that xCT is a major regulator of metabolic reprogramming with overarching effects on glucose metabolism, glutamine dependency and intracellular GSH/GSSG redox balance. All these metabolic effects contribute to lung cancer development," Ji said. The expression of xCT is correlated with a poor prognosis in non-small cell lung cancer and represents a new opportunity to therapeutically target this biomarker in molecularly stratified non-small cell lung cancer patients. Further studies are needed to better understand the unwanted communication between xCT and other tumor-associated cell signaling pathways such as MYC, KRAS and NOTCH in the formation of lung cancer tumors.
10.1038/s41388-018-0307-z
Physics
Scientists make first 'on demand' entanglement link
Deterministic delivery of remote entanglement on a quantum network, Nature (2018). DOI: 10.1038/s41586-018-0200-5 , www.nature.com/articles/s41586-018-0200-5 Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0200-5
https://phys.org/news/2018-06-scientists-demand-entanglement-link.html
Abstract Large-scale quantum networks promise to enable secure communication, distributed quantum computing, enhanced sensing and fundamental tests of quantum mechanics through the distribution of entanglement across nodes 1 , 2 , 3 , 4 , 5 , 6 , 7 . Moving beyond current two-node networks 8 , 9 , 10 , 11 , 12 , 13 requires the rate of entanglement generation between nodes to exceed the decoherence (loss) rate of the entanglement. If this criterion is met, intrinsically probabilistic entangling protocols can be used to provide deterministic remote entanglement at pre-specified times. Here we demonstrate this using diamond spin qubit nodes separated by two metres. We realize a fully heralded single-photon entanglement protocol that achieves entangling rates of up to 39 hertz, three orders of magnitude higher than previously demonstrated two-photon protocols on this platform 14 . At the same time, we suppress the decoherence rate of remote-entangled states to five hertz through dynamical decoupling. By combining these results with efficient charge-state control and mitigation of spectral diffusion, we deterministically deliver a fresh remote state with an average entanglement fidelity of more than 0.5 at every clock cycle of about 100 milliseconds without any pre- or post-selection. These results demonstrate a key building block for extended quantum networks and open the door to entanglement distribution across multiple remote nodes. Main The power of future quantum networks will derive from entanglement that is shared between the network nodes. Two critical parameters for the performance of such networks are the entanglement-generation rate r ent between nodes and the entangled-state decoherence rate r dec . Their ratio η link = r ent / r dec , which we term the quantum link efficiency 8 , 15 , quantifies how effectively entangled states can be preserved over the timescales necessary to generate them. 
Alternatively, the link efficiency determines the average number of entangled states that can be created within one entangled-state lifetime. A link efficiency of unity therefore represents a critical threshold above which entanglement can be generated faster than it is lost. Exceeding this threshold is central to allowing multiple entangled links to be created and maintained simultaneously, as is required for the distribution of many-body quantum states across a network 6 , 15 . Consider an elementary entanglement-delivery protocol that delivers states at pre-determined times. This can be achieved by making multiple attempts to generate entanglement and then protecting successfully generated entangled states from decoherence until the required delivery time (Fig. 1a , steps (1)–(3)). If we try to generate entanglement for a period t ent , then the cumulative probability of success will be \({p}_{{\rm{succ}}}=1-{{\rm{e}}}^{-{r}_{{\rm{ent}}}{t}_{{\rm{ent}}}}\) . For a given p succ , the average fidelity F succ with respect to a maximally entangled state of the successfully generated states is solely determined by the quantum link efficiency η link (Methods). We plot F succ versus p succ for several values of η link in Fig. 1b . Fig. 1: Deterministic remote-entanglement delivery. a , Deterministic entanglement delivery guarantees the output of states with an average entanglement fidelity of more than 0.5 at pre-specified times. In our protocol, underlying this deterministic delivery is a probabilistic but heralded entanglement process. Repeated entangling attempts (dashed helical links) are made (1) and then, upon heralded success (2), the entangled state (solid helical link) is protected from decoherence (represented by the lock) until the specified delivery time (3). If no attempt at entanglement generation succeeds within one cycle, an unentangled state must be delivered (4). 
b , For the underlying entanglement-generation and state-preservation protocol (steps (1)–(3) in a ), the effectiveness of the trade-off between the average fidelity of the entangled state that is delivered and the success probability is determined by the quantum link efficiency η link . The dashed line represents the classical threshold of F = 0.5, above which a state is entangled. c , Maximum fidelity of deterministically delivered states as a function of η link . A critical threshold of η link ≈ 0.83 (vertical green line) must be surpassed to deliver an entangled state at every cycle on average. Full size image This protocol allows entangled states to be delivered at specified times, but with a finite probability of success. By delivering an unentangled state (state fidelity F unent ≤ 1/2) in cycles in which all entanglement-generation attempts failed, the protocol can be cast into a fully deterministic black box (Fig. 1a , step (4)). The states output from such a black box will have a fidelity of $${F}_{{\rm{\det }}}={p}_{{\rm{succ}}}{F}_{{\rm{succ}}}+(1-{p}_{{\rm{succ}}}){F}_{{\rm{unent}}}$$ (1) The maximum achievable fidelity \({F}_{{\rm{\det }}}^{{\rm{\max }}}\) of this deterministic state-delivery protocol, found by optimizing p succ , is also determined only by the quantum link efficiency η link . For F unent = 1/4 (a fully mixed state), we find (Fig. 1c ) $${F}_{{\rm{\det }}}^{{\rm{\max }}}=\frac{1}{4}\left[1+3{\eta }_{{\rm{link}}}^{1/(1-{\eta }_{{\rm{link}}})}\right]$$ (2) For η link greater than about 0.83, there exists a combination of p succ and F succ high enough to compensate for cycles in which entanglement is not heralded, enabling the deterministic delivery of states that are entangled on average \(\left({F}_{{\rm{\det }}}^{{\rm{\max }}}\ge 1/2\right)\) . Deterministic entanglement delivery is therefore a critical benchmark of the performance of a network, certifying that its quantum link efficiency is of order unity or higher. 
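Equation (2) and the quoted threshold of η_link ≈ 0.83 can be checked numerically. A short sketch (assuming F_unent = 1/4, as in the text):

```python
# Maximum deterministic-delivery fidelity as a function of the quantum
# link efficiency, equation (2) in the text, with F_unent = 1/4.
def f_det_max(eta):
    return 0.25 * (1 + 3 * eta ** (1 / (1 - eta)))

# Smallest eta with f_det_max >= 1/2, found by bisection; eta = 1 is
# excluded from the search range to avoid the removable singularity.
lo, hi = 0.01, 0.999
for _ in range(60):
    mid = (lo + hi) / 2
    if f_det_max(mid) >= 0.5:
        hi = mid
    else:
        lo = mid
threshold = hi
```

The bisection lands near 0.83, consistent with the critical threshold marked in Fig. 1c: below it the delivered states are on average unentangled, above it deterministic delivery of entanglement becomes possible.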
Furthermore, the ability to specify in advance the time at which entangled states are delivered may assist in designing multi-step quantum-information tasks such as entanglement routing 16 , 17 . However, so far, quantum link efficiencies of order unity or greater have remained out of reach for solid-state quantum networks. Quantum dots have been used to demonstrate kilohertz entanglement rates r ent , but decoherence rates r dec of tens of megahertz limit their quantum link efficiencies 18 , 19 η link to around 10 −4 . Nitrogen–vacancy (NV) centres—point defects in diamond with a long-lived electron spin and bright optical transitions—have been used to demonstrate entanglement rates r ent of tens of millihertz 10 , 14 and, in separate experiments, decoherence rates r dec of the order of one hertz 20 , which together would give link efficiencies η link of roughly 10 −2 . Here we achieve quantum link efficiencies η link well in excess of unity by realizing an alternative entanglement protocol for NV centres in which we directly use the state heralded by the detection of a single photon (Fig. 2 ) 21 , 22 . The rate for such a single-photon protocol scales linearly with losses, which, in comparison with previously used two-photon-mediated protocols 9 , 14 , provides a substantial advantage in typical remote-entanglement settings. Recent experiments have highlighted the potential of single-photon protocols by generating local entanglement 23 , 24 , and remote entanglement in post-selection 18 , 19 . By realizing a single-photon protocol in a fully heralded fashion and protecting entanglement through dynamical decoupling, we achieve the deterministic delivery of remote-entangled states on an approximately 10-Hz clock. Fig. 2: Benchmarking single-photon entanglement generation. a , Experimental protocol. 
(1) Before entanglement generation, an NV-centre state check verifies that the NV centre is in the correct charge state (the negatively charged state) and resonant with the excitation laser (discussed further in Methods); this state is denoted ‘ready’. This is repeated until the check passes. (2) Entanglement generation is attempted until success is heralded, in which case we continue to readout (step (3)). If 250 attempts have been made without success, we revert to step (1). (3) Upon heralded success, the spin states are read out in a chosen basis by using microwaves to rotate the state, followed by single-shot readout (light bulbs indicate the detection of the bright (1) or dark (0) spin state). b , The left panel shows the optical set-up used for entanglement generation, in which the optical excitation pulses for each node are derived at a beam splitter. The single photons emitted by the nodes as a result of these excitation pulses are interfered on another beam splitter, completing an effective interferometer between the nodes. The optical phase difference Δ θ acquired in this interferometer must be known. At pre-determined intervals, light is injected into the interferometer, as shown in the right panel. This light is measured using the same detectors that herald entanglement, and the signal is fed back via a microcontroller to a piezo-electric fibre stretcher that is used to compensate for phase drifts. For the data reported here, we stabilize the phase difference every 180 ms. c , Measured \(\left\langle XX\right\rangle \) and \(\left\langle YY\right\rangle \) correlations (left) for ψ 0/1 (where 0/1 denotes the heralding detector) and α = 0.1 as the readout basis is swept at node A (inset). The right panel shows the measured \(\left\langle ZZ\right\rangle \) correlations. d , e , Fidelity of the heralded states with respect to a Bell state ( d ) and entanglement-generation success rate ( e ), for different values of α . 
For c – e , solid lines (with shaded 1-s.d. statistical uncertainties) are the predictions of our model based solely on independently determined parameters (Methods). Error bars in c and d represent 1 s.d. Source Data Full size image Our experiment uses NV centres that reside in independently operated cryostat set-ups separated by 2 m (further experimental details are given in Methods). We use qubits formed by two of the ground-state spin sublevels of the NV centre ( \(\left|\uparrow \right\rangle \equiv \left|{m}_{{\rm{s}}}=0\right\rangle \) and \(\left|\downarrow \right\rangle \equiv \left|{m}_{{\rm{s}}}=-1\right\rangle \) , where m s is the projection of the spin along its quantisation axis). Single-photon entanglement generation (Fig. 2a ) proceeds by first initializing each node in \(\left|\uparrow \right\rangle \) by optical pumping 25 , followed by coherent rotation using a microwave pulse 26 to create the state $$\left|{\rm{NV}}\right\rangle =\sqrt{\alpha }\left|\uparrow \right\rangle +\sqrt{1-\alpha }\left|\downarrow \right\rangle $$ (3) where α is determined by the choice of microwave pulse. We then apply resonant laser light to excite selectively the ‘bright’ state \(\left|\uparrow \right\rangle \) to an excited state, which rapidly decays radiatively back to the ground state by emitting a single photon. This entangles the spin state of the NV centre with the presence \(\left(\left|1\right\rangle \right)\) or absence \(\left(\left|0\right\rangle \right)\) of a photon in the emitted optical mode: $$\left|{\rm{NV}},{\rm{optical}}\,{\rm{mode}}\right\rangle =\sqrt{\alpha }\left|\uparrow \right\rangle \left|1\right\rangle +\sqrt{1-\alpha }\left|\downarrow \right\rangle \left|0\right\rangle $$ (4) Emitted photons are transmitted to a central station at which a beam splitter is used to remove their ‘which path’ information. 
Successful detection of a photon at this station indicates that at least one of the NV centres is in the bright state \(\left|\uparrow \right\rangle \) and therefore heralds the creation of a spin–spin entangled state. However, given the detection of one photon, the conditional probability that the other NV centre is also in the \(\left|\uparrow \right\rangle \) state, but that the photon it emitted was lost, is p = α (in the limit \({p}_{{\rm{\det }}}\ll 1\) , where p det is the photon detection efficiency). This degrades the heralded state from a maximally entangled Bell state \(\left|\psi \right\rangle =(\left|\uparrow \downarrow \right\rangle +\left|\downarrow \uparrow \right\rangle )/\sqrt{2}\) to $${\rho }_{{\rm{NV}},{\rm{NV}}}=(1-\alpha )\left|\psi \right\rangle \left\langle \psi \right|+\alpha \left|\uparrow \uparrow \right\rangle \left\langle \uparrow \uparrow \right|$$ (5) The probability of successfully heralding entanglement is 2 p det α . The state fidelity F = 1 − α can therefore be traded off against the entanglement rate directly. The corresponding success probability of a two-photon protocol is \({p}_{{\rm{\det }}}^{2}/2\) ; for a given acceptable infidelity α , single-photon protocols will therefore provide a rate increase of 4 α / p det . For example, for our system’s p det ≈ 4 × 10 −4 , if a 10% infidelity is acceptable, then the rate can be increased by three orders of magnitude compared to two-photon protocols. The primary challenge in implementing single-photon entanglement is that the resulting entangled state depends on the optical phase acquired by the laser pulses used to create spin–photon entanglement at each node, as well as on the phase acquired by the emitted single photons as they propagate (Fig. 2b ). The experimental set-up therefore acts as an interferometer from the point at which the optical pulses are split to the point at which the emitted optical modes interfere. 
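The quoted three-orders-of-magnitude advantage follows directly from the success probabilities given above: 2·p_det·α for the single-photon protocol versus p_det²/2 for the two-photon protocol, so their ratio is 4α/p_det. A worked-numbers check using the system parameters quoted in the text:

```python
# Worked numbers for the single- vs two-photon rate comparison above.
p_det = 4e-4   # photon detection efficiency quoted for this system
alpha = 0.1    # bright-state population for an acceptable 10% infidelity

p_single = 2 * p_det * alpha   # single-photon heralding probability per attempt
p_two = p_det ** 2 / 2         # two-photon heralding probability per attempt
advantage = p_single / p_two   # equals 4 * alpha / p_det
```

With these values the advantage is exactly 1000, i.e. the three orders of magnitude stated in the text; the single-photon rate scales linearly rather than quadratically with photon loss.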
For a total optical phase difference of Δ θ , the entangled state created is $$\left|{\psi }_{0/1}({\rm{\Delta }}\theta )\right\rangle =\left|\uparrow \downarrow \right\rangle \pm {{\rm{e}}}^{i{\rm{\Delta }}\theta }\left|\downarrow \uparrow \right\rangle $$ (6) where 0/1 (with corresponding ± phase factor) denotes the detector at the central station that detected the incident photon. This optical phase difference must be known to ensure that entangled states are available for further use. We overcome this entangled-state phase sensitivity by interleaving periods of optical-phase stabilization with our entanglement generation. During phase stabilization we input bright laser light at the same frequency as the NV-centre excitation light and detect the light reflected from the diamond substrate using the same detectors that are used to herald entanglement. The measured optical phase, estimated from the detected counts, is used to adjust the phase back to our desired value using a piezoelectric fibre stretcher. We achieve an average steady-state phase stability of 14.3(3)°, limited by the mechanical oscillations of the optical elements in our experimental set-up (the error quoted here and elsewhere is one standard deviation; Methods, Extended Data Fig. 6 ). To demonstrate the controlled generation of entangled states, we run the single-photon entangling protocol with a bright-state population of α = 0.1. After entanglement is heralded, we apply basis rotations and single-shot state readout 25 at each node (A and B) to measure \(\langle {\sigma }_{i}^{{\rm{A}}}{\sigma }_{j}^{{\rm{B}}}\rangle \) correlations between the nodes, where hereafter the standard Pauli matrices are referred to in the shorthand σ X , σ Y , σ Z = X , Y , Z . We observe strong correlations for \(\left\langle XX\right\rangle \) and \(\left\langle YY\right\rangle \) and, when sweeping the readout basis for node A, oscillations of these coherences, as expected from the desired entangled state (Fig. 
2c , left). In combination with the measured \(\left\langle ZZ\right\rangle \) correlations (Fig. 2c , right), this finding unambiguously demonstrates the establishment of entanglement between our nodes. We explore the trade-off between the entangled-state fidelity and the entanglement rate by measuring \(\left\langle XX\right\rangle \) , \(\left\langle YY\right\rangle \) and \(\left\langle ZZ\right\rangle \) correlations for a range of different initial bright-state populations α . Using these correlations, we calculate the fidelity of the heralded state relative to the desired maximally entangled Bell state for each value of α (Fig. 2d ), along with the measured success rate (Fig. 2e ). As predicted, the fidelity increases with decreasing α as the weight of the unentangled state \(\left|\uparrow \uparrow \right\rangle \left\langle \uparrow \uparrow \right|\) diminishes (equation ( 5 )). For small α , the fidelity saturates because the dark-count rates of the detectors become comparable to the detection rate. Choosing α to maximize fidelity, we find that our protocol allows us to generate entanglement with a fidelity of 0.81(2) at a rate of r ent = 6 Hz (for α = 0.05). Alternatively, by trading the entanglement fidelity for rate, we can generate entanglement at r ent = 39 Hz with an associated fidelity of 0.60(2) ( α = 0.3). This represents an increase in the entangling rate of two orders of magnitude compared to previous NV-centre experiments 10 and of three orders of magnitude compared to two-photon protocols under the same conditions 14 . Compared to the maximum theoretical fidelity for α = 0.05 of 0.95, the states we generate have a 3% reduction in fidelity due to residual photon distinguishability, 4% from double excitation, 3% from detector dark counts and 2% from optical-phase uncertainty (Methods). 
To reach a sufficient link efficiency η link to enable deterministic entanglement delivery, the single-photon protocol must be combined with robust protection of the remote-entangled states that are generated. To achieve this, we carefully shielded our NV centres from external noise sources, including residual laser light and microwave amplifier noise, leaving as the dominant noise the slowly fluctuating magnetic field induced by the surrounding nuclear spin bath. We mitigate this quasi-static noise by implementing dynamical decoupling with ‘XY8’ pulse sequences (Fig. 3a , Methods, Extended Data Fig. 9 ). The fixed delay between microwave pulses in these sequences is optimized for each node 27 . Varying the number of decoupling pulses allows us to protect the spins for different durations. This dynamical decoupling extends the coherence time of node A and node B from about 5 μs to 290(20) ms and 680(70) ms, respectively (Fig. 3b ). The difference in coherence times for the two nodes is attributed to differing nuclear-spin environments and microwave-pulse fidelities. Fig. 3: Coherence protection of remote-entangled states. a , Dynamical decoupling protects the state of the NV-centre spins from quasi-static environmental noise. To protect a spin for a time T , N = T /(2 t ) inverting (π) microwave pulses are applied at 2 t intervals. For node A, t = 40.320 μs; for node B, t = 36.148 μs. b , Fidelity with respect to the initial state for dynamical decoupling of the state \((\left|\uparrow \right\rangle +\left|\downarrow \right\rangle )/\sqrt{2}\) at each of our nodes. Solid lines show exponential fits with coherence times of 290(20) ms and 680(70) ms for nodes A and B, respectively. c , Dynamical decoupling of entangled states created using the single-photon entanglement protocol for α = 0.12 and α = 0.2. Solid lines (with shaded 1-s.d. 
statistical uncertainties) show the predictions of our model (Methods) based on the data in b , from which the entangled-state coherence time is expected to be τ = 200(10) ms. Error bars in b and c represent 1 s.d. Source Data Full size image To investigate the preservation of remote-entangled states, we incorporate dynamical decoupling for varying durations after successful single-photon entanglement generation (Fig. 3c ). We find an entangled-state coherence time of 200(10) ms (decoherence rate r dec = 5.0(3) Hz). The observed entangled-state fidelities closely match the predictions of our model, which is based solely on independently determined parameters (Methods, Extended Data Table 1 ). In particular, the decoherence of the remote-entangled state is fully explained by the combination of the individual decoherence rates of the individual nodes. The combination of dynamical decoupling and the single-photon entanglement protocol achieves a quantum link efficiency of η link ≈ 8, well above the critical threshold of η link ≈ 0.83 and comparable to the published 8 state-of-the-art in ion traps, η link ≈ 5. These innovations enable the design of a deterministic entanglement-delivery protocol that guarantees the delivery of entangled states at specified intervals, without any post-selection of results or pre-selection based on the nodes being in the appropriate conditions (Fig. 4a ). Phase stabilization occurs at the start of each cycle, after which there is a pre-set period before an entangled state must be delivered. This window must therefore include all NV-centre state checks (necessary to mitigate spectral diffusion via feedback control and to verify the charge-state and resonance conditions 9 ), entanglement-generation attempts and dynamical decoupling necessary to deliver an entangled state. Fast conditional logic is used to adapt the experimental sequence dynamically on the basis of the detection of a heralding signal 9 , 10 , 28 , 29 . 
Further details on the experimental implementation are given in Methods and Extended Data Fig. 1 . Fig. 4: Deterministic entanglement delivery. a , Each deterministic entanglement-delivery cycle combines: (1) optical phase stabilization; (2) NV-centre state checks, repeated until a threshold number of photons are detected at each node; (3) attempts at probabilistic entanglement generation (Fig. 2 ); and ( 4 ) upon heralded entanglement success, state protection by dynamical decoupling until the delivery time. b , Distribution of deterministic entanglement delivery outcomes for α = 0.12 (left) and α = 0.2 (right) and different delivery rates. The fraction of cycles in which a herald photon is detected (‘heralded success’), in which no herald is detected (‘no heralded success’) and in which the NV-centre state checks for at least one of the NV centres fail repeatedly for the whole cycle (‘offline’; often too small to be visible in the plot) are shown. The lines show the success rates predicted by our model. c , Average fidelity of deterministically delivered entangled states for α = 0.12 (left) and α = 0.2 (right) and different delivery rates (diamonds). The average fidelity if classically correlated states were delivered for cycles in which no success event was heralded is also shown (circles). The associated lines (with shaded 1-s.d. statistical uncertainties) are the corresponding predictions of our model (Methods). Error bars in b and c represent 1 s.d. Source Data Full size image We run our deterministic entanglement-delivery protocol at two values of α (0.2 and 0.12) and for delivery rates of 7–12 Hz. We divide the experiment into runs of 1,500 cycles (that is, 1,500 deterministic-state deliveries), for a total dataset of 42,000 cycles. We first confirm that heralded entanglement occurs with the expected probabilities (Fig. 
4b ) by determining the fraction of cycles in which entanglement is heralded, in which no entangling attempts succeed and in which entanglement attempts do not occur at all because the NV-centre state check never succeeds. To establish reliable and useful quantum networks, it is important that entangled states can be delivered with high confidence over long periods. The nodes must therefore not be offline, for example, owing to uncompensated drifts in the resonant frequencies of the optical transitions. We therefore do not stop the experiment from running once it starts and include any such offline cycles in our datasets. Their negligible contribution (0.8% of cycles) confirms the robustness of our experimental platform and the effectiveness of our NV-centre frequency and charge-state control (discussed further in Methods). For each value of α and for each pre-set delivery interval, we determine the average fidelity of the deterministically delivered states by measuring their \(\left\langle XX\right\rangle \) , \(\left\langle YY\right\rangle \) and \(\left\langle ZZ\right\rangle \) correlations (Fig. 4c ). We find that for α = 0.2 and a rate of 9.9 Hz, we are able to create states with a fidelity of 0.56(1), demonstrating successful deterministic-entanglement delivery. Our model (solid lines in Fig. 4c ) captures the trends of the deterministic entanglement-delivery data effectively. However, the observed state fidelities are slightly lower than the predicted ones, hinting at sources of decoherence that are not included in our model (Methods, Extended Data Fig. 4 ). Identifying these potential sources will be the subject of future work. During cycles in which entanglement is not successfully heralded, the spin states are nonetheless delivered and read out. In these cases, we deliver the state that the NV centres are left in after a failed entanglement attempt, which has a low fidelity with respect to the desired Bell state (for example, F unent = 0.04 for α = 0.2). 
Although this stringent test highlights the robust nature of our protocol, we could instead deliver a mixed state ( F unent = 1/4) or a classically correlated state ( F unent = 1/2) when a successful event is not heralded. The resulting fidelities for our experimental data if classically correlated states were delivered are also plotted in Fig. 4c (grey circles). In this case, we would be able to deliver entangled states deterministically with fidelities of 0.62(1) at a rate of 9.9 Hz. The deterministic entanglement delivery between remote NV centres demonstrated here is enabled by a quantum link efficiency exceeding unity. Straightforward modifications to our experiment are expected to increase the quantum link efficiency further. Refinements to the classical experimental control will allow us to reduce the duration of the entanglement attempt from 5.5 μs to less than 2 μs, which would more than double the entangling rate. Furthermore, the entangled-state coherence time could be improved substantially by exploiting long-lived nuclear-spin quantum memories 10 , 30 , 31 . We anticipate that this will allow for link efficiencies in excess of 100 in the near future. Further improvements to the photon detection efficiency (including enhancement of the zero-phonon line emission) 32 , 33 would lead to an additional increase of at least an order of magnitude. In combination with recent progress on robust storage of quantum states during remote entangling operations 10 , 34 , the techniques reported here reveal a direct path to the creation of many-body quantum states distributed over multiple quantum network nodes. Moreover, given the demonstrated potential for phase stabilization in optical fibre over distances of tens of kilometres 22 , our results open up the prospect of entanglement-based quantum networks at metropolitan scales. 
Methods Deterministically delivered entangled-state fidelity as a function of quantum link efficiency We assume an entanglement-generation rate r ent and an entangled-state decoherence rate r dec . If the rate at which entanglement attempts occur is much faster than r ent (that is, there is a low probability of success), then we can approximate entanglement generation as a continuous process. In this case, the probability density for successfully generating entanglement at a time t after beginning our attempts is \({p}_{{\rm{ent}}}(t)={r}_{{\rm{ent}}}{{\rm{e}}}^{-{r}_{{\rm{ent}}}t}\) . The corresponding cumulative probability of success is \({p}_{{\rm{succ}}}(t)=1-{{\rm{e}}}^{-{r}_{{\rm{ent}}}t}\) . Once we succeed at creating entanglement, the state will decohere until the time at which we deliver it. For single-qubit depolarizing noise at each site, the fidelity of the resulting state after storage for a time t is $$F(t)=\frac{1}{4}+\frac{3}{4}{{\rm{e}}}^{-{r}_{{\rm{dec}}}t}$$ If we deliver our entangled state at time t ent = β / r dec (where β parameterizes the time in terms of the decoherence rate), then the average fidelity of the delivered state (given a success occurred) is $$\begin{array}{l}\,{F}_{{\rm{succ}}}=\frac{1}{{p}_{{\rm{succ}}}({t}_{{\rm{ent}}})}{\int }_{0}^{{t}_{{\rm{ent}}}}{p}_{{\rm{ent}}}(t)F({t}_{{\rm{ent}}}-t){\rm{d}}t\\ \,=\frac{1}{{p}_{{\rm{succ}}}({t}_{{\rm{ent}}})}{\int }_{0}^{{t}_{{\rm{ent}}}}{r}_{{\rm{ent}}}{{\rm{e}}}^{-{r}_{{\rm{ent}}}t}\left[\frac{1}{4}+\frac{3}{4}{e}^{-{r}_{{\rm{dec}}}({t}_{{\rm{ent}}}-t)}\right]{\rm{d}}t\\ \,=\frac{3{{\rm{e}}}^{-\beta }{\eta }_{{\rm{link}}}+(1-4{\eta }_{{\rm{link}}}){{\rm{e}}}^{-{\eta }_{{\rm{link}}}\beta }+{\eta }_{{\rm{link}}}-1}{4({\eta }_{{\rm{link}}}-1){p}_{{\rm{succ}}}({t}_{{\rm{ent}}})}\end{array}$$ Because \({p}_{{\rm{succ}}}({t}_{{\rm{ent}}})=1-{{\rm{e}}}^{-\beta {\eta }_{{\rm{link}}}}\) , \(\beta =-ln[1-{p}_{{\rm{succ}}}({t}_{{\rm{ent}}})]/{\eta }_{{\rm{link}}}\) . 
Using this, along with the shorthand p succ = p succ ( t ent ), we find that $${F}_{{\rm{succ}}}=\frac{3{\eta }_{{\rm{link}}}+{p}_{{\rm{succ}}}-3{\eta }_{{\rm{link}}}{(1-{p}_{{\rm{succ}}})}^{1/{\eta }_{{\rm{link}}}}-4{\eta }_{{\rm{link}}}{p}_{{\rm{succ}}}}{4{p}_{{\rm{succ}}}(1-{\eta }_{{\rm{link}}})}$$ As discussed in the main text, we can choose to draw a black box around this process, delivering an unentangled state (state fidelity F unent ≤ 1/2) for cycles in which no attempt at entanglement-generation succeeds so that a state is always delivered. This means that the states output from this black box will have a fidelity with respect to a Bell state F det given by equation ( 1 ), where F succ is as in the above equation. The maximum achievable fidelity when outputting a fully mixed state ( F unent = 1/4) upon failure \({F}_{{\rm{\det }}}^{{\rm{\max }}}\) (equation ( 2 )) is found by optimizing F succ for a given quantum link efficiency η link . The full state of a quantum system can only be experimentally determined using an ensemble of identical states. This means that, in the absence of information about which deterministic entanglement-delivery cycles have a heralded success, the only accurate description of the output of such a black-box system is that a statistical mixture is deterministically output at each cycle. Experiment design We use chemical-vapour-deposition homoepitaxially grown diamonds of type IIa with a natural abundance of carbon isotopes. Both diamonds were cut along the \(\left\langle 111\right\rangle \) crystal axis and were grown by Element Six. They are situated in custom-built confocal microscope set-ups within closed-cycle cryostats (4 K, Montana Instruments) separated by 2 m. We use fast microwave switches to shield both NV centres from microwave amplifier noise and therefore increase the coherence times substantially (node A uses Qorvo TGS2355-SM and node B uses Analogue Devices HMC544). 
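The closed-form expression for F_succ derived above can be checked against a direct simulation of the underlying process. A minimal Monte Carlo sketch (time in units of 1/r_dec; η_link ≠ 1 is assumed so that the closed form is non-singular):

```python
import numpy as np

def f_succ_closed_form(eta, p_succ):
    """Closed-form mean fidelity of the heralded states at delivery
    (the expression above); eta = r_ent / r_dec, assumed != 1."""
    return (3*eta + p_succ - 3*eta*(1 - p_succ)**(1/eta)
            - 4*eta*p_succ) / (4*p_succ*(1 - eta))

def f_succ_monte_carlo(eta, p_succ, n=400_000, seed=7):
    """Direct simulation: exponentially distributed success times with
    rate eta, delivery at t_ent = beta, and depolarizing decay
    F(t) = 1/4 + (3/4) exp(-t) during storage."""
    rng = np.random.default_rng(seed)
    beta = -np.log(1 - p_succ) / eta          # t_ent in units of 1/r_dec
    t = rng.exponential(scale=1/eta, size=n)  # entanglement success times
    stored = beta - t[t < beta]               # storage time of each success
    return float(np.mean(0.25 + 0.75*np.exp(-stored)))

eta, p = 2.0, 0.5
print(round(f_succ_closed_form(eta, p), 3))  # 0.871
```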
All other parts of the set-up and sample details are described in the supplementary information of refs 9 , 10 . One cycle of the deterministic entanglement protocol consists of optical phase stabilization (described further below), charge-resonance checks to ensure that both NV centres are in the appropriate charge state and on-resonance 25 , heralded single-photon entanglement generation and finally dynamical decoupling to protect the state of the NV centres from their environment until the delivery time. The experimental sequences used in each step of this protocol (and the single-photon entanglement-generation experiment) are detailed in Extended Data Fig. 1 . After delivery, the state of each NV centre is measured in a chosen basis. We use spin-selective optical readout of the NV-centre electron spin to determine its state in a single shot via the optical E x transition on both nodes 25 . We measure single-shot readout fidelities of 0.959(3) (0.950(3)) for the bright \(\left|{m}_{{\rm{s}}}=0\right\rangle \equiv \left|\uparrow \right\rangle \) ground state and 0.995(1) (0.996(1)) for the dark \(\left|{m}_{{\rm{s}}}=-1\right\rangle \equiv \left|\downarrow \right\rangle \) state on node A (node B). These values are subsequently used to correct for readout errors of the electron spins in state-tomography measurements. Experiment control and communication logic Extended Data Fig. 2 gives the decision trees and control logic for the ADwin microprocessors (Jaeger ADwin Pro II) that control the experiments. These microcontrollers are responsible for controlling all other experimental hardware and also communicate with each other to synchronize the experiment. Herald photon-detection window We use a combination of polarization and temporal filtering to separate the excitation pulse from photons emitted by the NV centre. 
This necessitates a compromise between collecting as much of the emitted light as possible and ensuring that contamination from the pulse is minimized. In our experiment, we choose a temporal filter window (Extended Data Fig. 3 ) so that the pulse (assumed to have a Gaussian profile) is suppressed to the level of the detector dark counts by the beginning of the window. The end of the window, about 30 ns after the pulse, is chosen so that, for all of the datasets collected, the rate of detected NV-centre photons is greater than ten times the dark-count rate at all points within the window. We use a complex programmable logic device to apply this temporal filtering during our experiment and to herald the successful generation of an entangled state in real time. Theoretical model of deterministic entanglement delivery We develop a detailed model to determine the expected performance of the deterministic entanglement-delivery experiment, based on the independently measured parameters given in Extended Data Table 1 . Once the set-ups are determined to be ready, the core entanglement sequence begins with single-photon entanglement generation. This proceeds by first initializing each node in \(\left|\uparrow \right\rangle \) , followed by a coherent rotation using a microwave pulse to create the state given in equation ( 3 ). Resonant excitation of the NV-centre nodes excites only the bright \(\left|\uparrow \right\rangle \) level to an excited state, which rapidly decays radiatively back to the ground state by emitting a single photon. This entangles the state of the NV centre with the presence \(\left(\left|1\right\rangle \right)\) or absence \(\left(\left|0\right\rangle \right)\) of a photon in the emitted optical mode (equation ( 4 )). The photons emitted by each NV centre are transmitted to a central station at which a beam splitter is used to remove their ‘which path’ information. 
Successful detection of a photon at this station indicates that at least one of the NV centres is in the bright \(\left|\uparrow \right\rangle \) state and thus heralds the creation of a spin–spin entangled state. This entangled state, expressed as \(\left|{{\rm{NV}}}_{{\rm{node}}{\rm{A}}},{{\rm{NV}}}_{{\rm{node}}{\rm{B}}}\right\rangle \) , is (in un-normalized form) $$\rho =\left|{\psi }^{\pm }\right\rangle \left\langle {\psi }^{\pm }\right|+{p}_{\uparrow \uparrow }\left|\uparrow \uparrow \right\rangle \left\langle \uparrow \uparrow \right|+{p}_{\downarrow \downarrow }\left|\downarrow \downarrow \right\rangle \left\langle \downarrow \downarrow \right|$$ where $$\left|{\psi }^{\pm }\right\rangle \left\langle {\psi }^{\pm }\right|=\left(\begin{array}{cccc}0 & 0 & 0 & 0\\ 0 & {p}_{\uparrow \downarrow } & \pm \sqrt{V{p}_{\uparrow \downarrow }{p}_{\downarrow \uparrow }} & 0\\ 0 & \pm \sqrt{V{p}_{\uparrow \downarrow }{p}_{\downarrow \uparrow }} & {p}_{\downarrow \uparrow } & 0\\ 0 & 0 & 0 & 0\end{array}\right)$$ This state is parameterized by $$\begin{array}{cc}{p}_{\uparrow \uparrow }= & {\alpha }^{2}\{{(1-{p}_{{\rm{d}}{\rm{c}}})}^{2}[{p}_{det}^{{\rm{A}}}(1-{p}_{det}^{{\rm{B}}})+{p}_{det}^{{\rm{B}}}(1-{p}_{det}^{{\rm{A}}})]\\ & +2(1-{p}_{{\rm{d}}{\rm{c}}}){p}_{{\rm{d}}{\rm{c}}}(1-{p}_{det}^{{\rm{A}}})(1-{p}_{det}^{{\rm{B}}})\}\\ {p}_{\uparrow \downarrow }= & \alpha (1-\alpha )[{(1-{p}_{{\rm{d}}{\rm{c}}})}^{2}{p}_{det}^{{\rm{A}}}+2{p}_{{\rm{d}}{\rm{c}}}(1-{p}_{{\rm{d}}{\rm{c}}})(1-{p}_{det}^{{\rm{A}}})]\\ {p}_{\downarrow \uparrow }= & \alpha (1-\alpha )[{(1-{p}_{{\rm{d}}{\rm{c}}})}^{2}{p}_{det}^{{\rm{B}}}+2{p}_{{\rm{d}}{\rm{c}}}(1-{p}_{{\rm{d}}{\rm{c}}})(1-{p}_{det}^{{\rm{B}}})]\\ {p}_{\downarrow \downarrow }= & 2{(1-\alpha )}^{2}{p}_{{\rm{d}}{\rm{c}}}(1-{p}_{{\rm{d}}{\rm{c}}})\end{array}$$ where V is the visibility of two-photon interference, p dc is the dark-count probability per detector (given by the product of the dark-count rate ν dark and the 25-ns length of the 
detection window), and \({p}_{{\rm{\det }}}^{{\rm{A}}}\) and \({p}_{{\rm{\det }}}^{{\rm{B}}}\) are the probabilities of detecting a photon emitted by node A and node B, respectively. In the limit \({p}_{{\rm{\det }}}\ll 1\) , for balanced detection probabilities \({p}_{{\rm{\det }}}={p}_{{\rm{\det }}}^{{\rm{A}}}={p}_{{\rm{\det }}}^{{\rm{B}}}\) and assuming no other imperfections, ρ tends to equation ( 5 ). The corresponding probability of successfully heralding entanglement is $$\begin{array}{cc}{p}_{{\rm{h}}{\rm{e}}{\rm{r}}{\rm{a}}{\rm{l}}{\rm{d}}}= & (1-{p}_{{\rm{d}}{\rm{c}}})\{\alpha ({p}_{det}^{A}+{p}_{det}^{B}-2{p}_{det}^{A}{p}_{det}^{B}\alpha )\\ & +{p}_{{\rm{d}}{\rm{c}}}[2-3({p}_{det}^{A}+{p}_{det}^{B})\alpha +4{p}_{det}^{A}{p}_{det}^{B}{\alpha }^{2}]\}\end{array}$$ The modelled success rate (plotted in Fig. 2e ) is calculated by dividing p herald by the entangling-attempt duration (5.5 μ s ). We model double excitation (discussed further below) by applying a Pauli Z transformation to each of the NV-centre states with probability p 2ph /2. Phase instability is modelled similarly, by applying a Pauli Z transformation to one of the states with probability $$\frac{1}{2}\left\{1-exp\left[\frac{-{({\nu }_{{\rm{int}}}{t}_{{\rm{p}}})}^{2}-{\sigma }_{{\rm{int}}}^{2}}{2}\right]\right\}$$ where t p denotes the time since phase stabilization. Finally, we model the effect of dynamical decoupling by assuming that it acts as a depolarizing channel for each qubit 27 . We therefore apply single-qubit depolarizing errors with a probability determined by the measured dynamical-decoupling coherence times. For decoupling for a total time of t d , the total probability of a depolarizing error (that is, the application of a Pauli X , Y or Z transformation with an equal probability) is \(3(1-{{\rm{e}}}^{-{t}_{{\rm{d}}}/{T}_{2}})/4\) . 
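Selected pieces of this model can be sketched directly from the expressions above. The parameter values below are illustrative only; the measured set-up parameters are given in Extended Data Table 1:

```python
import numpy as np

def p_herald(alpha, pA, pB, p_dc):
    """Heralding probability per entangling attempt (expression above)."""
    return (1 - p_dc) * (alpha*(pA + pB - 2*pA*pB*alpha)
            + p_dc*(2 - 3*(pA + pB)*alpha + 4*pA*pB*alpha**2))

def p_depol(t_d, T2):
    """Probability of a single-qubit depolarizing error (X, Y or Z
    applied with equal probability) after decoupling for a time t_d."""
    return 0.75 * (1 - np.exp(-t_d / T2))

# With negligible dark counts the heralding probability reduces to
# approximately alpha * (pA + pB); dividing by the 5.5-us attempt
# duration gives the modelled success rate.
alpha, pA, pB = 0.1, 4e-4, 4e-4          # made-up values, for illustration
rate_hz = p_herald(alpha, pA, pB, 0.0) / 5.5e-6
print(round(rate_hz, 1))  # 14.5 (Hz) for these made-up numbers
```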
This model, based only on independently determined parameters (Extended Data Table 1 ), captures the trends of our deterministic entanglement generation data effectively (Fig. 4 ). However, we find that its predictions are slightly offset from the experimental measurements, suggesting that it does not include a small source of infidelity that is present in the experimental data. One potential origin of this discrepancy could be the number of entanglement attempts made after NV-centre state verification, which is up to two orders of magnitude larger here than in previous experiments 9 , 10 . Any additional sources of infidelity that may occur over this period (for example, owing to the passive charge-state stabilization process, discussed further below) are not included in the model. A detailed study of these potential imperfections is outside the scope of this work. Nonetheless, as an estimate of the order of this effect, we find that a small systematic correction of 3% to the heralded entangled-state fidelity is sufficient to effectively match our model to the data (Extended Data Fig. 4 ). Passive charge-state stabilization of individual NV centres The negatively charged NV centre (NV − ) can be ionized under optical illumination via a two-photon absorption process 35 . Owing to the different level structure of the neutral charge state NV 0 , the NV centre will remain dark if such an ionization event occurs during one of our entangling attempts. Ionization therefore hampers the performance of our deterministic entangling protocol by diminishing the success rate and by leading to the delivery of a separable state upon success. Previous experiments 14 with NV centres that worked in the regime of probabilistically generated yet heralded remote entanglement overcame NV-centre ionization by frequent charge-state verification between protocols and by actively converting the NV centre back to NV − via interleaved resonant excitation of the optical transitions of NV 0 . 
Such active stabilization protocols would require additional logical overhead in our scenario, where entanglement is generated deterministically. Instead, we passively stabilize the charge state during our entangling sequence by shining in an additional weak laser beam that is resonant with the optical transition of NV 0 (Extended Data Fig. 5 ). This provides negligible disturbance to the spin-initialization fidelity of NV − while bringing the NV centre back into NV − if it was converted to NV 0 . We additionally identify that the optical reset beam (duration, 1.5 μs) is the main cause of ionization in our system and carefully balance the power of both beams so that the spin state is still well initialized and that ionization is a negligible process for our deterministic entangling protocol (up to 15,000 entangling attempts). Reducing the applied power further by elongating the spin-reset duration would decrease the entanglement rate and limit our quantum link efficiency. Extended Data Fig. 5 depicts the basic element that, in repetition, forms our sequence to probe the ionization rate. We use simultaneous charge- and spin-reset beams followed by a single microwave π rotation that brings the NV centre into \(\left|\downarrow \right\rangle \) and thus guarantees optical excitation during the next round. The NV centre is then read out after a final optical reinitialization into the bright state \(\left|\uparrow \right\rangle \) . By increasing the number of sequence repetitions, we observe a decay in the final readout fidelity that is associated with the ionization rate. By increasing the optical intensity of the charge-state reset beam, we obtain a negligible decay as a function of sequence repetitions, allowing us to overcome ionization in our deterministic entangling protocol. 
The illumination strength of the charge-reset beam is weak enough to avoid inducing noticeable spectral diffusion of the NV-centre emission; our measured entangled states are consistent with a high degree of indistinguishability for both NV-centre emission profiles (discussed further below). Optical-phase stabilization The single-photon entanglement experiment requires knowledge of the optical phase of an effective interferometer between the two nodes (Fig. 2 ): the phase difference between its paths must be known to ensure that the heralded entangled states are available for further use. This is achieved by interleaving periods of optical-phase stabilization with our entanglement generation. For phase stabilization we input bright laser light at the same frequency as, but orthogonally polarized to, the light used for excitation of the NV centres. The orthogonal polarization is chosen because we use a crossed polarizer to filter out the excitation light from the NV-centre emission. Using orthogonally polarized light for phase stabilization allows us to collect more light reflected from the diamond substrate. Before doing this, we verified that there is no measurable difference in the relative phase of the two polarizations within our interferometer. Measurements of the phase drift (Extended Data Fig. 6a ) show a slow drift on second timescales, but also several strong resonances at hundreds of hertz (Extended Data Fig. 6b ). These resonances are thought to be from mechanical elements in the path of the beam, including the microscope-objective mount. As we were unable to completely suppress these resonances in the current set-ups, we need to measure the phase over a complete oscillation to estimate the mean phase reliably. The phase must therefore be measured for approximately 10 ms. We calculate an estimate of the phase from the counts detected at the heralding single-photon detectors. 
This estimate is used to adjust the phase back to our desired value using a custom-built piezoelectric fibre stretcher and a proportional-integral-derivative routine within our ADwin microcontroller. We find that it takes two to three proportional-integral-derivative cycles to stabilize the phase optimally. We stabilize the phase for three cycles during the single-photon entanglement experiment and for two cycles during the deterministic entanglement experiment. This difference is because phase stabilization occurs during every cycle of the deterministic entanglement-delivery experiment (about 100 ms), whereas it occurs only every 180 ms during the single-photon entanglement experiment, and so the phase drifts slightly less after one experimental cycle. We achieve an average steady-state phase stability of 14.3(3)°, as measured by calibration routines spaced throughout the measurement of our dataset (Extended Data Fig. 6c, d ). This stability is limited by the previously identified mechanical oscillations of the optical elements in our experimental set-up. The standard deviation of the phase averaged over a 10-ms period during active stabilization is 4.8(1)°. Optical phase stabilization is also likely to be feasible for long-distance network links. Using long-wavelength off-resonant light for phase measurements would enable continuous stabilization during entanglement attempts with a negligible effect on the NV-centre state. An experimental study 22 has shown that two network nodes separated by 36 km over a commercial fibre network would still allow for interference visibilities of 99%. For longer distances, it would also be possible to track the phase passively at the time of entanglement delivery and feed this information back to the nodes in which the state is stored, requiring only a coherence time longer than the communication time. 
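The stabilization loop can be summarized in a short sketch. The class below is purely illustrative: the gains, the actuator interface and the convergence behaviour are hypothetical, not the routine running on the ADwin hardware.

```python
# Illustrative proportional-integral-derivative (PID) phase lock.
# All names and gain values here are our own assumptions.
class PhaseLock:
    def __init__(self, kp=0.6, ki=0.2, kd=0.05, setpoint_deg=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint_deg
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, measured_phase_deg):
        """One PID cycle: returns the correction (in degrees) that would
        be applied via the fibre stretcher."""
        err = self.setpoint - measured_phase_deg
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy closed loop: a 30-degree offset is pulled back to the setpoint.
lock, phase = PhaseLock(), 30.0
for _ in range(20):
    phase += lock.update(phase)  # actuator shifts the interferometer phase
print(abs(phase) < 1.0)  # True: converges close to 0 within a few cycles
```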
Two-photon quantum interference The quality of photon-mediated heralded entanglement between two emitters hinges on the indistinguishability of their emitted photons. We probe this indistinguishability by interfering emitted single photons on a beam splitter and measuring the number of events in which single-photon detectors connected to the output ports of the beam splitter both detect a photon. For completely indistinguishable single photons, Hong–Ou–Mandel interference ensures that both photons always exit from the same port of the beam splitter, so no coincident events should be detected. Our two-photon quantum interference experiment proceeds by exciting each emitter with a series of well-separated optical excitation pulses (separated by 1 μs). We collect statistics on coincidence events in which one detector registers a photon after one excitation pulse and then the other registers a photon after a later excitation pulse. For an infinite pulse train, the number of coincidence events detected for each number of pulses between the detection events should be constant. However, for a finite pulse train, there are some pulses for a given pulse separation for which there is no partner excitation pulse and therefore no coincident events will be detected. This leads to a linearly decreasing number of coincidence events as a function of pulse difference (Extended Data Fig. 7a ). We use a linear fit to the coincidence events to infer the number of coincidences that would be detected from the same pulse (pulse difference of zero) if fully distinguishable single photons were input (Extended Data Fig. 7b ). Because these are single photons, a counting argument shows that, for balanced emission probabilities from each emitter, the expected number of events is half of the value of the linear fit at zero pulse difference. 
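Assuming balanced emission probabilities, the counting analysis just described (linear fit to the coincidence histogram, extrapolation to zero pulse difference, halving, then taking the ratio to the measured same-pulse counts r, with wavefunction overlap V = 1 − r) can be sketched as follows. The histogram values are made up for illustration:

```python
import numpy as np

def hom_visibility(pulse_diff, coincidences, zero_pulse_counts):
    """Estimate two-photon interference visibility from a coincidence
    histogram (sketch of the analysis described above, assuming
    balanced emission probabilities).

    A linear fit to coincidences at non-zero pulse differences is
    extrapolated to zero; for fully distinguishable single photons,
    half that value is expected within the same pulse.  V = 1 - r,
    with r the ratio of measured to expected same-pulse counts.
    """
    slope, intercept = np.polyfit(pulse_diff, coincidences, 1)
    expected_distinguishable = 0.5 * intercept  # fit value at diff = 0
    r = zero_pulse_counts / expected_distinguishable
    return 1.0 - r

# Illustrative numbers: a linearly decreasing histogram (finite pulse
# train) and a strongly suppressed same-pulse coincidence count.
diffs = np.arange(1, 11)
counts = 200 - 10 * diffs          # ideal linear fall-off, no noise
V = hom_visibility(diffs, counts, zero_pulse_counts=10.0)
print(round(V, 2))  # 0.9 for these made-up counts
```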
The ratio r between the measured number of coincident events within the same pulse and the expected number of events for fully distinguishable photons is related to the single-photon wavefunction overlap \(V={\left|\left\langle {\psi }_{{\rm{a}}}| {\psi }_{{\rm{b}}}\right\rangle \right|}^{2}\) by V = (1 − r ) (again for balanced emission probabilities from each emitter). Incorporating the effect of the known imbalance in emission probabilities in our experiment, we find V = 0.90(2). Dephasing of entangled states due to double excitation An optical Rabi pulse is used to excite the NV-centre nodes to a higher level via a spin-conserving transition. The NV centre subsequently decays back to its original level through spontaneous emission, thereby entangling the spin state of the NV centre and the emitted optical mode. For optical Rabi pulses of finite duration, there is a chance that the NV centre will spontaneously emit a photon during the optical pulse and be re-excited before the end of the pulse. The first emitted photon will be lost to the environment, because it is impossible to distinguish it from the excitation light. However, if the subsequent emitted photon is detected in this double-excitation process, this will falsely herald entanglement. We measured the width of our optical pulse (Extended Data Fig. 8 ) and used a quantum-jump-based simulation to calculate the corresponding double-excitation probability. Given that the NV centre emitted a photon within the detection window, the probability that double excitation occurred is p 2ph = 0.04. State storage via dynamical decoupling The coherence time of NV centres is limited by interactions with other magnetic impurities. In our samples, the dominant source of magnetic field noise is the surrounding bath of slowly fluctuating 13 C nuclear spins (natural abundance of 1.1%), which results in typical coherence times of 5 μs. 
We use dynamical-decoupling ‘XY8’ sequences of the form ( t –π X –2 t –π Y –2 t –π X –2 t –π Y –2 t –π Y –2 t –π X –2 t –π Y –2 t –π X – t ) N /8 to elongate the coherence times of both NV centres (Fig. 3 ), with microwave inversion pulses π, the waiting time t and the number of pulses N . Each decoupling duration is obtained by a suitable combination of t and N . We find the optimal combination for a targeted protection duration of about 100 ms by varying t for a fixed N = 1,024. We choose N = 1,024 because the infidelity introduced from inversion-pulse errors is moderate for both nodes. Extended Data Fig. 9 shows the results of our decoupling-optimization procedure. We prepare the NV centre in a balanced superposition and choose waiting times that are integer multiples of the inverse 13 C-nuclear-spin Larmor frequency ν L to avoid coupling with the nuclear-spin bath (node A, ν L = 443.342 kHz; node B, ν L = 442.442 kHz). Following previously reported techniques 27 , we further avoid coupling to other magnetic noise sources that result in loss of NV-centre coherence by picking five waiting times with a total variation of 16 ns for each multiple of the inverse Larmor frequency. The data (grey) are then sorted for the waiting time with the best state-preservation quality (blue) at each multiple, giving the minimal NV-centre coherence decay for this number of inversion pulses. We then pick the waiting time that guarantees a low number of inversion pulses while still providing high-quality state protection (red). Data availability The data sets generated and analysed during this study are available from the corresponding author on reasonable request. Change history 26 June 2018: In this Letter, the received date should be 20 December 2017, instead of 27 April 2018. This has been corrected online.
Researchers at QuTech in Delft have succeeded in generating quantum entanglement between two quantum chips faster than the entanglement is lost. Via a novel smart entanglement protocol and careful protection of the entanglement, the scientists led by Prof. Ronald Hanson are the first in the world to deliver such a quantum link on demand. This opens the door to connecting multiple quantum nodes and creating the very first quantum network in the world. Their results are published in Nature. By exploiting the power of quantum entanglement, it is theoretically possible to build a quantum internet invulnerable to eavesdropping. However, the realization of such a quantum network is a real challenge—it is necessary to create entanglement reliably on demand, and maintain it long enough to pass the entangled information to the next node. So far, this has been beyond the capabilities of quantum experiments. Scientists at QuTech in Delft are now the first to experimentally generate entanglement over a distance of two metres in a fraction of a second, on demand, and to maintain this entanglement long enough, in principle, to enable entanglement with a third node. "The challenge is now to be the first to create a network of multiple entangled nodes—the first version of a quantum internet," Professor Hanson says. In 2015, Ronald Hanson's research group was the first to generate long-lived quantum entanglement over a long distance (1.3 kilometres), allowing them to provide full experimental proof of quantum entanglement for the first time. This experiment is the basis of their current approach to developing a quantum internet. Distant single electrons on diamond chips are entangled using photons as mediators. However, this experiment did not have the performance necessary to create a real quantum network. Hanson says, "In 2015, we managed to establish a connection once an hour, while the connection only remained active for a fraction of a second. 
It was impossible to add a third node, let alone multiple nodes, to the network." Researchers from QuTech in Delft working on the 'entanglement on demand' experiment. From left to right: Prof. Ronald Hanson, Dr. Peter Humphreys and Dr. Norbert Kalb, all from the group of Prof. Ronald Hanson of Delft University of Technology. Credit: TU Delft/Marieke de Lorijn The scientists have now made multiple innovative improvements to the experiment. First of all, they demonstrated a new entanglement method. This allows for the generation of entanglement 40 times a second between electrons at a distance of two metres. Co-author Peter Humphreys says, "This is a thousand times faster than with the old method." In combination with a smart way of protecting the quantum link from external noise, the experiment has now surpassed a crucial threshold: for the first time, entanglement can be created faster than it is lost. Through technical improvements, the experimental setup is now always ready for entanglement on demand. Hanson says, "Just like in the current internet, we always want to be online; the system has to entangle on each request." The scientists have achieved this by adding smart quality checks. Humphreys says, "These checks only take a fraction of the total experimental time, while allowing us to ensure that our system is ready for entanglement, without any manual action." The researchers demonstrated last year that they were able to protect a quantum entangled link while a new connection was generated. By combining this and their new results, they are ready to create quantum networks with more than two nodes. The Delft scientists now plan to realize such a network between several quantum nodes. Hanson says, "Together with partners such as KPN, we want to connect four cities in the Netherlands by 2020 via quantum entanglement. This will be the very first quantum internet in the world."
10.1038/s41586-018-0200-5
Medicine
Antibiotic restores cell communication in brain areas damaged by Alzheimer's disease
J. K. Hefendehl et al, Mapping synaptic glutamate transporter dysfunction in vivo to regions surrounding Aβ plaques by iGluSnFR two-photon imaging, Nature Communications (2016). DOI: 10.1038/ncomms13441 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms13441
https://medicalxpress.com/news/2016-11-antibiotic-cell-brain-areas-alzheimer.html
Abstract Amyloid-β (Aβ) plaques, a hallmark of Alzheimer’s disease (AD), are surrounded by regions of neuronal and glial hyperactivity. We use in vivo two-photon and wide-field imaging of the glutamate sensor iGluSnFR to determine whether pathological changes in glutamate dynamics in the immediate vicinity of Aβ deposits in APPPS1 transgenic mice could alter neuronal activity in this microenvironment. In regions close to Aβ plaques, chronic states of high spontaneous glutamate fluctuations are observed, and glutamate responses evoked by sensory stimulation exhibit slower decay rates in two cortical brain areas. GLT-1 expression is reduced around Aβ plaques, and upregulation of GLT-1 expression and activity by ceftriaxone partially restores glutamate dynamics to the values measured in control regions. We conclude that the toxic microenvironment surrounding Aβ plaques results, at least partially, from enhanced glutamate levels and that pharmacologically increasing GLT-1 expression and activity may be a new target for early therapeutic intervention. Introduction Alzheimer’s disease (AD) is a progressive neurodegenerative disorder and, with no cure available, it imposes a major burden on society. The pathological hallmarks of AD include the extracellular accumulation of amyloid plaques and intracellular protein inclusions known as neurofibrillary tangles 1 , 2 , 3 , 4 . Positron emission tomography imaging using Pittsburgh compound B has shown that cerebral Aβ plaque deposition is an early and predictive marker for the progression of preclinical to symptomatic AD 5 , 6 . Recent publications have revealed many interesting aspects of cell dysfunction in relation to Aβ deposits 7 , 8 , 9 , 10 , 11 , 12 . A current working hypothesis is that soluble Aβ adheres and combines with resident Aβ plaques leading to progressive plaque growth 13 , 14 . 
Although plaques consist of insoluble Aβ, each plaque may create a toxic microenvironment because soluble species continuously bind and unbind from its surface 7 , 8 . The toxic microenvironment hypothesis is supported by observations of neuronal dysfunction and decreased spine density in the immediate regions surrounding Aβ plaques 15 , 16 . Furthermore, hyperactive neurons, microglia and astrocytes with elevated intracellular calcium levels have been reported in the close proximity of Aβ plaques 9 , 10 , 12 , 17 , 18 . A shortcoming of the amyloid cascade hypothesis is that there are no clear connections between the onset of plaque deposition and the progression of cognitive impairment and neuronal loss 19 , 20 . In contrast, there are stronger correlations with the levels of soluble Aβ species 21 , 22 , 23 and cognitive impairment as well as neuronal loss. In this study we investigated whether changes in glutamate dynamics in the microenvironment surrounding Aβ deposits could contribute to neuronal dysfunction preceding the onset of neuronal death. This may establish an early link between Aβ toxicity and neuronal dysfunction. A genetically encoded fluorescent glutamate indicator, iGluSnFR, was used to detect the presence and time course of glutamate dynamics in vivo with two-photon and wide-field fluorescence microscopy 24 . It allows for unprecedented spatial and temporal resolution of local glutamate concentration on a subsecond timescale. Rapid glutamate uptake from the synaptic cleft, principally by the astrocyte glutamate transporter excitatory amino-acid transporter 2 (EAAT2 in humans or the homologue glutamate transporter 1 (GLT-1) in mice) 25 permits precise and fast synaptic transmission. 
Impairment of glutamate clearance in the extracellular space in AD 26 , 27 , 28 might underlie the reported upregulation of Ca 2+ signalling in neurons and astrocytes in the microenvironment of Aβ deposits and could contribute to excitotoxicity, which eventually leads to cell death. EAAT2 activity has been reported to be significantly reduced in early stages of AD, correlating well with cognitive decline in AD patients 29 . Moreover, studies of a heterozygous GLT-1 knockdown in an AD mouse model showed exacerbated cognitive decline, further supporting the theory that dysfunction of the astrocyte glutamate transporter is involved in AD pathogenesis 30 , 31 . Recent in vitro results suggest that Aβ species are responsible for the loss of GLT-1 expression as a part of their toxicity to astrocytes 32 . Therefore, we treated APPPS1 mice with ceftriaxone to determine whether GLT-1 expression and/or transporter activity could be upregulated around plaques and whether this would restore normal glutamate dynamics in this region. Our results indicate that ceftriaxone partially restores glutamate dynamics and reduces the chronically elevated levels of glutamate. Thus, it reduces the pathological impact of Aβ deposits and surrounding prefibrillary forms of Aβ.

Results

Wide-field imaging of glutamate dynamics

We first ensured that changes in iGluSnFR fluorescence in mouse brains injected with iGluSnFR ( Fig. 1a ) were due to evoked synaptic release by examining the transient changes in fluorescent intensity evoked by visual or hindlimb stimulation (APPPS1 mice n =9; wild-type (WT) mice n =9). Figure 1b shows iGluSnFR expression in the hindlimb area of the somatosensory cortex. The complete cranial window ( Fig. 1c , ∼ 3.5 mm diameter) was used to analyse the stimulus-evoked signal. Wide-field imaging of the prestimulus baseline controls followed by the increase in iGluSnFR fluorescence on hindlimb stimulation and subsequent recovery to baseline levels is shown in Fig. 1d–f .
The respective trace of the alterations in Δ F / F 0 shows a clear increase on stimulation of the hindlimb ( Fig. 1g , for video see Supplementary Video 1 ).

Figure 1: Wide-field imaging of glutamate dynamics does not show significant changes. ( a ) Intracortical injection of AAV.hSynapsin.iGluSnFR in somatosensory cortex area coding for hindlimb ( n =9 transgene positive animals, 7 age-matched WT animals) followed by a cranial window surgery and mechanical hindlimb stimulation. Animals were head fixed and anaesthetized with isoflurane (1%) during imaging. ( b ) Pattern of iGluSnFR expression in the somatosensory cortex after viral injection. ( c ) The complete cranial window (3.5 mm diameter) was used for the analyses. A stimulus-locked response was detected on hindlimb stimulation ( e ) with baseline levels shown before and after stimulation ( d , f ). The respective trace of Δ F / F also displaying the response on stimulation is shown in g . Mean of all recorded traces in APPPS1 ( h ) and WT animals ( i ). ( j , k ) However, at this scale no difference between APPPS1 and WT animals could be detected in the measured area under the response ( P =0.09) or in the percentage change of the glutamate response ( P =0.56). Error bars indicate s.e.m.

Analysing the overall changes in iGluSnFR fluorescence with wide-field imaging showed no significant differences between the APPPS1 and WT animals, in area under the curve of the primary peak or the maximum peak amplitude of the iGluSnFR signal ( Fig. 1h–k ). This indicates that, at the mesoscale ( Fig. 1a , ∼ 3.5 mm), glutamate dynamics appear relatively normal, although there was a trend to decreased signals in the APPPS1 mice. A secondary peak was observed in a much wider cortical area than the primary initial peak, which was dominant at the centre of the area responding to the stimuli.
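The peak-amplitude and area-under-the-response comparisons described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the function names and the simple rectangular integration are our own assumptions.

```python
import numpy as np

def dff(trace, baseline_frames):
    """Convert a raw fluorescence trace to dF/F0 using the pre-stimulus baseline."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

def response_metrics(trace, baseline_frames, frame_rate_hz):
    """Peak amplitude and area under the evoked response of a dF/F0 trace.

    The area is a simple Riemann sum over the post-stimulus frames,
    expressed in (dF/F0) * seconds.
    """
    d = dff(trace, baseline_frames)
    evoked = d[baseline_frames:]
    peak = evoked.max()
    auc = evoked.sum() / frame_rate_hz
    return peak, auc
```

For example, a trace with a baseline fluorescence of 100 and a brief evoked rise to 200 yields a peak ΔF/F0 of 1.0; with real data the baseline window and integration limits would need to match the stimulus protocol.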
The subsequent two-photon laser scanning microscopy (tplsm) experiments were all performed in the centre of the area responding to the stimuli to focus on the primary response peak.

Glutamate dynamics at Aβ plaques in anaesthetized APPPS1

Macroscopic approaches may obscure pathological changes of brain function in regions adjacent to Aβ plaques in AD models. Thus, we next tested our hypothesis that changes in glutamate signalling occur in discrete regions that surround Aβ plaques. To ensure that the detected changes in glutamate were within the centre of the area responding to the stimuli, all animals were mapped for the respective region using wide-field imaging before data acquisition in the tplsm. This way we ensured that the primary response peak was most prominent, and plaques were chosen accordingly within this area. Using tplsm the magnitude and timing of glutamate transients were detected at higher spatial resolution (150 × 150 μm) in the regions surrounding amyloid deposits (APPPS1 mice n =9, WT mice n =9; 3–7 imaged brain areas per mouse; Fig. 2a ). As in the case of wide-field imaging, we first ensured that changes in iGluSnFR fluorescence during tplsm imaging in mice injected with iGluSnFR ( Fig. 2b–e ) were due to evoked synaptic release by examining the transient changes in fluorescent intensity evoked by visual or hindlimb stimulation. Furthermore, to ensure that the detected signal is glutamate dependent, we manipulated the glutamate-signalling pathway using two approaches ( n =2 animals). First, we blocked presynaptic release of glutamate using cadmium chloride (Cd 2+ ), which abolished the stimulus-evoked response ( Supplementary Fig. 1A,C ). Second, threo-beta-benzyloxyaspartate (TBOA) was applied to block the uptake of glutamate, which resulted in prolonged glutamate responses and a higher baseline fluorescence ( Supplementary Fig. 1B,D ).
To exclude a localized expression difference of iGluSnFR in the direct surroundings of amyloid deposits, we checked expression levels using pure iGluSnFR fluorescence. Figure 2g,h shows a homogeneous expression of iGluSnFR around the Aβ plaque shown in f.

Figure 2: Glutamate dynamics are pathologically altered around Aβ plaques in anaesthetized APPPS1 mice. ( a ) AAV.hSynapsin.iGluSnFR in somatosensory cortex area (APPPS1 mice n =9, WT mice n =9; 3–7 imaged brain areas per mouse). ( b ) A stimulus-locked response was detected on hindlimb stimulation ( c ) with baseline levels shown before and after stimulation ( b , d ). A merge of the stimulus response and a Methoxy_X04-stained amyloid deposit is shown in e . ( f ) Methoxy_X04-stained amyloid deposit. ( g ) iGluSnFR expression shown in green around the Aβ plaque shown in f . ( h ) Merge of amyloid deposit and iGluSnFR. ( i – n ) Glutamate fluctuations were imaged using tplsm and are displayed in r.m.s. maps. WT animals ( i ) and APPPS1 animals that were imaged in areas where no plaques were found ( j ) show a smooth r.m.s. map with no regional differences. ( k ) APPPS1 animals show high regional differences in the r.m.s. map with altered glutamatergic activity near the plaque. Numbered boxes indicate analysed ROIs with increasing distance to the plaque edge. ( l ) Methoxy_X04-stained Aβ plaque. ( m ) The highest average r.m.s. was measured in the vicinity of Aβ plaques. ( n ) Merged image of plaque ( l ) and r.m.s. map ( m ). ( o ) Traces representing glutamate dynamics differ significantly in relation to Aβ plaque distance. ( p ) The average r.m.s. of Δ F / F is significantly different in ROI1 in comparison with all other ROIs and WT animals. ( q ) Comparison of average baseline fluorescence in ROIs 1 and 4. ROI1 shows a significantly higher level of relative baseline fluorescence in comparison with ROI4.
( r ) The maximal response to the stimulation was detected in the ROI imaged farthest away from the Aβ plaque (ROI4), which was not significantly different from the responses detected in WT animals. ( s ) A significant difference in the area under the response between ROI3 and ROI4 was detected. No difference for the area under the response was found when comparing ROI4 with WT animals. ( t – v ) The decay rates of glutamate calculated for ROIs 3 and 4, as well as WT animals, are significantly different. ( t ) Normalized glutamate dynamics on stimulation in per cent change is shown to illustrate the significantly different decay rates between ROIs 3 and 4. The rate of decay significantly increased from ROI4 to ROI3, indicating that glutamate is not taken up with the same efficiency when approaching the plaque. ( u , v ) Comparing the decay rate of glutamate in ROI4 with the one obtained from WT animals, no difference could be detected. Error bars indicate s.e.m. *** P <0.001, ** P <0.01, * P <0.05.

The characteristics of unsynchronized, local and spontaneous glutamate transients were investigated by quantifying the magnitude of signal fluctuations at each pixel over time and by calculating the root mean square (r.m.s.) of the iGluSnFR signals. Alterations in r.m.s. signals representing glutamate fluctuations in different regions around the plaques were compared, and r.m.s. variations are illustrated as colour-coded heat maps ( Fig. 2i–k,m,n ). R.m.s. heat maps from APPPS1 animals in imaging locations adjacent to plaques show clear regional differences ( Fig. 2k,m,n ). The highest fluctuations in glutamate dynamics, and thus the highest average r.m.s., were measured in the direct vicinity of the plaque (within a distance of 20 μm from the outer edge of the plaque, Fig. 2k,m,n ), indicating that the r.m.s. signals are greater the closer the region is to the plaque border. This region also exhibited the highest values within the obtained r.m.s.
maps and iGluSnFR fluctuations of Δ F / F 0 indicating the highest spontaneous glutamate levels ( Fig. 2m,o ). In contrast to these local alterations in glutamate dynamics, WT animals display a low and relatively homogeneous average r.m.s. throughout the imaged field of view ( Fig. 2i ). This indicates that no prolonged presence of extracellular glutamate transients could be detected in WT animals. Interestingly, when spontaneous glutamate fluctuations were imaged over time at locations in APPPS1 mice in which there were no amyloid deposits, a low and homogeneous average r.m.s. was measured, similar to that observed in WT animals ( Fig. 2j ). Therefore, the largest dynamic changes in spontaneous glutamate concentrations are locally restricted to areas adjacent to amyloid deposits. Glutamate transients evoked by sensory stimulation also showed marked differences in the regions close to amyloid deposits. Hindlimb stimulation evoked a transient glutamate signal from synaptic release that had similar characteristics in somatosensory cortex in WT mice and in APPPS1 mice in regions distant from plaques (compare ROI4 with WT in Fig. 2o,u ). The secondary lower amplitude glutamate peak observed broadly in the wide-field imaging was not usually apparent in two-photon imaging around plaques. The source of this difference is unknown but might be caused by imaging different volumes of tissue as mentioned in the Methods section. However, to ensure that the differences between wide-field imaging and tplsm were not due to the difference in scan speed we selected a smaller region of 16 × 128 pixels to reach a scan speed of 57 Hz. We found no differences in the rate of glutamate decay, r.m.s., area under the response or maximum response in comparison with the slower scan speed of 5.92 Hz ( Supplementary Fig. 2A–F ). Furthermore, the response curves also only show one primary response peak. 
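The per-pixel r.m.s. quantification behind the heat maps described above can be sketched as below. This is a minimal version under our own assumptions (F0 taken as each pixel's temporal mean), not the authors' implementation.

```python
import numpy as np

def rms_map(stack):
    """Per-pixel r.m.s. of dF/F0 fluctuations for a movie of shape (t, y, x).

    F0 is taken as each pixel's temporal mean, so the r.m.s. of
    (F - F0) / F0 measures the magnitude of spontaneous fluctuations
    around baseline at every pixel.
    """
    f0 = stack.mean(axis=0)
    d = (stack - f0) / f0
    return np.sqrt((d ** 2).mean(axis=0))

def roi_mean_rms(rmap, mask):
    """Average r.m.s. inside a boolean ROI mask, e.g. one of ROIs 1-4."""
    return rmap[mask].mean()
```

A constant pixel yields an r.m.s. of zero, while a pixel oscillating ±10% around its mean yields 0.1; regional averages over ROI masks then give the per-ROI comparisons reported in the figures.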
Discrete regions of interest (ROIs) at increasing distances from the boundary of methoxy_X04-stained Aβ plaques were analysed separately to characterize the response alterations with respect to the distance from the plaque. The greatest fluctuations in spontaneous glutamate transients, quantified as r.m.s., were observed in ROI1 in comparison with all other ROIs ( Fig. 2p ). However, hindlimb stimulation did not evoke glutamate transients in ROI1 or 2, whereas time-locked transients were observed in ROI3 and ROI4 ( Fig. 2o,r,t ). To exclude the possibility that a stimulus-evoked glutamate response was hidden in the high fluctuation in ROI1, the hindlimb stimulation was repeated using 100 trials. The average r.m.s. of Δ F / F was still significantly higher in ROI1 in comparison with all other ROIs, and the mean of all measured glutamate traces still did not result in a stimulus-locked response in ROIs 1 and 2 ( Supplementary Fig. 3 ). This suggests that the high degree of spontaneous glutamate dynamics measured as r.m.s. signals in ROI1 was likely due to chronic but spontaneous elevations in glutamate within this area. To test this hypothesis, we measured the baseline fluorescence of iGluSnFR in ROIs 1 and 4. ROI1 showed significantly higher baseline fluorescence in comparison with ROI4, supporting the hypothesis that the detected fluctuations are the result of chronically higher levels of glutamate ( Fig. 2q ). The first stimulus-locked response was detected in ROI3, which is located 40–60 μm from the edge of a given amyloid deposit. The maximum stimulation-evoked glutamate response in ROI3 was lower and the area under the response was larger than in ROI4 ( Fig. 2r,s ). In contrast, ROI4 did not show any significant differences in these parameters in comparison with WT animals ( τ =0.28 s, s.e.m.=0.02; Fig. 2r,s ).
The characteristic decay time constant (tau) of the stimulation-evoked glutamate response in ROI3 ( τ =0.66 s, s.e.m.=0.04) was significantly slower than that obtained in ROI4 ( τ =0.23 s, s.e.m.=0.02; Fig. 2t ). The decay time constant in ROI4 (80–100 μm plaque distance) did not differ from that of WT ( Fig. 2u ). These results indicate that the characteristics of the sensory stimulation-evoked glutamate transients progressively changed closer to the plaque until evoked signals were lost in ROIs 1 and 2. The increased decay time constant (tau) of the iGluSnFR signal in ROI3 versus the more distal ROI4 ( Fig. 2t,v ) suggested that glutamate transients were prolonged and clearance reduced within this region. These results clearly demonstrate that the characteristics of glutamate dynamics varied depending on distance from the plaque. It is important to note that, even in the transgene negative animals, the iGluSnFR response to stimulation is not uniform throughout the field of view, including the region of the peak response. Thus, to ensure that the observed local differences in the pattern of the responses were not obtained simply by chance due to the non-uniform nature of the iGluSnFR signal, we randomly generated a position to serve as the plaque location and placed the four ROIs at the correct distance from the ‘plaque’ location on transgene negative animal recordings (also chosen at random from amongst our data set). This process was repeated for 500 artificial plaque locations and the ROI responses were then averaged ( Supplementary Fig. 4D , n =9 animals). We measured the average responses in all ROIs and found no difference between the maximum response amplitudes or the decay constants between ROIs ( Supplementary Fig. 4B,C ). This analysis demonstrates that plaques are indeed surrounded by ROIs with altered response amplitudes and decay times in APPPS1 animals.
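The artificial-plaque control above can be sketched as a spatial bootstrap: drop a random 'plaque' centre into a transgene-negative response map, average the signal in the same distance bands used for ROIs 1–4, and repeat. The band edges, helper names and the assumption that every band intersects the field of view are our own; the paper does not publish its code.

```python
import numpy as np

# Distance bands (um from the artificial plaque centre) mirroring ROIs 1-4
ROI_BANDS = [(0, 20), (20, 40), (40, 60), (80, 100)]

def annulus_mask(shape, center, r_in, r_out, um_per_px=1.0):
    """Boolean mask of pixels whose distance from `center` lies in [r_in, r_out)."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1]) * um_per_px
    return (r >= r_in) & (r < r_out)

def shuffled_control(amplitude_map, n_shuffles, rng):
    """Average band responses around randomly placed artificial plaques.

    If regional differences arise only from the non-uniform iGluSnFR
    signal, the averaged band means converge to similar values for all
    bands. Assumes every band intersects the field of view.
    """
    means = np.zeros(len(ROI_BANDS))
    for _ in range(n_shuffles):
        c = (rng.integers(0, amplitude_map.shape[0]),
             rng.integers(0, amplitude_map.shape[1]))
        for i, (r0, r1) in enumerate(ROI_BANDS):
            m = annulus_mask(amplitude_map.shape, c, r0, r1)
            means[i] += amplitude_map[m].mean()
    return means / n_shuffles
```

On a spatially uniform response map, all four band means converge to the same value, which is the null outcome the control tests against.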
The observed spontaneous and stimulus-evoked fluctuations of extracellular glutamate might also cause fluctuations of intracellular calcium concentrations in neurons in close proximity to amyloid deposits. To test this hypothesis, we combined glutamate and calcium recordings by co-expressing iGluSnFR with the red calcium indicator jRGECO. Co-excitation of the two reporters was possible using an excitation wavelength of 950 nm. iGluSnFR responses to hindlimb stimulation remained similar in the combined experiment and are shown in Supplementary Fig. 5D ( n =3 animals, 3–4 imaged brain regions). Again no stimulus-locked iGluSnFR responses were observed in ROI1 or ROI2. The return to baseline of the signal in ROI3 is notably longer than in ROI4. Calcium transients recorded in the same field of view simultaneously with the iGluSnFR signal showed responding cells (somas) in all regions ( Supplementary Fig. 5C ). Such sensory-evoked calcium signals have been reported by others 7 . However, it should be noted that the lowest proportion of responding cells was observed in ROI1 and that the fraction of responding somas increased with increasing distance from the plaque. We counted the numbers of responding versus non-responding cells and grouped them into four groups corresponding to the distances of ROIs 1–4, based on the distance between the plaque and the soma. This is shown in Supplementary Fig. 5E . In ROI1 ∼ 20% of cells responded compared with ∼ 65% in ROI4.

Glutamate dynamics around Aβ plaques in awake APPPS1

We examined the characteristics of glutamate release evoked by sensory stimulation in awake animals to ensure that alterations in glutamate uptake around amyloid deposits were not due to anaesthesia (APPPS1 mice n =7, WT mice n =7; 3–7 brain areas per mouse). We examined the visual cortex in awake animals, as this would avoid interfering signals from constant limb movement, which would obscure somatosensory stimulation-evoked changes.
Tplsm of iGluSnFR was repeated in the primary visual cortex of awake APPPS1 mice and WT controls. Light stimulation was used to evoke glutamate transients in the visual cortex, and the same analysis performed in the somatosensory cortex was repeated. Similar disruptions in glutamate dynamics were observed close to Aβ plaques, even though larger maximum responses to stimulation and higher r.m.s. values of Δ F / F 0 were observed in awake animals compared with anaesthetized animals. R.m.s. maps of iGluSnFR signals in awake APPPS1 animals exhibited large regional differences in their average intensity ( Fig. 3c,d ) with again the largest fluctuations observed in ROI1 ( Fig. 3g ). In WT animals the average r.m.s. map was relatively homogeneous throughout the field of view ( Fig. 3e ). As observed during the hindlimb stimulation experiments in the anaesthetized animals, there were also qualitative differences in the stimulus-locked response in awake mice from visual stimulation at increasing distances from a given amyloid deposit ( Fig. 3f–j , ROIs 1–4). The maximum response ( Fig. 3h ) and the smallest area under curve of the stimulation-evoked response ( Fig. 3i ) were again measured in ROI4 and were not significantly different to data from WT animals. ROI3 displayed a smaller maximum response and a significantly larger area under the peak response in comparison with ROI4 or WT animals ( Fig. 3h,i ). In addition, changes in decay constant were observed such that ROI3 ( τ =0.64 s, s.e.m.=0.07) showed a significantly longer decay constant of the iGluSnFR signal in comparison with ROI4 ( τ =0.25 s, s.e.m.=0.07) or WT animals ( τ =0.14 s, s.e.m.=0.03; Fig. 3j–l ). The decay rate in ROI4 (80–100 μm plaque distance) did not differ from the one in WT animals ( Fig. 3l ).

Figure 3: Glutamate dynamics are altered due to Aβ plaque presence in awake APPPS1 mice. ( a ) Intracortical injection (i.c.)
of AAV.hSynapsin.iGluSnFR in primary visual cortex area (APPPS1 mice n =7, WT mice n =7; 3–7 imaged brain areas per mouse). ( b ) Methoxy_X04-stained amyloid deposit. Scale bar, 50 μm. ( c ) APPPS1 animals show significant alterations in glutamatergic activity near the plaque. ( d ) Merge of amyloid deposit and r.m.s. map of plaque shown in b with boxes indicating analysed ROIs 1–4. ( e ) WT animals display a lower average r.m.s. with no region-specific differences. ( f ) Traces representing glutamate dynamics differ significantly in relation to the distance from Aβ plaques. ( g ) The average r.m.s. of Δ F / F is significantly different in ROI1 in comparison with all other ROIs and WT animals. ( h ) The maximal response to the stimulation was detected in the ROI imaged farthest away from the Aβ plaque (ROI4), which was not significantly different from the responses detected in the WT animals. ( i ) After normalization a significant difference in the area under the response between ROI3 and ROI4 was detected. No difference was found when comparing ROI4 with WT animals. ( j ) The decay rates of glutamate calculated for ROIs 3 and 4, and WT animals are significantly different. ( k ) Normalized glutamate dynamics on stimulation in per cent change is shown to illustrate the significantly different rates of decay between ROIs 3 and 4. The rate of decay significantly increased from ROI4 to ROI3, indicating that glutamate is not taken up with the same efficiency when approaching the plaque. ( l ) Comparing the decay rate of glutamate in ROI4 to the one obtained from WT animals, no difference could be detected. Error bars indicate s.e.m. *** P <0.001, ** P <0.01, * P <0.05.

Chronic elevation of glutamate fluctuation around Aβ plaques

As no stimulus-locked response was measured in ROI1 but instead a high average r.m.s.
was consistently observed in these experiments in ROI1, we repeated the measurements of spontaneous glutamate fluctuations in the somatosensory and visual cortex regions in the same set of animals to determine whether r.m.s. maps during spontaneous recordings displayed higher fluctuations around amyloid deposits in ROI1 ( Fig. 4b,c ) in the same APPPS1 animals (for visual cortex: APPPS1 mice n =7, WT mice n =7, 3–7 brain areas per mouse; for somatosensory cortex: APPPS1 mice n =9, WT mice n =9, 3–7 brain areas per mouse). WT animals again displayed a relatively homogeneous average r.m.s. throughout the imaged field of view ( Fig. 4d ). In both cortical areas analysed, the average r.m.s. of Δ F / F 0 of ROI1 was higher in comparison with all other regions and WT animals ( Fig. 4e,f ). Owing to the higher average baseline fluorescence of glutamate that was found ( Fig. 2q ), we conclude that a chronically elevated local level of glutamate fluctuation was detected. This chronically altered state of spontaneous glutamate transients appears to depend on the proximity of amyloid deposits.

Figure 4: Spontaneous recordings show chronically changed glutamate dynamics. Intracortical injection of AAV1.Syn.Flex.NES-jRGECO1a.WPRE.SV40 in somatosensory or visual cortex area (for visual cortex: APPPS1 mice n =7, WT mice n =7, 3–7 imaged brain areas per mouse; for somatosensory cortex: APPPS1 mice n =9, WT mice n =9, 3–7 imaged brain areas per mouse) followed by a cranial window surgery and mechanical hindlimb or visual stimulation. Animals were head fixed and anaesthetized with isoflurane (1%) during imaging. Methoxy_X04 was injected intraperitoneally 24 h before imaging to visualize amyloid deposits at an excitation wavelength of 800 nm. Scale bar, 20 μm. ( a ) Methoxy_X04-stained Aβ plaque. ( b ) The average r.m.s. of the magnitude of spontaneous glutamate dynamics over a 2.82 min period (Δ F / F ) is shown in false colours and represents the entire imaging period.
( c ) Merge of methoxy_X04-stained amyloid deposit and respective r.m.s. map. ( d ) WT animals display a lower average r.m.s. with no region-specific differences. ( e , f ) In both anaesthetized and awake trials in the two cortical areas, the r.m.s. of Δ F / F in spontaneous recordings of all ROIs is still significantly different in ROI1 in comparison with all other ROIs and WT animals. This suggests that the glutamate dynamics are chronically altered in the microenvironment of the plaque and are not purely dependent on a stimulus. ( g – i ) Immunohistological staining of APPPS1 mice ( n =4) showing a methoxy_X04-stained amyloid plaque ( g ) and prefibrillar forms of Aβ in a 20 μm radius around the plaque stained by hFTAA ( h ). ( i ) The merged image shows the conformational differences of the detected Aβ species by methoxy_X04 (mature amyloid) and hFTAA (prefibrillar and mature forms of Aβ). *** P <0.001, ** P <0.01, * P <0.05.

To further investigate changes in the microenvironment of amyloid plaques, we examined whether prefibrillary forms of Aβ or higher orders of oligomeric Aβ are found in the immediate regions adjacent to, but not part of, the actual plaques, potentially causing the chronic changes. A novel group of luminescent conjugated poly- and oligothiophenes have been used to stain and detect conformational differences in amyloid structure 33 . Most interestingly, the heptameric oligo-thiophene hFTAA (hepta-formylthiophene acetic acid) stains both mature amyloid fibrils and early prefibrillar states of Aβ. We immunohistochemically stained fixed brains (APPPS1 n =4 animals) with methoxy_X04, which solely binds to mature amyloid fibrils, in combination with hFTAA. Figure 4g shows the mature amyloid fibrils detected by methoxy_X04. Figure 4h shows a different and more diffuse staining pattern extending beyond the Aβ plaque when using hFTAA. The merged image ( Fig.
4i ) clearly shows a halo of prefibrillar amyloid in the microenvironment of the Aβ plaque ( ∼ 20–30 μm from the plaque edge). Thus, it is possible that the hFTAA-stained halo around plaques may indicate that non-mature forms of Aβ cause the pathological and chronic changes in ROI1 in brain tissue adjacent to Aβ plaques. The presence of prefibrillar amyloid might alter the interstitial space and thus change the diffusion of molecules such as neurotransmitters within this area. Even though it is beyond the scope of our report to account for all possible alterations within the interstitial space, we tested the rate of Alexa 594 diffusion within the Aβ plaque environment by injecting the dye in vivo ( Supplementary Fig. 6 ). The injection and decay of the dye around the plaque is shown in a sequence of three time points ( Supplementary Fig. 6C ). The decay rate of Alexa 594 yielded τ =11.290±0.2 s in ROI1 and τ =11.30±0.2 s in ROI4. Thus, no significant difference in the rate of dye diffusion, as measured by the decay constant of its fluorescence time course, was detected in this experiment.

Reduced GLT-1 expression around Aβ plaques in APPPS1

It was reported that beta-lactam antibiotics, such as ceftriaxone, can significantly and selectively increase the expression and activity of GLT-1 in in vitro and in vivo studies 34 . Most importantly, the neuro-protective effects of ceftriaxone have been shown in middle cerebral artery occlusion-induced focal brain ischaemia, two-vein occlusion-induced ischaemia and transient forebrain ischaemia models 35 , 36 , 37 , 38 , 39 , and they were thought to be related to the upregulation of GLT-1 (refs 35 , 36 , 39 ). Thus, we examined the impact of ceftriaxone in the APPPS1 mouse model.
APPPS1 transgenic mice brains were immunohistochemically stained for GLT-1 to determine whether GLT-1 expression was reduced in the tissue surrounding Aβ plaques (APPPS1 mice n =5, WT mice n =4) visualized in the same sections with methoxy_X04. We observed significant decreases in GLT-1 expression in a 20 μm radius analysed around amyloid deposits ( Fig. 5a–c ) compared with more distant regions. This distribution suggests that the chronically higher levels of glutamate observed within ROI1 were due to reduced glutamate uptake as a result of lower expression of GLT-1. After ceftriaxone treatment, GLT-1 expression in the region adjacent to Aβ plaques was increased and was now similar to GLT-1 expression levels in more distant regions of the brain ( Fig. 5d–f,j ). Vehicle treatment of APPPS1 animals and treatment of WT animals with ceftriaxone did not result in any significant changes within their groups before and after treatment ( Fig. 5j ). The reduced GLT-1 expression around Aβ plaques was not due to an absence of astrocytes, as immunostaining for glial fibrillary acidic protein (GFAP) and the quantification of GFAP-positive pixels (ROI1 versus background) showed that astrocytes were not depleted within the radius close to the plaques ( Fig. 5g–i,k ). Treatment with ceftriaxone or vehicle did not result in any differences in GFAP expression pattern ( Supplementary Fig. 7 ).

Figure 5: Decrease of GLT-1 around Aβ plaques. Treatment with ceftriaxone partially restores expression levels. Immunohistological staining of GLT-1 before and after ceftriaxone treatment (APPPS1 mice n =5, WT mice n =4). ( a ) Methoxy_X04-stained amyloid deposit. ( b ) GLT-1 staining of area surrounding Aβ plaque shown in a displays a local downregulation of the transporter in a 20 μm radius. ( c ) Merged image of Aβ plaque and GLT-1 staining. ( d ) Methoxy_X04-stained amyloid deposit. ( e ) GLT-1 staining of area surrounding Aβ plaque shown in d after treatment with ceftriaxone.
Five days of drug treatment partially restored GLT-1 expression in a 20 μm radius around Aβ deposits. ( f ) Merged image of Aβ plaque and GLT-1 staining after ceftriaxone treatment. ( g ) Methoxy_X04-stained amyloid deposit. ( h ) GFAP staining of astrocytes around the Aβ plaque shown in g . Downregulation of GLT-1 is thus not caused by the absence of astrocytes in the analysed area. ( i ) Merged image of Aβ plaque and GFAP staining. ( j ) Quantification of relative change in GLT-1 fluorescence after ceftriaxone treatment. ( k ) GFAP-positive pixels were quantified in an annulus extending 20 μm from the edge of the amyloid plaque ( n =10 animals). A similar-sized area that was not plaque associated was used for comparison of GFAP-positive pixels. As the graph shows, ∼ 50% of the annulus surrounding the plaque is GFAP-positive, while only ∼ 30% of a plaque-free region is GFAP-positive. The same measurement was performed before and after treatment with ceftriaxone or vehicle. *** P <0.001, * P <0.05.

It is important to note that reports also show that ceftriaxone can alter GLT-1 function without changing overall protein levels 37 , 40 . Thus, even though the upregulation in ROI1 does provide evidence that ceftriaxone has a general effect on GLT-1 in the direct plaque vicinity, immunohistological stainings of GLT-1 are not likely to reflect all the different stages of GLT-1 pathology, from transporter dysfunction to downregulation. A functional read-out was therefore needed to test possible alterations of GLT-1, unrelated to upregulation of overall protein levels, after ceftriaxone treatment using iGluSnFR.

Upregulation of GLT-1 reduces changes in glutamate dynamics

The histological data in Fig. 5 showed that ceftriaxone treatment reversed the selective decrease in GLT-1 expression within the 20 μm radius around amyloid deposits.
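The annulus quantification used for the GFAP and GLT-1 comparisons (positive pixels within 20 μm of the plaque edge versus a plaque-free region) can be sketched as below. The circular-plaque geometry and the default intensity threshold are our own illustrative assumptions, not the authors' stated criteria.

```python
import numpy as np

def positive_fraction_in_annulus(image, plaque_center, plaque_radius_um,
                                 band_um=20.0, um_per_px=1.0, threshold=None):
    """Fraction of suprathreshold ('positive') pixels in an annulus that
    extends `band_um` beyond the edge of a circular plaque.

    `threshold` defaults to the image mean, a hypothetical stand-in for
    whatever intensity criterion defines a GFAP-positive pixel.
    """
    if threshold is None:
        threshold = image.mean()
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - plaque_center[0], xx - plaque_center[1]) * um_per_px
    annulus = (r >= plaque_radius_um) & (r < plaque_radius_um + band_um)
    return float((image[annulus] > threshold).mean())
```

Running the same function on a similar-sized plaque-free region gives the background fraction that the plaque-associated value is compared against.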
Thus, we investigated whether this apparent increase in GLT-1 around plaques by ceftriaxone treatment could reverse the high r.m.s. values in ROI1 that indicate greater spontaneous glutamate transients. Furthermore, we tested whether ceftriaxone treatment led to changes in GLT-1 function, possibly unrelated to overall protein levels, which would rescue the slower time constant of evoked glutamate decay rates that represent glutamate uptake ( Fig. 6a ).

Figure 6: Ceftriaxone partially restores glutamate dynamic deficits around Aβ deposits. ( a ) Intracortical injection (i.c.) of AAV.hSynapsin.iGluSnFR in the primary visual cortex area (APPPS1 mice n =7, WT mice n =7; 3–7 brain areas per mouse) followed by a cranial window surgery and visual stimulation (single flash) with a blue LED light. Visual stimulation and awake imaging was repeated in the same animals and same ROIs after a 5-day application of ceftriaxone (200 mg kg −1 ). ( b ) Glutamate dynamics were imaged using tplsm. The average r.m.s. of the magnitude of glutamate dynamics over a 20 s period (Δ F / F ) is shown in false colours and represents the entire imaging time. The r.m.s. map is shown before ( b ) and after ceftriaxone application ( d ). ( c , e ) Methoxy_X04 was injected intraperitoneally (i.p.) 24 h before imaging to visualize the same amyloid plaque before ( c ) and after ceftriaxone application at an excitation wavelength of 800 nm ( e ). Scale bar, 50 μm. ( f ) Traces representing glutamate dynamics change after ceftriaxone application. The purple line indicates the onset of a single LED flash used as a stimulus. The high fluctuation of glutamate measured in the direct plaque vicinity (ROI1) was greatly reduced after ceftriaxone treatment. Furthermore, the quality of the stimulus-locked response in ROI3 is altered. ( g ) After ceftriaxone treatment the regional difference in the r.m.s. maps between all ROIs imaged is significantly lowered.
The treatment with ceftriaxone in WT animals did not result in a significant difference when comparing the average r.m.s. before and after treatment. ( h ) After ceftriaxone treatment the difference in the area under the peak between ROI3 and ROI4, which was previously detected, is reversed, suggesting that the characteristics of the response change back to WT levels. The treatment with ceftriaxone in WT animals did not result in a significant difference when comparing the normalized area under the response. ( i ) A time line of the overall normalized glutamate dynamics on stimulation in per cent change is shown to illustrate the significant reversal of glutamate decay before and after ceftriaxone treatment in ROI3. ( j ) The glutamate decay rate in ROI3 could be restored to a level that is not significantly different from the one measured in ROI4 or WT animals after ceftriaxone treatment. ( k – m ) Treatment with vehicle for 5 days in APPPS1 and WT animals did not result in any significant differences for r.m.s. of Δ F / F , area under the response or rate of glutamate decay. Error bars indicate s.e.m. *** P <0.001, ** P <0.01, * P <0.05. Full size image Our two-photon imaging set-up allows us to reliably reposition the brain to allow repeated imaging over days and weeks from the same precise coordinates as previously published 14 , 41 . Thus, we were able to revisit previously analysed locations in the visual cortex in awake animals (APPPS1 mice n =7, WT mice=7; 3–7 brain areas per mouse) after ceftriaxone treatment. The increase in GLT-1 and hence increased scavenging rate of glutamate has the potential to unmask glutamate transients that were hidden in elevated chronic background fluctuations, such as observed in ROI1. This higher rate of scavenging does not, however, lead to a loss of iGluSnFR responses, thus giving us the opportunity to detect and compare glutamate traces before and after ceftriaxone treatment. The r.m.s. maps obtained before ( Fig.
6b ) and after ( Fig. 6d ) treatment show a clear reduction of the average fluctuations measured over the imaging period. Therefore, treatment with ceftriaxone significantly lowered the chronically upregulated glutamate levels that were detected in the close proximity of the amyloid deposit. An example is illustrated confirming the location of the same amyloid plaque shown before ( Fig. 6c ) and after ( Fig. 6e ) treatment. iGluSnFR signals for ROIs 1–4 illustrated before and after ceftriaxone treatment ( Fig. 6f ) show marked changes in their time courses. The r.m.s. of Δ F / F 0 was significantly lowered and reached WT levels in all analysed ROIs when compared before and after treatment. WT animals did not show a change in r.m.s. ( Fig. 6g ). Moreover, the normalized areas under the response curve in ROI3 were significantly reduced after treatment and were then no longer different from the areas measured in ROI4 or WT animals ( Fig. 6h ). WT animals did not show a change on treatment when comparing their areas under the response. Most interestingly, when comparing the iGluSnFR signal decay constant in ROI3 before ( τ =0.64 s, s.e.m.=0.07) and after ( τ =0.10 s, s.e.m.=0.02) treatment, a significant reduction was measured ( Fig. 6i ). This suggests that the increased expression of GLT-1 rescued the pathological changes in glutamate dynamics within this region ( Fig. 6g,j ). The decay constant in ROI3 after treatment did not show any significant differences to ROI4 ( τ =0.09 s, s.e.m.=0.05) or to WT ( τ =0.3 s, s.e.m.=0.05) ( Fig. 6j ). WT animals that had been treated with ceftriaxone did not show a significant change in their decay constants ( Fig. 6j ). To control for possible effects of the ceftriaxone vehicle, a separate group of APPPS1 and WT animals ( n =5 per group) was treated with the vehicle for 5 days. None of the measured parameters resulted in a significant change due to the vehicle treatment ( Fig. 6k–m ).
Discussion Our results provide a clear insight into a synaptic mechanism contributing to disturbances in glutamate signalling in a model of AD. Tplsm imaging of extracellular glutamate dynamics detected using iGluSnFR revealed impaired glutamate uptake from somatosensory or visual evoked responses in regions adjacent to visualized Aβ plaques. These synaptically evoked glutamate transients showed reduced glutamate clearance rates close to amyloid deposits, and chronic states of prolonged and elevated glutamate levels were detected in the direct vicinity of amyloid plaques. The regions adjacent to plaques also showed reduced expression of the major glutamate transporter, GLT-1. Treatment of APPPS1 mice with ceftriaxone restored GLT-1 expression in the plaque microenvironment almost to control levels, and the plaque-associated disturbances in glutamate clearance leading to chronically elevated glutamate concentration were significantly reduced. These results indicate that Aβ plaques are correlated with a regional reduction and dysfunction of GLT-1 that causes impaired glutamate clearance rates. Functional changes to glutamate dynamics, and thus possibly GLT-1 activity, were detected and rescued at a distance of 40–60 μm (ROI3) from the Aβ plaque edge by ceftriaxone treatment. This indicates that mechanisms unrelated to overall protein levels, such as the dysfunction or mislocalization of GLT-1, may also contribute to the observed pathological alterations. The detailed mechanisms thus remain to be investigated. Overall, alterations to glutamate signalling could be an important contributor to synaptic disruption and cognitive impairment in early AD. Glutamate is the major excitatory neurotransmitter in the brain, and the rapid kinetics of release and clearance from the extracellular space are critical for precisely timed synaptic communication. The principal pathway for glutamate clearance is uptake via the transporter GLT-1, which is expressed in astrocytes.
Uptake by astrocytes is essential for glutamate homeostasis that, when disrupted, leads to high levels of glutamate in the extracellular space causing neuronal excitotoxicity and cell dysfunction 11 . The reduction of EAAT2 (the human homologue of GLT-1) activity in the early stages of AD has been reported to be correlated with the cognitive decline seen in AD patients 29 , and studies of heterozygous GLT-1 knockdown mouse models of AD showed exacerbated cognitive decline 30 , 31 . Thus, we re-visited this potential therapeutic candidate with improved scientific methods providing real-time measurements of extracellular glutamate dynamics. APPPS1 mice develop amyloid plaques by 2 months of age in the neocortex and Aβ42 levels are at least five times higher than those of Aβ40. Though plaques are associated with neuronal dystrophy and robust astro- and microgliosis 42 , an overall or plaque-associated loss of neurons could not be shown in APPPS1 mice 43 . However, our findings and those of others suggest that we are looking at an early stage of dysfunction in various cell types in the microenvironment of amyloid deposits in this mouse model rather than a model for neuronal loss. This makes the APPPS1 model a valuable tool to mimic early stages of AD in which detrimental downstream events can still be rescued. We hypothesize that the soluble material, known to be mainly Aβ42 in the case of APPPS1 mice, is present in relatively high concentrations in the microenvironment of amyloid deposits. Our staining results with hFTAA support this hypothesis since prefibrillar forms of Aβ were detected in the microenvironment of Aβ deposits. Thus, it is likely that these known toxic oligomeric species of Aβ contribute to the reported alterations in in vivo glutamate dynamics as has previously been shown in acute brain slice experiments 44 .
To examine alterations in glutamate dynamics in APPPS1 transgenic mice we assessed region-specific alterations in relation to plaque deposition in vivo using anaesthetized and awake animals in wide field and tplsm. Our data show that glutamate dynamics are changed in regions surrounding plaques, and the degree of change increases the closer the tissue is to the edge of the plaque. The time constant of glutamate clearance rates is slower when approaching amyloid deposits, and chronic states of high, fluctuating glutamate concentrations are reached in the direct vicinity of amyloid plaques. Hence, we investigated whether there was a reduction in GLT-1 expression that could lead to a reduction of glutamate reuptake in this area. Histological staining of GLT-1 showed a significant reduction in the immediate vicinity of amyloid deposits, adding evidence to the imaging data that the chronically higher levels of glutamate within ROI1 are due to a reduction in glutamate uptake. Other mechanisms unrelated to overall protein levels of GLT-1, such as transporter dysfunction, were investigated by using iGluSnFR as a functional read-out. The beta-lactam antibiotic ceftriaxone has been widely used to upregulate GLT-1 transporters up to fivefold and also has been shown to alter GLT-1 function and activity independent of overall protein levels 34 , 37 , 40 , 45 , 46 . We thus aimed to reverse higher extracellular glutamate levels and slower glutamate decay rates by increasing glutamate reuptake via GLT-1 transporters. The treatment of APPPS1 animals with ceftriaxone resulted in an increase in GLT-1 in histological staining in ROI1 and a partial rescue of the observed glutamate alterations measured by tplsm of iGluSnFR. The treatment reduced the prolonged decay rates in ROI3 significantly to levels comparable to those in ROI4 and WT animals. Hence we conclude that glutamate dynamics were restored within this region by the treatment.
Even though we could not detect a stimulus-locked response in ROI1, we were able to reduce the high average glutamate fluctuations in the direct plaque vicinity to WT levels. The continued lack of a stimulus-locked response in ROI1 might be caused by neuronal dysfunction, which is already prevalent within this area 7 . It has been shown that a large number of neurons (up to 50%) in the primary visual cortex can display functional impairment without obvious behavioural deficits in transgenic mice. This hints towards possible compensatory mechanisms that are able to sustain physiological activities even in circuits that contain a large number of dysfunctional neurons 7 . It is thus important to note that when using wide-field imaging to record glutamate dynamics on a more global level, no significant differences between APPPS1 and WT animals were found ( Fig. 1 ). This comparison indicates that the changes are very region specific and subtle and will be lost if not analysed with the necessary spatial resolution. Interestingly, Zumkehr et al . 32 conducted behavioural tests in 3xTg-AD mice after 2 months of ceftriaxone treatment and found a significant cognitive improvement in the Morris water maze and novel object recognition tests. Overall, these findings point towards a contribution of high glutamate concentrations to the reported excitotoxic microenvironment surrounding amyloid deposits. High levels of extracellular glutamate could mediate the reported upregulation of Ca 2+ signalling and overall hyperactivity in the microenvironment of Aβ deposits and could contribute to the progressive synaptic disruption and cell death. Thus, early changes in glutamate dynamics could be used to detect the early development of detrimental excitotoxic downstream effects. However, studies in patients would require spatial and temporal resolution sufficient to determine changes in the environment around plaques.
In summary, we have shown that changes in glutamate dynamics in the early stages of AD are part of a dysfunctional microenvironment that is directly linked to plaque deposition. Even though we cannot conclude that higher levels of glutamate precede hyperactive states in different cell types, elevated glutamate certainly contributes to the overall pathomechanism and excitotoxic effects described in the microenvironment of amyloid deposits. Changes in glutamate dynamics could thus serve as a novel biomarker of cellular dysfunction, supporting early intervention with the potential to stop or delay detrimental downstream effects caused by excitotoxicity. Methods Mice Hemizygous APPPS1 mice 42 that express human APPKM670/671NL and PS1L166P under the control of the Thy-1 promoter were injected with 1 μl AAV1.hSyn.iGluSnFR.WPRE.SV40 in either somatosensory cortex (hindlimb region) or primary visual cortex. The line was generated on a C57BL/6 background. For this study, APPPS1 (±) and age-matched WT animals (C57BL/6J, 5–6 months), male and female mice, were used (APPPS1 n =7–9, WT n =7–9, 3–7 locations within the respective brain areas were acquired per mouse). Mice were group housed under pathogen-free conditions. After surgery, mice were singly housed. All procedures were conducted in accordance with the guidelines from the Canadian Council for Animal Care, and were conducted with approval from the University of British Columbia Animal Care Committee. Surgery An intracortical injection of 1 μl AAV1.hSynapsin.iGluSnFR (virus titre 2.56e13 GC per ml) was performed under general anaesthesia (1–1.5% isoflurane) using a stereotactic frame (Stoelting, Model 51731) and a microinjection system (Drummond Nanoject). Two different brain areas were targeted.
Mice ( n =9 per group) were injected in the hindlimb region of the somatosensory cortex (bregma −0.6, −1.5 lateral of midline, depth 400 μm) and a second cohort ( n =7 per group) was injected in primary visual cortex (bregma −2.8, lateral from midline −2.3, depth 400 μm). For simultaneous expression of iGluSnFR and jRGECO a mixture of 0.5 μl AAV1.hSynapsin.iGluSnFR and 0.5 μl of AAV1.Syn.Flex.NES-jRGECO1a.WPRE.SV40 (ref. 47 ) was injected into the hindlimb region of the somatosensory cortex (bregma −0.6, −1.5 lateral of midline, depth 400 μm, APPPS1 mice n =3, 3–4 imaged brain areas per mouse). After 10 days of recovery, a round cranial window (4 mm diameter) was installed under general anaesthesia (fentanyl, 0.05 mg kg −1 ; midazolam, 5 mg kg −1 ; and medetomidine, 0.50 mg kg −1 ) as described previously 14 , 41 . Imaging began 1 week post surgery. Before experiments in the awake state, animals were acclimated to head restraint in the imaging system for a minimum of 1 week after recovering from surgery. As previously reported, we ensured that neither microglia nor astrocytes were activated due to inflammation at this timepoint after the surgery 14 , 41 , 48 . Ceftriaxone and vehicle treatment An intraperitoneal injection of 200 mg kg −1 of ceftriaxone was given for 5 consecutive days to APPPS1 and WT animals ( n =7–9 per group) 32 , 34 , 35 . Ceftriaxone was dissolved in saline containing 1% dimethylsulphoxide. Control groups of WT and APPPS1 animals ( n =5 per group) were given an intraperitoneal injection of vehicle for 5 consecutive days. Imaging In vivo imaging was performed on awake and anaesthetized mice (1.0% isoflurane) using a custom-built, fully motorized, two-photon microscope 49 equipped with a Coherent Chameleon Ultra II laser and a Zeiss W Plan-Apochromat × 40/numerical aperture 1.0 objective. 
The mice were secured under the microscope by fitting the titanium ring in a custom-built head fixation apparatus, which was specifically designed for high repeatability allowing fields of view to be relocated using saved coordinates 41 . The scope is motorized and controlled by a Sutter MP285 via ScanImage (version 3.8) 49 . Methoxy_X04 and iGluSnFR were imaged separately using 800 and 920 nm excitation wavelength and were detected via non-descanned detectors and ET525/50m-2P and ET605/70m-2P emission filters (Chroma Technology). iGluSnFR images were collected using ScanImage at 128 × 128 pixels without averaging at a depth of 100–140 μm below the cortical surface. Linescan (16 × 128 pixels) tplsm was performed at 57 Hz. A total of 3–7 fields were imaged per mouse. Laser power was kept constant for each experiment and did not exceed 45 mW. All shown iGluSnFR images were taken at a relatively high scan speed of 5.92 Hz with only a 128 × 128 resolution. We thus deliberately decreased the resolution of the image to gain scanning speed, which was necessary to track changes in relative glutamate fluorescence on sensory stimulation. Our tplsm was always performed at the same depth and thus only detected signals generated in layer 2/3. To image the area of maximum evoked responses using tplsm all animals were mapped with wide-field imaging before tplsm data acquisition. The area of the highest response was marked by using surface vasculature, which was subsequently easily identifiable using a low-magnification objective on the tplsm. This way we ensured that the responses that were detected with tplsm came from the primary response area seen in wide-field imaging. Wide-field imaging After a period of recovery post-cranial window surgery animals were anaesthetized with isoflurane (1%). Images (12-bit with 6.67 ms temporal resolution) were captured on a charge-coupled device camera (1M60 Pantera, Dalsa).
A hindlimb stimulus was given mechanically by the use of a piezo device (single tap, 1 ms). For each stimulus five trials were averaged 50 . The wide-field method combines fluorescence originating from all layers of the cortex due to the large depth of field of the macro lens 50 . In contrast to two-photon imaging of layer 2/3, charge-coupled device-based wide-field imaging obtained signals within a large depth of focus spanning more cortical layers and a larger volume of tissue. Sensory stimulation Somatosensory cortex . Animals were head-fixed and anaesthetized with isoflurane ( ∼ 1%) during imaging. A hindlimb stimulus was given mechanically by the use of a piezo device (1 ms, single tap). Visual cortex . Mice were head-fixed awake and were allowed to run on a wheel where they received a visual stimulus in the form of a blue LED light flash (100 ms pulse) to the left eye. Before two-photon imaging the area of the largest visual response was mapped using wide-field imaging described above. Analysis Imaged iGluSnFR fluorescence intensity was used to calculate responses to stimulation as the fractional change in fluorescence intensity and reported as % Δ F / F 0 . For both hindlimb and visual stimulation 10 trials were collected and averaged. To quantify the magnitude of signal fluctuation at each pixel over time we calculated the r.m.s. of the iGluSnFR Δ F / F 0 using ImageJ. The r.m.s. maps are displayed as false colour heat maps with warm colours representing relatively higher fluctuations in the iGluSnFR signal at that pixel and colder colours representing smaller iGluSnFR fluctuations. Four regions of interest (20 × 35 μm) were analysed in each field of view. The ROIs were positioned systematically using the following workflow: 1. Find the largest response within the periphery of a plaque 80 μm away from the edge of the plaque. Place a 20 × 35 μm ROI spanning from 80–100 μm distance to the plaque edge. This is the position of ROI4.
2. Place putative ROIs 1–3 along the line connecting the centre of ROI4 and the edge of the plaque. 3. If any of ROIs 1–3 showed no response >1 s.d. above the baseline these ROIs were moved azimuthally around the plaque at the appropriate distance. 4. If a response is located, update the ROI position. 5. Continue to update the ROI location if a larger response is found. If no response is located in 360°, leave the ROI at the original location. Analysis in WT animals was performed using one ROI of the same dimensions (20 × 35 μm). Custom-written Matlab (Mathworks) programmes were used to quantify properties of the time courses at each ROI. The maximum response was located in the seven frames recorded immediately post stimulus. The noise floor of the custom-built two-photon set-up was measured to quantify the contribution of instrumentation noise to the r.m.s. maps. These noise sources account for only 0.01% Δ F / F 0 r.m.s. To measure the baseline fluorescence, the first 5–20 frames of the trials were averaged in ROIs 1 and 4. The reported values are the mean of all animals. Decay constants were fit using the MATLAB fit function after averaging across plaques (to reduce noise and increase the reliability of the fit) and normalizing to the response in ROI4. Values reported for tau are the best fit and its standard error using the nonlinear least-squares method. These are the quantities represented by bar graphs and their error bars in the figures. Statistical analysis A one-way analysis of variance with post hoc Tukey test was used for statistical analysis of maximum response, r.m.s. of Δ F / F 0 and area under the curve. Animals that were imaged before and after ceftriaxone treatment were considered paired groups in a two-way analysis of variance with post hoc Tukey test. All statistical calculations were done in GraphPad Prism software. All P values ≤0.05 were considered statistically significant. Data are expressed as the mean±s.e.m.
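The analysis pipeline described above (conversion of raw fluorescence to ΔF/F0, per-pixel r.m.s. maps of signal fluctuation, and a single-exponential fit of the post-peak decay to estimate τ) can be sketched in Python. This is an illustrative reconstruction, not the authors' Matlab/ImageJ code; array shapes, function names and the synthetic trace below are assumptions.

```python
# Sketch of the core steps: (1) dF/F0 normalization, (2) per-pixel r.m.s.
# maps, (3) single-exponential fit of the evoked decay to estimate tau.
# Names and the synthetic example trace are illustrative, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def dff(stack, baseline_frames=10):
    """Fractional fluorescence change dF/F0 for a (time, y, x) stack."""
    f0 = stack[:baseline_frames].mean(axis=0)
    return (stack - f0) / f0

def rms_map(dff_stack):
    """Per-pixel r.m.s. of dF/F0 over time (values shown as heat maps)."""
    return np.sqrt((dff_stack ** 2).mean(axis=0))

def fit_decay(t, trace):
    """Fit A*exp(-t/tau)+C to the post-peak portion of a transient."""
    model = lambda tt, a, tau, c: a * np.exp(-tt / tau) + c
    peak = np.argmax(trace)
    popt, _ = curve_fit(model, t[peak:] - t[peak], trace[peak:],
                        p0=(trace[peak], 0.3, 0.0), maxfev=5000)
    return popt[1]  # tau, in the units of t

# Example: synthetic transient sampled at 57 Hz (the linescan rate above),
# decaying with a true tau of 0.3 s after stimulus onset at t = 0.2 s.
t = np.arange(0, 2, 1 / 57)
trace = 0.5 * np.exp(-np.maximum(t - 0.2, 0) / 0.3) * (t >= 0.2)
tau = fit_decay(t, trace)
```

With clean data the fit recovers the true time constant (here ≈0.3 s); on real averaged traces the standard error of the fit is reported alongside τ, as in the Methods.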
Histology Mice were perfused intracardially with 0.1 M phosphate-buffered saline (PBS) and 4% paraformaldehyde. Brains were post-fixed in paraformaldehyde and transferred to 30% sucrose in PBS, then cryosectioned (40 μm) as free-floating coronal slices. Sections were incubated with blocking solution (10% normal goat serum and 0.4% Triton X in PBS), followed by primary antibody against EAAT2 (1 μg ml −1 , Santa Cruz, Dilution 1:200) or GFAP (2 μg ml −1 , Life Technologies, Dilution 1:250) and secondary antibody (Alexa-546-conjugated anti-rabbit or anti-rat IgG, 4 mg ml −1 , Molecular Probes, Dilution 1:500). Plaques were stained with methoxy-X04. Sections were imaged using an Olympus FV1000 confocal microscope and quantification was carried out using custom Matlab scripts. Mature and prefibrillar amyloid was stained by hFTAA (3 μM, hepta-formylthiophene acetic acid). Data availability The data that support the findings of this study are available from the corresponding author on request. Additional information How to cite this article: Hefendehl, J. K. et al . Mapping synaptic glutamate transporter dysfunction in vivo to regions surrounding Aβ plaques by iGluSnFR two-photon imaging. Nat. Commun. 7, 13441 doi: 10.1038/ncomms13441 (2016).
New research from the Djavad Mowafaghian Centre for Brain Health at UBC has found a way to partially restore brain cell communication around areas damaged by plaques associated with Alzheimer's disease. The findings, published this week in Nature Communications, demonstrate a possible target and a potential drug treatment to reduce damage to the brain that occurs in the early stages of Alzheimer's disease. Using Ceftriaxone, an FDA-approved antibiotic used to treat bacterial infections, researchers were able to reduce synaptic disruption and clear the lines of neuronal communication in mice. Amyloid plaques, deposits of β-amyloid, develop in brain regions of patients with Alzheimer's disease. These plaques are linked to the damage found in Alzheimer's disease because they prevent cell communication and are toxic to nerve cells. The researchers found that the brain areas around these plaques show high levels of glutamate, a signaling molecule essential to communication between brain cells, accompanied by hyperactivity in glia, the brain's support cells. It's in this glutamate-rich environment that communication between neurons is changed or disrupted, causing neurons to die in the later stages of the disease. "By imaging the glial cells and glutamate itself around the plaques, we were able to see that the cells were not able to 'remove' the glutamate accumulating in these brain areas. By using Ceftriaxone, we were able to up-regulate glutamate transport," explains Dr. MacVicar, principal investigator and professor of psychiatry. "By restoring glutamate levels, we were able to mostly restore neuronal activity." The team's findings have implications for treatment of early symptoms of Alzheimer's disease. "This dysfunction in cell communication occurs at a very early stage in the disease, before memory impairment is detectable," says Dr. Jasmin Hefendehl, a former Postdoctoral Fellow in Dr. MacVicar's lab and the lead author on the paper.
"This makes our discovery particularly interesting, as it opens a window for an early intervention strategy to possibly prevent or delay neuron and memory loss." Ceftriaxone is an antibiotic that is commonly administered before some types of surgery to prevent infections. Although a recent clinical trial failed to see improvements for treating amyotrophic lateral sclerosis (ALS), the researchers are hopeful about its potential for early intervention in treating Alzheimer's disease.
10.1038/ncomms13441
Other
Aussie businesses not ready to tackle modern slavery
Katherine Leanne Christ et al, Accounting for modern slavery: an analysis of Australian listed company disclosures, Accounting, Auditing & Accountability Journal (2019). DOI: 10.1108/AAAJ-11-2017-3242
http://dx.doi.org/10.1108/AAAJ-11-2017-3242
https://phys.org/news/2019-07-aussie-businesses-ready-tackle-modern.html
Abstract Purpose Given the impending introduction of legislation requiring large Australian listed companies to make supply chain disclosures about modern slavery, the paper aims to reveal current voluntary practice. The purpose of this paper is to provide a benchmark for assessing the current engagement of large companies with modern slavery in Australia. Design/methodology/approach Institutional theory provides the foundation for assessing current voluntary practice in relation to modern slavery disclosures by large Australian listed companies. Content analysis is used to identify quantity and quality of modern slavery disclosures of the top 100 companies listed on the Australian Stock Exchange. The contents of annual and standalone reports available on websites, as well as other online disclosures, are examined using terms associated with modern slavery identified from the literature. Findings Evidence gathered about modern slavery disclosures by ASX 100 companies shows information in annual and standalone reports reveals far less than other disclosures on company websites. Overall, the volume and quality of disclosures are low and, where made, narrative. A wide range of themes on modern slavery are disclosed with bribery and corruption and human rights issues dominant. Although currently in line with institutional theory, as there appear to be mimetic processes encouraging disclosure, results support the idea that legislation is needed.
Research limitations/implications The paper provides a baseline of understanding about the volume and quality of modern slavery disclosures as a foundation for future research into the practices of Australian companies prior to the signalled introduction of legislation mandating reporting. It also identifies potential lines of research. The sample only examines large Australian listed companies, which restricts generalisation from the results. Originality/value This is the first academic research paper to examine the quantity and quality of modern slavery disclosures of large Australian companies. Results add support for the introduction of legislation by government. Keywords Australia Sustainability Labour Modern slavery ASX 100 Listed company disclosure Citation Christ, K.L., Rao, K.K. and Burritt, R.L. (2019), "Accounting for modern slavery: an analysis of Australian listed company disclosures", Accounting, Auditing & Accountability Journal , Vol. 32 No. 3, pp. 836-865. Publisher: Emerald Publishing Limited. Copyright © 2019, Emerald Publishing Limited. 1. Introduction Much has been written in recent years concerning the globalisation of business practices and the benefits provided by access to global markets. Driven by consumer desire for cheap goods, many business organisations have abandoned vertical integration in favour of horizontal business models that incorporate a variety of outsourcing and offshoring activities ( Stringer et al. , 2016 ). The argument is that by moving production in labour-intensive industries to countries that are less well developed and where labour is cheaper, businesses can benefit from reduced costs while meeting customer demand for products and services and maintaining income ( Gold et al. , 2015 ).
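The keyword-based content analysis described in the abstract above (scanning report text for terms associated with modern slavery and tallying the quantity of disclosure) can be sketched in Python. The term list and scoring below are illustrative assumptions only; the paper's coding instrument is richer and also assesses disclosure quality.

```python
# Minimal sketch of keyword-based content analysis of corporate disclosures.
# The SLAVERY_TERMS list is an illustrative sample of terms from the
# modern-slavery literature, not the authors' actual coding instrument.
import re
from collections import Counter

SLAVERY_TERMS = [
    "modern slavery", "forced labour", "human trafficking",
    "child labour", "debt bondage", "human rights",
    "bribery", "corruption",
]

def count_disclosures(text):
    """Count case-insensitive occurrences of each phrase in a report."""
    lower = text.lower()
    return Counter({term: len(re.findall(re.escape(term), lower))
                    for term in SLAVERY_TERMS})

# Hypothetical excerpt from a company report:
report = ("Our supply chain policy prohibits forced labour and child labour. "
          "We assess human rights risks, including human trafficking, "
          "and maintain anti-bribery and corruption procedures.")
counts = count_disclosures(report)
total = sum(counts.values())  # a crude 'quantity of disclosure' score
```

Applied across annual reports, standalone reports and website text for each ASX 100 company, such counts give the disclosure-quantity baseline; assessing quality requires manual coding of the surrounding narrative.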
Although this proposition sounds good in theory, the practice, and the pressure it puts on organisations that keep labour "in house", has a dark side that the western world is only just beginning to acknowledge: it provides fertile ground for a form of exploitation which has come to be known as modern slavery. Notwithstanding the fact that slavery has long been outlawed on a global scale, available evidence suggests there are more slaves in the world today than at any other time in human history ( Bales, 2009 ; Gold et al. , 2015 ). Modern slavery is an umbrella term used to describe a collection of practices which, in a business context, includes traditional slavery, bonded labour, human trafficking and forced labour ( Crane, 2013 ). At its core, modern slavery is "one of the most acute abuses of human rights in contemporary business practice" ( Crane, 2013 , p. 49), a form of economic exploitation which, according to the International Labour Organization (2017) , generates $150bn in illegal profits each year from forced labour in private enterprise alone. Although modern slavery incorporates sex trafficking and activities that relate to the domestic sector, business organisations and business-related supply chains are also prominent actors in this space ( Crane et al. , 2019 ). Indeed, it is now increasingly accepted that slavery-related practices are embedded in many of the goods that are consumed and used on a daily basis ( Bales, 2009 ; New, 2015 ). Until recently, it was relatively easy for business to turn a blind eye to inappropriate labour-related practices, or at least those that occurred at the relative comfort of arm's-length in their overseas supply chains.
However, there is a growing recognition that modern slavery is everywhere "[f]rom the construction of FIFA World Cup stadiums in Qatar to the cotton farms of Uzbekistan, from cattle ranches in Paraguay to fisheries in Thailand and the Philippines to agriculture in Italy, from sweatshops in Brazil and Argentina to berry pickers in Sweden. The production chains of clothes, food and services consumed globally are tainted with forced labour" ( Business and Human Rights Resource Centre, 2017 , p. 1). Few can now claim ignorance in relation to the modern slavery that exists in global business and domestic networks ( Johnson, 2013 ; LeBaron, 2014 ). The predominant response to this type of activity from governments around the world has been to legislate against slavery-related practices. One well-documented example is the UK Modern Slavery Act 2015 ( LeBaron and Rühmkorf, 2017 ). This type of legislation promotes transparency via the corporate reporting process as an appropriate way to address the issue ( New, 2015 ). Notwithstanding the fact some have questioned the efficacy of this approach, for example, see LeBaron (2014) and Crane et al. (2019) , to date it remains the preferred option. Up until now, Australia has not had a formal policy in place to combat the modern slavery that occurs within business networks. That said, on 15 February 2017, the Attorney-General of Australia asked the Joint Standing Committee on Foreign Affairs, Defence and Trade to inquire into the need for and report on establishing a Modern Slavery Act in Australia ( Commonwealth of Australia, 2017a ) [2] . The recommendation to emerge from this inquiry is that legislation is needed. Thus, it seems certain Australian business will have to report on their efforts to address modern slavery in their direct operations and supply chains in the not too distant future ( Commonwealth of Australia, 2017b ). To date, little is known concerning how Australian companies account for modern slavery.
This is unsurprising given the topic itself is relatively new and remains fundamentally neglected by business researchers (Crane, 2013; New, 2015). Nonetheless, a burgeoning cohort of academics is now drawing attention to this issue. The current study adds to this work and responds to calls from the public and NGOs, as mentioned in the Interim and Final Reports of the Parliamentary Inquiry into establishing a Modern Slavery Act in Australia (Commonwealth of Australia, 2017a, b), to further knowledge of modern slavery in practice by asking the following research question:

RQ1. How do Australian listed companies account for modern slavery?

In order to address this question, corporate reports on websites and other online disclosures for the largest companies listed on the Australian Stock Exchange (ASX) were analysed. The remainder of this paper is arranged as follows. Section 2 provides a literature review and theoretical foundation for assessing modern slavery disclosures and highlights the need for empirical evidence to address the shortfall in current knowledge in the Australian context. The research design and content analysis method used to gather data for analysis are then introduced. Analysis of the data is presented in Section 4, which is followed by discussion in Section 5 and then a brief conclusion.

2. Literature review

Despite being illegal in most countries, including Australia, as well as expressly prohibited by Article 4 of the Universal Declaration of Human Rights, in practice slavery-related activities continue even if the nature of the practice has changed (Crane et al., 2019; New, 2015). It has been argued that there are currently more than 35m slaves worldwide and that the practice extends to every corner of the globe, although it is more commonly found in developing countries (Crane, 2013; Gold et al., 2015).
Modern slavery encompasses a vast array of abuses that range from domestic servitude and sex slavery to organ trafficking and child exploitation (Quirk, 2006). Areas of greatest concern from a business perspective include forced labour, debt bondage, child labour and human trafficking, which is often necessary to transport people to where the labour is needed (Crane et al., 2019; Crane, 2013). Corporate responsibility in relation to modern slavery is further highlighted by LeBaron (2014, p. 237), who argues that "at least 80 percent of forced labour [that] occurs in the private economy […] involves business [organisations] in some way". Modern slavery has not simply emerged from nowhere to become a challenge for business. Indeed, the relationship between companies and employee rights has a long history related to the abuse of human rights (Sikka, 2011).

2.1 Human rights

An edifice of international institutional principles and guidelines to protect the human rights of employees, including rights in relation to modern slavery, has been developed (Birkey et al., 2018). These rights relate to core standards that are commonly agreed, schemes of assessment and enforcement, and remedies for the employees (Posner, 2016). The main institutional bodies and standards involved are outlined by O'Brien and Dhanarajan (2016). The United Nations (UN) is a major actor through its various arms. Key initiatives include the Universal Declaration of Human Rights whose provisions, in Article 23, declare the right of individuals to work, to equal pay, to join unions and to just and favourable remuneration with no discrimination, ensuring an existence worthy of human dignity. These rights can be juxtaposed with separate provisions in Article 4 prohibiting slavery, although a direct connection between the two Articles is not made (United Nations, 1948).
In 2015, the UN Sustainable Development Goals were introduced, seeking amongst other things to build universal respect for, and protection and promotion of, human rights (UN, 2015). Sustainable Development Goal 8 is to "Promote inclusive and sustainable economic growth, employment and decent work for all", while targets 8.7 and 8.8 combine the securing of decent work conditions with the elimination of modern slavery. The former specifies the need to "Take immediate and effective measures to eradicate forced labour, end modern slavery and human trafficking and secure the prohibition and elimination of the worst forms of child labour, including recruitment and use of child soldiers, and by 2025 end child labour in all its forms". Target 8.8 seeks to "Protect labour rights and promote safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment" (UN, 2015, p. 20). Detailed guides and indicators for possible business actions to help achieve these targets are provided by the Global Reporting Initiative and UN Global Compact (GRI and UNGC, 2017). Although there is no direct universal legal requirement for business to comply with human rights declarations, including aspects of modern slavery, an ethical and moral guide to good practice was introduced in 2011 (UNHRO, 2011) through a set of General Human Rights Principles for businesses (Ruggie, 2011). Principles 11–24 identify detailed corporate responsibilities to respect human rights in the broad sense of engagement with business partners, other parties directly linked to operations, products or services, as well as employees (UNHRO, 2011), and have a bearing on suppliers. Another important international initiative is the UN International Labour Organization's (ILO's) international workplace human rights standards (Islam and McPhail, 2011).
These prohibit discrimination and inequality in the workplace by ensuring workers' human rights, such as the right to freedom of association, the right to organise and bargain collectively and the right not to suffer discrimination in the workplace (Lauwo and Otusanya, 2014). Finally, in a global context, the OECD Guidelines for Multinational Enterprises for responsible business introduce a section on human rights consistent with the UN Guiding Principles on Business and Human Rights (OECD, 2011). Separate guidelines for different sectors are available, and these are considered by Posner (2016), who argues that business and business schools need to be engaged in understanding these rights, violations of many of which are classed as modern slavery. It should be noted that the lack of implementation and enforcement of the UN Human Rights Office of the High Commissioner and ILO principles and guidelines is heavily criticised by NGOs, trade unions and the media in some settings (Lauwo and Otusanya, 2014). At the same time, a number of institutions have proposed their own approaches for the regulation of business in relation to human rights, including "Human Rights Watch; Amnesty International; The Global Reporting Initiative; The Business Leaders Initiative on Human Rights" (Gallhofer et al., 2011, p. 766). These groups are also key contributors to the development of legislation on modern slavery (Birkey et al., 2018). In summary, the development of human rights has a long institutional history. Although the adoption of human rights principles and practices is voluntary for many organisations, pressure from extra-organisational bodies and non-government organisations, together with the moral standing of corporate leaders, is increasingly mainstreamed in governance processes. Nevertheless, additional considerations associated with the management of modern slavery are much more recent and are examined next.
2.2 Modern slavery

As can be seen, there is a close relationship between disclosures on human rights and the current emphasis on modern slavery through forced labour, bonded labour, child labour and human trafficking, especially in global supply chains (Crane, 2013). Yet, although there is an overlap, human rights issues are broader than modern slavery's concern with threat, ownership, commodification, restricted freedom of movement and economic exploitation of workers, and few studies to date have looked solely at modern slavery per se (Crane, 2013). Also, as noted in Section 2.1, academic interest in human rights has a long history in comparison with the relatively unknown area of modern slavery at which this paper is directed. Crane (2013, p. 51) argues modern slavery can be found in numerous business models and that it is essentially "an attempt to underprice a key resource (labor) through illegitimate means". Those most likely to get caught in slavery-related conditions are from poor communities where work is limited, with workers seeking to improve their livelihoods and/or positively impact the well-being of their families (Pierce, 2011). Sadly for many, this journey culminates in terrible working conditions or with the individual's fate left to unscrupulous labour contractors or intermediaries who provide a protective barrier for international organisations, which can then legitimately claim, should poor conditions be discovered, that they had no idea such activities were associated with the production of their products (Barrientos, 2008). The use of contractors and manning agents is also an increasingly recognised problem, as it erodes worker agency, restricting the ability of workers to negotiate conditions directly with the organisation for which they are effectively providing services (Stringer et al., 2016). Although some labour contractors do act ethically with a genuine care for their workers, there are many cases of what Barrientos (2013, p.
1,062) refers to as "fly-by-night" operators, whose sole motivation is profit and who effectively trade on human misery and despair. There are numerous examples of modern slavery occurring in business to be found in the available literature. For example, Stringer et al. (2016) report on a case of unfree labour on foreign charter vessels fishing in New Zealand waters. Examples of forced labour, debt bondage (via labour contractors and manning agents) and restricted movement were common. Some even constituted transnational trafficking prohibited under New Zealand law, yet criminal charges were not pursued. In England, the Intercontinental Hotel Group has been involved with the employment of slaves, albeit via contractual agreements with external housekeeping services (Bales and Cornell, 2008). Indeed, it is common in the hospitality industry for hotels that outsource their housekeeping to "deny responsibility for the working conditions of those employed under their roofs" (Roberts, 2015, available online). Robinson (2013) highlights additional instances of slavery in the hospitality sector within developed countries. Examples include individuals working as cooks, kitchen hands and even debt-bonded entertainers who were forced to perform at venues across the USA. In West Africa, and the Ivory Coast in particular, the cocoa industry is a further example where modern slavery thrives. Driven by their lack of power in an intense global market, cocoa growers often resort to unpaid family labour in combination with trafficked child labour to meet demands for greater quantities of product at a more affordable price (Crane, 2013; Roberts, 2003). For the farmers in the sector, the use of slave labour is viewed as the only way to keep costs down (Manzo, 2005). Of interest in Manzo (2005) is anecdotal evidence suggesting that when the cocoa price increases, the use of slavery tends to decrease, indicating a link between upstream price pressure and slavery.
In Uzbekistan, the conditions faced by workers in the cotton industry are well documented, with the approach to child labour described as among the worst examples in the world: the state forces children to work, in violation of both international law and conventions to which the country is a signatory (Bhat, 2011). Debt-bonded labour in Bangladesh is common, yet it was only in 2013, when the Rana Plaza garment factory collapsed, shocking the world with the deaths of over 1,000 workers, that the international community chose to express its outrage at these unsafe and unacceptable working conditions. Unsafe work conditions such as those seen at Rana Plaza often represent the start of a slippery slope of exploitation which can quickly deteriorate into forced and unfree labour (Lawthom et al., 2017). In the aftermath, many of the brands whose goods were found there denied knowing of the factory's existence (LeBaron, 2014), leading to the question: was this a case of "willful blindness", or are these large multinational companies truly ignorant as to how the goods they sell are produced? Even accepting the plea of innocence advanced by multinational companies, the need for organisations to be better informed with regard to how their products are made and sourced, and how the workers involved in the process are treated, is undeniable; it is imperative that organisations start to address these issues as unacceptable risks (Johnson, 2013). Much of the abuse that occurs in relation to workers happens in corporate supply chains, with organisations engaging in upstream, low-value activities especially vulnerable to slavery-related practice (Crane, 2013). It then follows that large companies, often located downstream, have a responsibility to work with their suppliers, and with the suppliers of their suppliers, to ensure working conditions are both legal and humane.
Although some have argued this responsibility should be codified in law, jurisdictional issues make the implementation of global legislation extremely difficult, if not impossible. In addition, some multinational companies deal with thousands or even hundreds of thousands of independent suppliers on a daily basis, presenting practical limitations (LeBaron and Lister, 2015). Thus the "question of how far corporate liability and responsibility should extend within supply chains remains contentious", and in many ways the need to address slavery-related issues in this context remains a moral rather than a legal obligation (LeBaron, 2014, p. 238). This is not to say business is hamstrung or helpless in relation to this matter. Crane (2013, p. 60), for example, suggests that it is possible for "supply chain intervention[s] [to] moderate the relationship between context and slavery". Such interventions can take many forms and there are now a number of examples in practice. Some sectors where slavery has been found to be prevalent have elected to take an industry-wide approach to addressing the problem. One such example is the Sustainable Apparel Coalition (SAC), founded in 2009 by Walmart and Patagonia (Sustainable Apparel Coalition, 2017). A key element of the SAC and its approach to social and environmental sustainability is the Higg Index, an assessment tool which allows "brands, retailers, and manufacturers to measure their environmental and social and labor impacts at every stage of the lifecycle and value chain" (Sustainable Apparel Coalition, 2017). The SAC is committed to a Social and Labor Convergence Project which aims to develop "an industry-wide, standardized methodology for social and labor performance assessment in the apparel and footwear supply chains" (Sustainable Apparel Coalition, 2017).
In the UK, the nine largest supermarket chains worked together to sponsor an initiative called Stronger Together which, in 2016, released a toolkit with the explicit purpose of addressing modern slavery in the supply chains of businesses involved in the provision of retail consumer goods (Stronger Together, 2017). In 2011, the chemical industry launched the Together for Sustainability initiative, the purpose being "to develop and implement a global audit program to assess and improve sustainability practices within the supply chains of the chemical industry" (Together for Sustainability, 2017). Among other things, the programme audits for forced labour and child labour in the chemicals supply chain (Together for Sustainability, 2017). Examples of other initiatives include the Pharmaceutical Supply Chain Initiative; the International Cocoa Initiative, which is concerned with the eradication of child labour and forced labour; Fairmined, a sustainability-based accreditation scheme for gold miners; and more generic schemes such as Fair Trade, the OECD Guidelines for Multinational Enterprises, the regional Bali Process Government and Business Forum, the UN Global Compact and, more recently, the UN Sustainable Development Goals. As can be seen from the preceding paragraphs, efforts in this area are primarily concerned with the development of tools and audit programmes and, in some cases, information sharing. Although the stated aims associated with industry-based schemes are admirable, there are still numerous problems associated with their implementation. For example, organisations with long supply chains often have problems associated with determining the traceability of different products, while others have questioned the effectiveness (and independence) of industry-driven audit programs (UN Global Compact and BSR, 2014).
The fact that supply chain labour abuses continue to be uncovered indicates that available initiatives are either not as effective as they could be or have been unsuccessful in recruiting organisations to the cause. Either way, the need for additional requirements, backed by legislation, is now generally acknowledged. This approach is not meant to replace industry- or company-based initiatives, but rather to complement them in a way that encourages a broader adoption of measures to identify and prevent slavery-related practice. In short, modern slavery has its own set of human rights issues associated with the inhumane treatment and control of labour. Issues arising are particularly evident in supply chains, which for many organisations reflect relationships with thousands of suppliers. Where these supply chains extend across different jurisdictions and are, thus, subject to regulatory state controls in different countries, different pressures on the tiers of suppliers emerge, making it hard for companies to be assured that they are not inadvertently supporting modern slavery in practice. To this end, there has been a spate of new legislation in different countries, aimed at reducing the number of instances of modern slavery on a global scale to zero. These developments are considered next.

2.3 Legislation

An early example of legislation purposefully designed to combat modern slavery in business was the California Transparency in Supply Chains Act, introduced in 2010. This Act has a sector-specific focus, with retail sellers and manufacturers with operations in the state required to disclose their efforts to identify and eradicate modern slavery in their direct supply chains (Crane, 2013).
The legislation requires the company to disclose the extent to which it "[verifies] product supply chains to evaluate risks; conduct[s] supplier audits; requires direct suppliers to certify that materials are produced in accordance with antislavery legislation in relevant countries; maintains internal accountability procedures in respect of employees or contractors who fail to meet company standards; and provide[s] training for employees and management" (New, 2015, p. 700). In 2015, the UK introduced similar, but more generic, legislation, the UK Modern Slavery Act 2015, which requires large organisations to outline, via a Modern Slavery Statement, the steps taken to tackle slavery and human trafficking in their business and operational supply chains (LeBaron, 2014). This information must be made publicly available on the company's website and be signed off by at least one director. More recently, steps have been taken to address slavery in business in France, with the introduction of the French Corporate Duty of Vigilance Law 2017, and in The Netherlands and Australia, which are investigating the potential for legislation to address different aspects of modern slavery. Existing examples of slavery-based legislation reveal a strong preference for reporting and transparency as a way of dealing with the problem. Yet reporting requirements are flexible, leaving it up to the individual company to decide what, and how much, they reveal. One exception is France, where the government adopted an approach centred on due diligence. The focus on due diligence is consistent with various human rights initiatives, such as the Guiding Principles on Business and Human Rights, which were endorsed by the UN in June 2011 (O'Brien and Dhanarajan, 2016). Furthermore, France considered the adoption of strong penalties, with fines of up to €10m for non-conformance and further penalties if otherwise preventable damage ensued.
In the event, the Constitutional Court did not agree to the implementation of these civil penalties. In other jurisdictions, penalties have also been considered but are minor or described as "reputational", it being assumed the public will punish organisations that do not do the right thing. As previously mentioned, Australia is among the latest in a list primarily made up of developed countries seeking to introduce legislation requiring companies to report on their efforts to tackle modern slavery in their operations and supply chains. The process began in February 2017, when the Attorney-General of Australia asked the Joint Standing Committee on Foreign Affairs, Defence and Trade to inquire into the need for, and report on, establishing a Modern Slavery Act in Australia (Commonwealth of Australia, 2017a). The Inquiry recognised that although Australia is a developed country with a seemingly high standard of living, it is not immune from modern slavery, both within its borders and beyond. For example, Andrew Forrest, a well-known advocate in the fight against modern slavery and founder of Fortescue Metals Group, conducted a slavery audit of his own company and was shocked to find slave labour within his supply chain. Products sold in Australia, such as cat food produced by Mars Inc. and Nestlé, have also been subject to media exposés revealing slavery-related practice in the South-East Asian fishing industries from which the required ingredients are obtained (Business and Human Rights Resource Centre, 2017; Fischman, 2017). This is but one example of supply chain labour abuses that are now in the public eye. Cases of slavery within Australia itself are few, but they do exist, with exploitation most common in industries such as agriculture, domestic work, hospitality and construction (Cullen and McSherry, 2009; David, 2010). One example concerns ten nurses trafficked from the Philippines on the pretext of being offered nursing work in Australia.
Once in the country, the women were confined to a house in Victoria where they were forced to work as unpaid cleaners (David, 2010). Another example involves foreign agricultural workers trafficked into Australia to undertake seasonal work. These workers were housed in animal shelters, underpaid and managed by gang-masters who threatened them with both physical and sexual violence (David, 2010). In such cases, pay is often also withheld as part of the control exercised over the victims. These examples demonstrate that despite slavery being illegal in an Australian context and outlawed by the Criminal Code (Cth), Australian business is still directly and indirectly involved with modern slavery (Cullen and McSherry, 2009). In recognition that an unregulated business environment was unlikely to bring about real change (Gallhofer et al., 2011; O'Brien and Dhanarajan, 2016), the Australian Parliamentary Inquiry was established. The initial Inquiry received over 200 submissions, leading to a Public Consultation and Industry Impact Statement and discussion paper being released on 16 August 2017 (Australian Government Attorney-General's Department, 2017). Based on this document, the government's preferred response was to develop and legislate a Modern Slavery in Supply Chains Reporting Requirement which would require all organisations operating in Australia with total annual revenues in excess of $100m to "report on their actions to address modern slavery in both their operations and their supply chains" (Australian Government Attorney-General's Department, 2017, p. 15). Among other things, it was hoped the proposed legislation would facilitate "a 'race to the top' by providing reputational incentives for businesses to take action on modern slavery" (Australian Government Attorney-General's Department, 2017, p. 10).
The Final Report to emerge from the Inquiry, Hidden in Plain Sight, confirmed the recommendation that a Modern Slavery Act be introduced, with a corporate supply chain reporting requirement among the central features proposed. This report reduced the mandatory reporting threshold to companies with annual revenues in excess of $50m and also proposed to extend the provision to government procurement (Commonwealth of Australia, 2017b), something also being considered through an independent Bill introduced into the State Parliament of New South Wales (Parliament of New South Wales, 2018). Notwithstanding the fact that the Australian Government supports the introduction of legislation on modern slavery, to date there is no systematic research which examines how Australian companies are currently dealing with the issue. In some ways, this is not surprising, as research on modern slavery in general is underdeveloped (Smith and Betts, 2015). Crane (2013), for example, notes there is a near vacuum of research in the management literature in relation to modern slavery. The lack of research in this area has also been noted by Gold et al. (2015), New (2015), Soundararajan et al. (2018), Phillips and Mieres (2015) and Barrientos (2013), among others. Furthermore, the research that does exist tends to focus "on the victims rather than the organizations perpetrating [and contributing to] slavery" (Crane, 2013, p. 51). The lack of research in this area, and more especially in relation to Australia, is problematic, as there is no baseline from which to evaluate the impact of any subsequent legislation. In addition, modern slavery is a topic accounting academics should engage with, given accounting and auditing practices can be used to create an environment where slavery-related activities prosper (Crane, 2013; Gold et al., 2015). The research presented here addresses this shortfall in baseline knowledge.
In doing so, data from the largest 100 listed Australian companies are considered. In summary, recent moves towards legislative developments to eliminate modern slavery practices in a number of jurisdictions have followed concern for the control of practices in corporate supply chains. An information strategy is being developed whereby, initially, building the transparency afforded to external stakeholders and the awareness of management has received growing attention. Australia is soon to commit to the introduction of modern slavery legislation, and the matter arises as to what this might achieve, given recent experiences and the different means available for persuading companies towards best practice behaviour and action. Institutional theory is discussed next, as a way of understanding current practices and of perceiving potential developments.

2.4 Institutional theory

The way Australian companies account for modern slavery takes place in the milieu of various pressures for disclosure identified in new institutional sociology (institutional theory) as different isomorphisms (coercive, mimetic and normative), or combinations of these (Willmott, 2015). The coercive mechanism of change is legally sanctioned, the normative mechanism morally governed and the mimetic mechanism culturally supported (Scott, 2008). Lack of awareness of these mechanisms, and of how they combine in global projects such as eradicating modern slavery (United Nations, 2018; International Labour Organisation, 2018), might lead to unexpected costs when regulative, cognitive-cultural and/or normative institutions are misunderstood (Orr and Scott, 2008). Wong (2011) sounds an additional word of caution because of the mix of institutional issues being addressed under modern slavery, such as forced, bonded and child labour, suggesting that a focus on greater granularity might lead to more effective regulation.
This said, institutional theory offers a suitable lens for addressing management responses to modern slavery issues, and the paper adopts institutional theory as originally developed by DiMaggio and Powell (1983). In the Australian context, practice is not currently based on coercion from government through legislation (Commonwealth of Australia, 2017b). Nevertheless, for several reasons, the paper adopts new institutional sociology to guide the analysis of modern slavery disclosures. Companies might be pressuring each other to disclose in anticipation of potential legislation. Also, given the growing number of instances of modern slavery being revealed (Crane, 2013), there might be normative pressures from extra-organisational actors (industry associations, non-government organisations, not-for-profits, etc.) to make disclosures about modern slavery in domestic and global supply chains. The paper focusses on identifying and analysing the account provided by companies prior to potential legislation, rather than detailing the different pressures from stakeholders holding the companies to account (Ijiri, 1983). Existing literature adds support to the adoption of institutional theory. Islam and McPhail (2011) outline the ways that institutional theory has been used to explore corporate social disclosure practices. They identify that it has previously been used "to establish a link between stakeholder pressure and individual corporate reporting practices […]; explore isomorphic reporting behaviour across different organisations that experience the same kinds of social and environmental pressures […]; and explain the adoption and reporting of social and environmental standards more generally" (Islam and McPhail, 2011, p. 795). Furthermore, Birkey et al.
(2018) suggest that a "more nuanced analysis of the narratives within the disclosures, and how those might vary with respect to what institutional theorists refer to as coercive, normative, and mimetic pressures could be enlightening". Finally, Crane (2013) proposes a combination of institutional and strategic capabilities theories as influencing the adoption of modern slavery as a management practice. Of particular interest is the aim of institutional deflection, which identifies and shapes "the external institutional conditions through which slavery is able to flourish, even in the face of near universal illegality" (Crane, 2013, p. 50). The focus here is on current disclosures, not the actual dynamics of institutional pressures, except in the sense that regulation is expected and this may change the future status quo in relation to modern slavery disclosures. The current interest is in providing an anchor against which analysis of such future changes might be judged. Hence, the paper takes institutional theory back to basics, using it as a general classificatory scheme from which to discuss how, given the results in a non-coercive institutional environment, disclosure practices might best be advanced in the Australian context. The next section addresses the methods used in undertaking this research.

3. Research design and methodology

Attention is drawn to three aspects of research design and methodology in the following sub-sections: background to content analysis, data collection and data analysis design.

3.1 Content analysis – background

Because of their power and wealth, diverse operations and broad geographical reach, large companies have the potential to impact a large number of vulnerable people who may be cast into modern slavery (Gold et al., 2015). To date, the supply chains of large listed companies have been the main focus of legislation on modern slavery reporting in different jurisdictions.
Hence, understanding disclosure practices in relation to modern slavery in supply chains is of greatest interest when establishing a benchmark of current practices. To develop this understanding, the main research method adopted here is content analysis. Content analysis is applied to discover modern slavery disclosures from the top 100 companies included on the ASX. In Australia, it has been suggested that reporting on modern slavery in supply chains be required of organisations with a minimum of A$100m in annual revenue (Parliament of the Commonwealth of Australia, 2018). Businesses with revenue of less than A$100m can opt to report voluntarily, making the following definition of revenue critical for entities considering whether they must report: Revenue will include the consolidated revenue of the reporting entity and the entities it controls (if any) and will be calculated in accordance with the Australian Accounting Standards. To create a level playing field, the reporting requirement will apply to all Australian entities that meet the revenue threshold as well as all foreign entities with more than $100 million consolidated revenue that are carrying on business in Australia. (Australian Government Department of Home Affairs, 2018, p. 56). The proposed Modern Slavery Act in Australia was established bearing in mind the requirements of the UK Modern Slavery Act 2015. In the UK, large companies are also targeted, with their annual turnover threshold being £36m (A$64m equivalent, August 2018) (UK Modern Slavery Act, 2015, Clause 54). 3.2 Data collection For many years, annual reports were viewed as the main medium through which to analyse a company's social and environmental disclosures to the public (Guthrie and Parker, 1989; Cowen et al., 1987). Nevertheless, attention has recently been directed towards applying content analysis across a broader set of reports, to obtain a more complete picture (Unerman, 2000; Zéghal and Ahmed, 1990). 
Studies exploring the extent and nature of disclosure indicate companies use different media for reporting different types of information ( Guthrie et al. , 2008 ; Frost et al. , 2005 ; de Aguiar and Bebbington, 2014 ; Patten and Crampton, 2003 ). For example, Guthrie et al. (2008) show some categories of social and environmental information (e.g. product responsibility, labour practices) are reported more in annual reports than on corporate websites, whereas others (e.g. environmental issues, social performance and human rights) are predominantly reported on corporate websites. Likewise, Frost et al. (2005) observe a higher volume of labour-related disclosures in the annual report, higher environmental disclosures in standalone reports and a more diverse range of disclosures on corporate websites. Furthermore, when comparing environmental disclosures in annual reports and websites, Patten and Crampton (2003) find a greater number of positive/neutral disclosures on websites and more negative disclosures in annual reports. More recently, de Aguiar and Bebbington (2014) explore differences in the quality of voluntary climate change disclosures and find a higher incidence in standalone reports compared with annual reports. Evidence suggests that companies increasingly use standalone and online reports to disseminate their social and environmental disclosures ( Guthrie et al. , 2008 ; Frost et al. , 2005 ; Hooks and van Staden, 2011 ; Clarkson et al. , 2008 ). Hence, corporate annual reports, standalone reports (sustainability, corporate social responsibility and environment, social and governance) and other disclosures on company websites (modern slavery statements, human rights statements and supplier codes of conduct) were chosen here as the sources from which to collect data about modern slavery in supply chains. When assessing disclosures, there is a likelihood that quantity and quality will differ between the reporting sources. For example, Michelon et al. 
(2015) compare the volume and quality of disclosure between annual reports and standalone reports. Although their analysis evidences a high level of information in standalone reports, the quality of information disclosed in standalone reports was found to be similar to that reported in annual reports (Michelon et al., 2015). With this finding in mind, to provide greater insight into companies' reporting behaviour, the volume and quality of disclosures were explored separately. Data from annual reports, standalone reports and websites were collected during July 2017. For companies with a 30 June fiscal year end, annual reports and standalone reports dated 30 June 2016 were used. In cases where companies had an end of financial year other than 30 June, the most recent annual reports and standalone reports available for each company at the time of data collection were used (Cuganesan et al., 2010; Patten and Crampton, 2003). Because website data change rapidly (McMillan, 2000), to capture changes during the month and avoid potential problems, each of the sample companies' websites was accessed throughout the month and all data relating to modern slavery were saved to be analysed at a later stage (Cuganesan et al., 2010). Furthermore, as the study also separately examines disclosure in annual and standalone reports, online copies of such reports were excluded from the website analysis (Patten and Crampton, 2003) in order to segregate data between the three different reporting media. In addition, links to external press release disclosures or newsletters were excluded from the analysis (Patten and Crampton, 2003) in order to reduce the potential ephemerality of content and to improve the replicability of the content analysis (Campbell and Beck, 2004). 
3.3 Data analysis design Content analysis is one of the most widely used techniques in the social and environmental disclosure literature to assess the extent, nature and quality of disclosure (Guthrie and Abeysekera, 2006; Cormier and Gordon, 2001; Hooks and van Staden, 2011; Hasseldine et al., 2005). In its simplest form, content analysis is a "research technique for making replicable and valid inferences from texts (or other meaningful matter) to the contexts of their use" (Krippendorff, 2004, p. 18). One of the important issues that must be considered in content analysis is the unit of analysis, a unit being "an identifiable component of a communication through which variables are measured" (Gamerschlag et al., 2011, p. 241). When examining the extent of disclosure, previous studies have used various measurement units such as word count (Deegan and Gordon, 1996), sentences (Hackston and Milne, 1996), and pages and fractions of pages (Gray et al., 1995). The most appropriate measure for assessing the extent of disclosure remains debatable (Gray, 2010). Unerman (2000), supporting the use of a proportion of a page, suggests that "characters, word, sentence or paragraph counts ignore differences in typeface size which can be captured by measuring volume as the proportion of a page taken up by each disclosure" (p. 667). According to Milne and Adler (1999), using sentences as a basis for coding and measurement provides more reliable and meaningful results compared with the use of words or proportions of a page. The use of sentences is further justified by both Hackston and Milne (1996) and Raar (2002) on the basis that sentences can be coded more accurately than individual words and that they provide context for the words, even when words are used as the unit of analysis. 
However, one of the major arguments against measuring the extent of disclosure in terms of the number of sentences is that it results in any non-narrative disclosures (such as charts, tables, pictures, photographs, etc.) being ignored (Unerman, 2000; Chan et al., 2014). Unerman (2000) specifically states that "ignoring pictures, graphics and different typeface sizes is also likely to result in an incomplete representation of the quantum of corporate social reporting in many reports" (p. 678). Several studies have recently supported this notion and suggest incorporating narrative and non-narrative disclosures in content analysis studies when measuring the extent of disclosure (Unerman, 2000; Steenkamp and Hooks, 2011; Pesci and Costa, 2014). Nevertheless, measurement of non-narrative disclosures is "highly subjective" (Wilmshurst and Frost, 2000, p. 17) and their inclusion in the measurement process makes it difficult to combine them with sentences in a plausible way (Wilmshurst and Frost, 2000; Chan et al., 2014). It is also difficult to determine the amount of disclosure represented by a picture without converting it into a text equivalent, which might be counterproductive (Guthrie and Abeysekera, 2006). In consequence, the authors decided to exclude non-narrative disclosures, and this is noted as a limitation of the study. On this basis, the extent of disclosure is measured as a continuous variable represented by the number of sentences related to modern slavery sub-themes published by each sample company in their annual report, standalone report or on their corporate website. Another important element of content analysis is the choice of an appropriate categorisation for disclosures (Weber, 1990). To establish the broad categories of modern slavery information, a research instrument was developed (see Table I). 
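As a sketch of how this sentence-based extent measure might work in practice, the snippet below counts sentences mentioning each sub-theme. The keyword lists and the naive sentence splitter are illustrative assumptions only; the authors' actual research instrument (Table I) and decision rules are more detailed and were applied by human coders.

```python
import re

# Hypothetical keyword lists for three sub-themes; the paper's actual
# instrument covers five themes and fourteen sub-themes.
SUB_THEMES = {
    "child labour": ["child labour", "child labor", "minimum age"],
    "human trafficking": ["human trafficking", "trafficked"],
    "supplier screening": ["supplier screening", "screen our suppliers"],
}

def count_sentences_by_sub_theme(text):
    """Measure extent of disclosure as the number of sentences
    mentioning each sub-theme (non-narrative content is ignored,
    as in the study's design)."""
    # Naive sentence splitter on terminal punctuation; adequate for a sketch.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    counts = {theme: 0 for theme in SUB_THEMES}
    for sentence in sentences:
        lowered = sentence.lower()
        for theme, keywords in SUB_THEMES.items():
            if any(k in lowered for k in keywords):
                counts[theme] += 1
    return counts
```

A sentence matching several keyword lists would be counted under each matching sub-theme; the study's decision rules resolve such overlaps by hand.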
Since no prior instrument specific to modern slavery disclosure is available, the research instrument developed here required seeking out definitions from prior studies. One problem encountered was that there is currently no universal definition of modern slavery (Quirk, 2006; Crane, 2013). General definitions discovered in prior research largely consist of several common factors including human rights, forced labour, bonded or child labour, human trafficking, minimum wage, and health and safety in the supply chain (e.g. mental or physical threat, abuse, employee discrimination, illegal practices in the supply chain and so on) (Crane, 2013; Crane et al., 2019; LeBaron and Rühmkorf, 2017; Gold et al., 2015; New, 2015; Quirk, 2006). These aspects were thus included in the research instrument developed (see Table I). In addition, ten randomly selected Australian companies' modern slavery statements (30 reports) from the ASX 100 were analysed to further identify possible themes relating to modern slavery issues. As a result, a research instrument containing five broad categories was derived. Data relating to the volume and content of modern slavery disclosures were recorded in a searchable Microsoft Word file. In addition to volumetric data, the quality of disclosures was assessed. Studies in social and environmental research provide a range of measures of disclosure quality, while recognising that measurement can add to the subjectivity of results (see e.g. Birkey et al., 2018; Cormier and Gordon, 2001; Hasseldine et al., 2005; Hooks and van Staden, 2011; Michelon et al., 2015). Given the newness of modern slavery disclosures, research into the assessment of quality requires a simple, pragmatic approach in the development of an initial understanding. Cormier and Gordon (2001) provide a three-point scale for assessing quality. 
Highest quality is represented by themes described in quantitative terms, medium quality by specific items described in specific narrative, and lowest quality by items discussed in general terms. Guthrie and Parker (1990) provide a similar basis for a quality index based on monetary, non-monetary, declarative and no disclosures. Bozzolan et al. (2006) adopt the simplest approach, but one which still differentiates the quality of disclosure between companies, counting qualitative disclosures as one and quantitative disclosures as two. In all studies, quantitative disclosures are treated as of higher quality than narrative (Hooks and van Staden, 2011) as they facilitate the use of nominal, ordinal, interval and ratio scales (Marston and Shrives, 1991). Combining these perspectives, a three-point scale is developed for this study based on zero, narrative and quantitative disclosures. Assessment is made by scoring each theme identified in the research instrument with a value of 0 for narrative disclosure or 1 for quantitative disclosure. Disclosures are analysed to determine whether they are narrative or quantitative in nature. Finally, scores are calculated by theme (total possible 5 points) and sub-theme (total possible 14 points) for each company. Content analysis has been subject to criticism regarding the reliability of classifications when repeated by different persons, on different occasions, or with different instruments (Krippendorff, 2013; Drost, 2011). As Milne and Adler (1999) suggest, reliability in content analysis is associated with two related issues: reliability of the research instruments and reliability of the data collected using those instruments. In order to demonstrate the reliability of the research instrument, a well-specified set of decision rules was designed to reduce possible discrepancies between coders (Milne and Adler, 1999; Hackston and Milne, 1996) [1]. 
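A minimal sketch of this narrative/quantitative scoring might look as follows, using the presence of figures (percentages, counts, ages) as a rough proxy for quantitative disclosure. The regular expression and the automated detection itself are assumptions for illustration; in the study, this judgement was made manually by the coders, who also excluded incidental numbers such as section labels and dates.

```python
import re

# Digits (optionally followed by "%" or "per cent") as a rough proxy for
# quantitative content; hypothetical, since the paper's coders judged this
# by hand and filtered out dates, section numbers and footnote markers.
QUANT_PATTERN = re.compile(r"\d+(?:\.\d+)?\s*(?:%|per cent)?", re.IGNORECASE)

def score_theme(sentences):
    """Score one theme: 1 if any disclosed sentence is quantitative,
    0 if the disclosure is purely narrative."""
    return int(any(QUANT_PATTERN.search(s) for s in sentences))

def company_quality_score(disclosures_by_theme):
    """Sum theme scores for a company; the maximum equals the number of
    themes scored (5 for main themes, 14 at sub-theme level)."""
    return sum(score_theme(s) for s in disclosures_by_theme.values())
```

Under this scheme a company disclosing on all five themes but only in narrative terms would still score zero, which mirrors how the study treats narrative-only disclosure as low quality.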
To ascertain the reliability of the data collected, testing was conducted on data from the top five companies listed on the stock exchange (five annual reports, five standalone reports and five corporate websites). Two researchers, one co-author and one independent research assistant, performed a pre-test of the coding activity using the research instrument keywords and decision rules established. Classification schemes and a set of keywords allowed the independent coder to determine "what" and "how" the coding was to be carried out. Several minor discrepancies were found when comparing the pre-test results between the two coders. These related to a lack of definition in disclosure categories and the omission of some items in the instrument by either coder. The researcher and independent coder discussed these uncertainties in coding and reached a common agreement on correct classification. Revisions were undertaken to both the decision rules and research instrument accordingly. Although the researcher and the independent coder believed pre-testing the instrument had produced high levels of coding reliability, the final pre-testing was formally assessed using content analytic reliability measures. Although various measures exist, including per cent agreement, Holsti's method, Scott's π and Cohen's κ (Hayes and Krippendorff, 2007; Lombard et al., 2002), inter-coder reliability was tested using Krippendorff's α, generally considered the most reliable measure (Krippendorff, 2013; Lombard et al., 2002). Guthrie and Mathews (1985) suggest a score of 0.80 confirms an acceptable level of agreement, while Wimmer and Dominick (1991) suggest 0.75 or above for Krippendorff's α is acceptable. The internal reliability results for the pre-testing produced coefficient values above 0.90 for all categories, indicating strong agreement between the two coders. 
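For reference, Krippendorff's α for nominal data (the form relevant to category coding like this) is computed from a coincidence matrix as 1 - Do/De, where Do is the observed and De the expected disagreement. The sketch below is a generic implementation for two or more coders, not the authors' actual tooling:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(coders):
    """coders: list of dicts mapping unit id -> nominal category.
    Returns Krippendorff's alpha = 1 - Do/De for nominal data."""
    # Gather the values assigned to each unit; drop units coded only once.
    units = {}
    for coder in coders:
        for unit, value in coder.items():
            units.setdefault(unit, []).append(value)
    units = {u: vals for u, vals in units.items() if len(vals) >= 2}
    # Coincidence matrix: each ordered pair within a unit contributes 1/(m-1).
    o = Counter()
    for vals in units.values():
        m = len(vals)
        for c, k in permutations(vals, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    observed = sum(w for (c, k), w in o.items() if c != k)
    expected = sum(n_c[c] * n_c[k] / (n - 1)
                   for c in n_c for k in n_c if c != k)
    # Degenerate case: only one category used, so no expected disagreement.
    return 1.0 if expected == 0 else 1.0 - observed / expected
```

Applied to the pre-test data, values above 0.80 (Guthrie and Mathews, 1985) or 0.75 (Wimmer and Dominick, 1991) would then indicate acceptable agreement, as discussed above.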
Further, given the inherently subjective nature of assessing disclosure quality, an internal reliability test was also conducted on data gathered from annual and standalone reports and websites for seven randomly selected companies from the ASX 100 (21 reports in total). High-quality, substantive disclosures are seen as specific disclosures involving extensive, detailed, quantitative information (Soobaroyen and Ntim, 2013; De Villiers and van Staden, 2006) which may demonstrate actual changes in the organisation's actions, goals or processes. To check for this quality in the disclosures made, a simple mechanism was used based on whether narrative or quantitative disclosures appeared. Some quantification was excluded from the results because it did not relate to modern slavery themes, e.g. terms such as "Section 1" and "Principle 1", dates such as "21/1/2016" and footnote numbers such as "2". The two coders were in full agreement on all observations when assessing the same sentences and themes from the randomly selected sample of reports, as recommended by Milne and Adler (1999). In summary, the main research method adopted was the application of content analysis to determine the quantity and quality of disclosures. Corporate annual reports, standalone reports and websites were interrogated in July 2017. A set of decision rules was adopted for the coders and a high level of agreement was reached, tested using Krippendorff's α coefficient. In addition, an internal reliability test was applied, with full agreement between coders. Results of the content analysis are presented next. 4. Results Results of the content analysis provide an indication of how the ASX 100 companies accounted for modern slavery in 2016, prior to the Government Inquiry being announced. An overview of the sample is contained in Table II (additional detail can be sought from the corresponding author). The extremes in approaches to disclosure are stark. 
Ten companies (10 per cent) disclosed no modern slavery-related information in standalone reports, annual reports or on their websites, while 90 companies did make some disclosures. Nevertheless, of the ten non-disclosing companies, three (ResMed Inc., Domino's Pizza Enterprises Ltd and Reece Ltd) did produce an online modern slavery statement. The stimulus for ResMed's statement was the California and UK Acts; for Domino's it was the UK Act; Reece did not reference any overseas legislation. At the other extreme, 17 companies (17 per cent) disclosed standalone information, a modern slavery statement based on the UK Act, a supplier code of conduct and a human rights statement. Modern slavery statements were disclosed by 37 companies (see Table III), the vast majority following the UK Modern Slavery Act 2015, which became effective in 2016. About two-thirds of companies neither disclosed a modern slavery statement nor followed an Act, with very similar proportions not having a supplier code of conduct or human rights statement. While 63 companies did not provide a modern slavery statement, some information about modern slavery was disclosed in annual and standalone reports and online. The dominant source of disclosure was corporate websites. Online disclosure is the method required by the UK and California Acts and so this result is unsurprising. Where modern slavery disclosures are made by ASX 100 companies, quantifiable differences in the volume of disclosures are observed (see Table IV). The range of disclosures by sub-theme among companies making some disclosures on modern slavery runs from a minimum of 0 per cent in annual reports for human trafficking and in standalone reports for supply chain screening, to a maximum of 66 per cent online for diversity, with safety, whistle blowing and bribery and corruption a close second. Examples of these are provided in Table IV. 
There is a large difference in the volume of modern slavery disclosures, measured in sentences per sub-theme, across annual reports, standalone reports and websites. While 2 per cent of companies provided disclosures on child labour in their annual reports, 11 per cent did so in standalone reports and 38 per cent on their websites. This tendency was consistent across all sub-themes with the exception of bribery and corruption and abuse, where standalone reports disclose marginally less information than annual reports. Nonetheless, website disclosures were far greater for both categories (60 and 50 per cent, respectively). In relation to annual reports, the range of disclosures was between 0 and 17 sentences. Companies were found to disclose more about their supplier code of conduct (with an average of 0.76 sentences per company) and general modern slavery issues (0.44 sentences per company). With regard to standalone reports, the range of disclosures was 4–38 sentences. Companies were found to publish more sentences about several specific themes, including human rights (1.29 sentences per company), supplier assessment (1.32 sentences per company) and supplier screening (1.10 sentences per company). Finally, in relation to websites, companies were found to provide information on all the modern slavery themes (with a wide range of between 4 and 179 sentences per company), resulting in an average of 17.73 sentences per company for the most prominent theme, which was bribery and corruption (see Table IV). With regard to the use of quantitative disclosures, results based on a maximum disclosure score of 5 (one for each of five main themes) show that for annual and standalone reports the majority of companies make no disclosures, and a bare majority make disclosures on their websites (see Table V). In short, most companies in the sample provided no quantitative information. 
Of those disclosing, most only provide quantitative information with regard to a single theme. This result was consistent across all three media. Only one company provided quantitative information for all five main themes, and this was located within their standalone report. Very few companies (9 out of 100 in annual reports; 17 out of 100 in standalone reports and 44 out of 100 on websites) include quantitative information, thereby confirming the overall low quality of disclosure. A detailed analysis of quantitative disclosure by sub-themes is presented in Table VI while additional examples of narrative and quantitative statements are provided in Table AI . For instance, in annual reports, few companies make specific quantitative disclosures with regard to the sub-themes (between 0 and 4 out of 100). Quantitative disclosures representing higher quality were highest on websites and negligible in annual and standalone reports. Nevertheless, based on the provision of quantitative information, these examples suggest a low quality of disclosure overall. The level of quantitative disclosure across the three media differs by theme. For instance, “Child labour” is quantified and reported on the web by 13 companies with only 3 companies reporting in standalone reports and none in the annual reports. Finally, consistent with mimetic institutional processes, analysis of the level of quantitative disclosures of ASX 100 companies shows the sub-themes disclosed and not disclosed by companies are similar. For example, disclosure related to the quantitative aspects of child labour included the minimum age requirement (e.g. typical is “the minimum age for entry into employment must not be younger than 15 years of age”). Supplier screening and supplier assessment sub-themes include the number of suppliers assessed/screened (e.g. for “supplier assessment – 36”; and “70% of our suppliers assessed”) and the general modern slavery theme included thresholds for disclosure, etc. (e.g. 
a total turnover of over £36m). For some sub-themes, such as "Human Trafficking", "Minimum Wage" and "Abuse", no quantitative information was disclosed at all. 5. Discussion Analysis of the ways in which ASX 100 companies account for modern slavery provides a potential benchmark reflecting voluntary practice prior to promised legislation on modern slavery being enacted in Australia (Commonwealth of Australia, 2017b). In the context of institutional theory, analysis reveals that whereas some companies appear to have no interest in making disclosures in standalone reports, they have decided to publish modern slavery statements. One implication is that the problem of modern slavery in supply chains resonates with these companies, perhaps because they feel legislation will be introduced eventually and there is some reputational advantage in being proactive. For these companies, it can be argued, in line with existing literature that draws on institutional theory, that there is no direct coercive influence as legislation was not yet in place within Australia. That said, it is possible that an indirect mimetic pressure exists to encourage the adoption of practice legislated elsewhere with a high potential for transfer. Nevertheless, as only just over one-third of companies sampled voluntarily provided a modern slavery statement, and these mostly followed the example of UK legislation, the motives behind the decision of the large majority of companies not to disclose such statements are unclear and in need of further exploration and explanation. As an example, it would be beneficial for future research to ask managers their reasons for voluntary disclosure or non-disclosure in order to assess the relative importance of coercive, mimetic and normative pressures. 
It is possible, given the UK Modern Slavery Act is relatively new, that as more companies in the UK decide to report, the mimetic influence across the globe will become more pronounced, leading to higher levels of disclosure overall. It is also possible that industry associations will provide additional normative guidance for global members, possibly to forestall greater levels of government interference in their activities or the implementation of new penalties for non-compliance. In order to assess these effects, longitudinal analysis will be needed to supplement existing knowledge. Understanding the current behaviour of the sample companies could also be enhanced by examining reasons for the relatively high level of online disclosures. This could be because online disclosure is required in other jurisdictions (e.g. California (Birkey et al., 2018) and the UK (LeBaron and Rühmkorf, 2017)) and a mimetic influence exists, or because the digital revolution is now favouring moves towards more rapid and even real-time disclosure. Companies may be adopting electronic media because of the decline in relative cost and the immediacy of delivery, leading to a lessening of the importance of standalone reports for investors and other groups. The mimetic behaviour revealed for Australian companies with UK or USA parents might also be encouraged by the lower incremental cost of producing similar electronic information about modern slavery themes in Australia. Or it could be that managers are moving towards online disclosures because webpages are a more informal type of disclosure and it is easier to modify information at any time. Behaviour encouraged by these mimetic processes needs closer examination than is possible in this brief exploratory research. 
The results obtained from analysing the quality of disclosure, represented by quantitative information, reveal another possible explanation worthy of research into the voluntary behaviour of companies: the desire to reduce command and control regulation in favour of greater self-regulation (Gunningham and Rees, 1997). Where disclosures were evident, they were found to be narrative in nature, thereby not facilitating easy comparison, and of low quality as assessed by the narrative/quantification classification. This suggests a potential decoupling of actual action from perceived action. While the Parliamentary Inquiry highlighted normative pressure being exerted by NGOs in relation to modern slavery within corporate supply chains, it appears Australian business is trying to control the agenda by providing poor quality disclosures that give the appearance of action while providing little evidence of real change. Nonetheless, for policy makers, as online reporting of themes about modern slavery in supply chains already dominates the sample results, this supports the recent recommendation that legislation in Australia should require disclosure through the same medium (Commonwealth of Australia, 2017b), reducing the costs of compliance for already proactive companies. The high level of disclosure of some modern slavery information online might tempt thoughts that legislation is not needed as voluntary disclosure is already becoming the norm. However, given the large ranges in the quantity and quality of information disclosed on each modern slavery theme, and the evidence that 10 per cent of top companies make no disclosures at all, the argument for introducing a standard disclosure requirement is strengthened. Analysis of the information disclosed on modern slavery points to the above strengths and weaknesses, but it also draws attention to the need to establish reasons for current practice and changes in behaviour, which future studies might investigate. 
In this sense, other theoretical foundations behind these empirical results merit further attention, such as responsive regulation ( Islam and McPhail, 2011 ) and legitimacy theory ( Birkey et al. , 2018 ). Overall, it appears that required modern slavery disclosures specified in overseas Acts might be related to current voluntary practice in Australia, indicating the strength of institutions, such as regulative and normative practices, working in tandem, across countries, in different industries and over time and this would be worth closer investigation with the potential of a global standard in mind. Cross-country comparisons would also be useful in extending the current understanding in relation to this topic. In addition, the drivers of modern slavery practices within organisations need to be better understood. For example, in the Australian setting many companies disclosing are in the banking and finance, mining and retailing industries and modern slavery practices in these (and other) industries need further investigation as practice might be contingent upon mimetic or normative pressures from industry associations. Even though the sample chosen for analysis consists only of large Australian companies as, following overseas practice, these are the organisations that have been earmarked by the Parliamentary Inquiry for inclusion in legislation ( Commonwealth of Australia, 2017b ), future research into company size and disclosures would be of interest as smaller firms might adopt mimetic practices once legislation is introduced. 6. Conclusion Legislation requiring companies to report on their modern slavery practices has been introduced in a growing number of countries with Australia being one of the most recent to engage. 
In 2017, a Parliamentary Inquiry recommended modern slavery legislation be introduced, and the Attorney-General's Department lent its support by recommending reporting requirements about modern slavery in supply chains be mandated for larger companies. Although modern slavery and its disclosure are topics of great contemporary interest, the literature indicates a lack of systematic evidence available from Australia and this paper contributes by addressing the research question: RQ1. How do Australian listed companies account for modern slavery? From an academic perspective, the paper is the first to consider the state of modern slavery disclosures within an Australian context and investigate possible explanations for current practice via institutional theory. From a practical standpoint, the paper raises the issue of modern slavery, establishes a benchmark for Australian practice prior to proposed legislation being presented to Parliament and provides a basis for combatting the issues. A content analysis of annual, standalone and online reports of ASX 100 companies reveals just over one-third of companies disclose a modern slavery statement. Where companies do make disclosures, the main focus is on bribery and corruption, human rights and generic slavery-related issues, rather than specific themes. Information, when provided, is of low quality, being largely narrative and descriptive of policy rather than quantitative disclosure setting targets for how specific sub-themes of modern slavery might be addressed and what financial resources will be provided. Although preliminary and speculative at best, given the voluntary nature of disclosure in Australia at the current time, the response of sampled Australian companies to growing pressures to provide an account of their actions in relation to combatting slavery-related practice has a very marginal accord with the concepts of mimetic and normative institutional pressures. 
Some companies appear to take into account how their overseas counterparts are responding to similar legislation, which suggests a possible mimetic or pre-emptive form of coercive influence. However, without a direct form of coercion, isomorphic pressures appear weak, with failure to address modern slavery remaining the default institutionalised position. At worst, the response is muted: companies take full advantage of the current self-regulatory environment and produce low volumes of poor-quality information about modern slavery in their supply chains, which suggests a decoupling of perceived action from actual action. Even for those companies voluntarily disclosing some themes on modern slavery, the low take-up in annual (5 per cent) and standalone (9 per cent) reports indicates appropriate incentives are lacking. Nonetheless, as disclosure through company websites across all sub-themes is over 40 per cent, the mimetic effects of requirements in other jurisdictions appear to be influential. The evidence obtained supports the Australian Government in its current moves towards requiring better transparency about this pernicious practice via increased coercive pressure on companies. Although the analysis provides insight into this socially important yet under-researched area, a caveat needs to be expressed in relation to potential limitations. Most attention in modern slavery regulation is directed towards large companies and the focus here fits this mould. Selection of the sample for this study is arbitrary and limited to the largest 100 of almost 2,200 companies listed on the ASX, so the results are not representative of all companies. Nonetheless, most legislation introduced in different jurisdictions is designed to influence the behaviour of large companies.
Although modern slavery may be indirectly supported by smaller companies, especially in certain industries, obtaining and analysing direct evidence from reports of these companies was beyond the scope of the paper. Several other limitations should be noted. First, the analysis does not consider practices in non-listed companies, especially those with large private wealth, or public sector corporations, which may be responsible for large volumes of procured items with modern slavery practices embedded within them. Second, non-narrative forms of disclosure such as pictures, charts, diagrams and graphs have not been included in the content analysis, partly, as explained in the method section, because of the subjectivity of such processes and the difficulty of integrating them with sentence counts. Third, internal reliability of the instrument is limited: the data were obtained from online sources, and test–retest reliability was not undertaken because website disclosures were downloaded at a point in time and then analysed. Finally, the analysis is specific to Australia, which means it is based on local legal, cultural, political, economic, technological, regional and trade contexts. Variation in each of these spheres of influence over the institutional means to address modern slavery might be expected in other countries and regions. With the size limitation of companies sampled in mind, it would also be helpful for further research to study and understand modern slavery practices in the overseas supply chains of smaller companies. The exploratory nature of the current research has established a benchmark, at a point in time, of the quantity and quality of modern slavery disclosures in the companies examined, and reveals anomalies in the reporting practice of ASX 100 companies. Now that a benchmark has been established, the insights gleaned could be used in future research as a variable in multivariate statistical analysis.
Multiple regression and related techniques such as ANOVA could be useful for this purpose, as would further cross-sectional and longitudinal investigation. More detailed qualitative analysis of slavery-related disclosures would also help build understanding of the narrative underpinning how companies go about acknowledging and managing this issue. Although the authors acknowledge a mixed-method analysis would be advantageous in providing deeper insight into the ways Australian companies account for modern slavery, conventions on article length meant extending the analysis in this way was not possible, and this remains a topic for future research. Further research will also help in understanding the institutional pressures currently driving company behaviour in relation to modern slavery issues and how these change over time. Such research opportunities include: development of case studies and use of interviews designed to establish the reasons managers do and do not adopt voluntary disclosure practices about modern slavery; explanation of the rationale for the dominance of online disclosures; extension of the theoretical understanding of the reasons modern slavery disclosure is increasingly being mandated, including legitimacy theory; whether and why greenwash and tick-the-box responses to mandated disclosure emerge; and the longitudinal development of supplier codes of conduct on modern slavery. To further examine the potential mimetic disclosure effect, research can investigate how this translates into changed behaviour in relation to sense making, decisions, implementation and learning processes within companies addressing modern slavery issues. A focus on how companies manage the issue of modern slavery internally, as an aspect of sustainability management accounting, also requires further investigation. How can a tick-the-box approach to disclosure best be assessed and avoided by management?
Are modern slavery statements a sub-set of corporate social responsibility or sustainability reporting, or do they stand alone? Finally, there is a need for the full range of methods behind academic research to be implemented to gain better explanations of modern slavery practices, including interviews, case studies, further empirical studies and longitudinal analysis. Issues around assurance and the quality of information disclosed need investigation; however, empirical study in this area will be more meaningful once legislation has been enacted. Future research can examine the success of modern slavery in supply chains legislation in Australia against the benchmark provided. A widening of the themes, and a narrowing of the range of disclosure volumes on each theme, would offer indicators of success in terms of standardising the information available to stakeholders. The indicators used will facilitate tracking of changes in the volume and quality of disclosure across the range of sub-themes over time.

Table I. Research instrument for assessing disclosures: main themes and sub-themes. Each sub-theme is weighted 1, so a theme's maximum quality disclosure score equals its number of sub-themes.

1. Human rights in the supply chain (maximum score 5)
- Child labour (Crane, 2013; LeBaron, 2014; LeBaron and Rühmkorf, 2017; Johnson, 2013; Quirk, 2006): employment of minors
- Forced labour (LeBaron and Rühmkorf, 2017; Johnson, 2013; LeBaron, 2014; Feasley, 2016): compulsory labour, held against will
- Human trafficking (LeBaron and Rühmkorf, 2017; Quirk, 2006; Stringer et al., 2016): illegal employment
- Minimum wage (Barnes et al., 2015; Crane, 2013; Quirk, 2006): living wage, fair pay
- Human rights (Locke et al., 2013; Quirk, 2006; LeBaron and Rühmkorf, 2017; Feasley, 2016): compliance with human rights, human rights training for employees/contractors, freedom of association, labour rights, human rights impacts

2. Health and safety in the supply chain (maximum score 2)
- Safety (Wells, 2009; Locke et al., 2013): safe working conditions, treatment of employees/contractors, labour conditions, employee safety training, fair working hours
- Abuse and violence (Pierce, 2011; Quirk, 2006; Crane, 2013): physical/verbal harassment, sexual exploitation or abuse, violence/threatening behaviour

3. Supplier assessment (maximum score 2)
- Screening (Locke et al., 2013; LeBaron and Lister, 2015): supplier audits, site visits, supplier screening, supplier survey/questionnaire, supplier monitoring, KPIs
- Risk assessment (Locke et al., 2013; Smith and Betts, 2015): supplier risk assessment, identifying high-risk suppliers/high-risk countries, employee training on risk assessment

4. Supplier code of conduct (maximum score 4)
- Diversity (Feasley, 2016; Barrientos, 2008): supplier maintaining diversity, inclusion, non-discrimination policy, equal opportunity, respect/dignity
- Whistle blowing (Armstrong, 2016): raising concerns, complaints, grievances
- Bribery and corruption (LeBaron and Rühmkorf, 2017; Stringer et al., 2016): anti-bribery, bribery, corruption, gifts, facilitation payments
- Code of conduct (LeBaron and Rühmkorf, 2017; Smith and Betts, 2015): supplier code of conduct, compliance statement related to MS, ethical sourcing training, procurement training, supplier training

5. Modern slavery (MS) – general (maximum score 1)
- General/others (residual themes): any general statement related to MS, such as the MS Act, submissions, Australian initiative, business environment, supplier/contractor/sub-contractor numbers

Notes: Total number of themes = 5 and sub-themes = 14. Keywords used for search: contract, exploit, forced, human, labour (labor), offshoring, outsource, slave, subcontract, supply chain, trafficking, wage.

Table II. Summary of ASX 100 sample: number
of companies in the sample using each source of disclosure (total market capitalisation $1,619,061,143,861): standalone sustainability reports, 39; integrated in annual report (CSR, ESG or sustainability reports published in annual reports), 52; online reports, 87; supplier code of conduct, 41; human rights statement, 32.

Table III. Modern slavery, supplier code and human rights disclosures (n = 100)
- Companies not disclosing a modern slavery statement: 63
- Companies disclosing a modern slavery statement: 37, of which
  - following the UK Modern Slavery Act 2015: 32
  - following the UK Modern Slavery Act 2015 and the Californian Transparency in Supply Chains Act 2010: 3
  - not following an Act: 2
- Companies with a supplier code of conduct: 41
- Companies with a human rights statement: 32

Table IV. Modern slavery disclosures listed by theme. For each report type the columns give total companies disclosing, mean sentences and maximum sentences; the minimum level of disclosure for all themes is 0.

Supply chain theme | Annual reports | Standalone reports | Websites
Child labour | 2, 0.06, 5 | 11, 0.28, 8 | 38, 1.43, 19
Forced labour | 2, 0.07, 5 | 11, 0.26, 5 | 39, 1.36, 13
Human trafficking | 0, 0.00, 0 | 2, 0.05, 4 | 14, 0.25, 4
Minimum wage | 2, 0.15, 10 | 6, 0.18, 10 | 26, 0.75, 8
Human rights | 7, 0.23, 6 | 17, 1.29, 32 | 54, 7.32, 131
Safety | 5, 0.23, 10 | 12, 0.51, 18 | 64, 5.56, 27
Abuse | 1, 0.02, 2 | 0, 0.00, 0 | 50, 3.22, 71
Screening | 3, 0.16, 12 | 8, 1.10, 38 | 16, 1.26, 18
Assessment | 6, 0.17, 5 | 18, 1.32, 22 | 29, 3.00, 35
Diversity | 2, 0.02, 1 | 6, 0.19, 10 | 66, 5.37, 68
Whistle blowing | 5, 0.10, 3 | 9, 0.52, 20 | 62, 6.35, 84
Bribery and corruption | 7, 0.21, 8 | 4, 0.13, 5 | 60, 17.73, 179
Code of conduct | 17, 0.76, 17 | 19, 0.97, 14 | 54, 3.98, 30
MS – general | 10, 0.44, 17 | 15, 0.58, 12 | 43, 12.08, 151
Total sample | 28%, 2.62, 57 | 28%, 7.38, 108 | 86%, 69.04, 319

Notes: The number of sample companies disclosing equals the percentage of the sample disclosing (n = 100); "total companies disclosing" refers to the number of the top 100 ASX companies making disclosures about the sub-theme; "mean sentences" denotes the statistical mean of sentences disclosed; "maximum sentences" signifies the highest number of sentences disclosed by any company in the sample.

Table V. Number of companies making quantitative disclosures, by number of main themes disclosed (maximum 5). The five main themes (from Table I) are: human rights in the supply chain; health and safety in the supply chain; supplier assessment; supplier code of conduct; and modern slavery (MS) – general.

Main themes disclosed | Annual reports (n = 100) | Standalone reports (n = 100) | Website disclosures (n = 100)
0 | 91 | 83 | 56
1 | 7 | 6 | 24
2 | 0 | 3 | 12
3 | 0 | 4 | 6
4 | 2 | 3 | 2
5 | 0 | 1 | 0

Table VI. Number of companies making quantitative disclosures, by sub-theme (maximum score in any cell is 100)

Sub-theme | Annual reports (n = 100) | Standalone reports (n = 100) | Website disclosures (n = 100)
Child labour | 0 | 3 | 13
Forced labour | 1 | 2 | 0
Human trafficking | 0 | 0 | 0
Minimum wage | 0 | 0 | 0
Human rights | 2 | 6 | 5
Safety | 2 | 3 | 6
Abuse | 0 | 0 | 0
Screening | 1 | 6 | 1
Supplier assessment | 0 | 6 | 0
Diversity | 0 | 1 | 2
Whistle blowing | 0 | 3 | 15
Bribery and corruption | 1 | 0 | 10
Supplier code of conduct | 4 | 4 | 5
Modern slavery (MS) general | 4 | 7 | 25

Table AI. Examples of modern slavery narrative and quantitative disclosures

Child labour (narrative): We do not employ forced, bonded or child labour (Rio Tinto, Annual).
BIG W has committed to not using cotton from Uzbekistan due to the systemic use of child and forced labour in harvesting cotton (Woolworths, Standalone) Notwithstanding local requirements, the minimum age for entry into employment must not be younger than 15 years of age (BHP, Web). The minimum age for hazardous work is 18 years (Brambles, Standalone) Forced labour Orica prohibits the use of all forms of forced labour (Orica, Standalone) All suppliers are requested to sign a statutory declaration certifying that they have investigated their own labour practices and those of their direct suppliers to ensure they use no slavery or forced labour (Fortescue Metals, Annual) At the end of this reporting period, there were 1,555 approved factories in our audit programme. A further 1,373 factories were conditionally approved and 241 were due to be re-audited. During the year, we identified 46 critical breaches across 42 factories in our audit programme. These concerned issues (or suspected issues) of attempted bribery, forced labour, unauthorised subcontracting, transparency and child labour (Wesfarmers, Standalone) Human trafficking ANZ’s FCU has also partnered with Liberty Asia, which aims to prevent human trafficking and slavery through legal advocacy and collaboration with NGOs and financial institutions based throughout South-East Asia (ANZ, Web). 
Businesses need to be alert to the risks posed by slavery and human trafficking, regardless of their sector or where they operate (Rio Tinto, Web) No examples Minimum wage Employees not under an employee union/enterprise agreement (EA) are covered by individual employment agreements and in all cases these agreements remunerate at or above minimum wage (CSL, Standalone) We expect that suppliers to the Group will: Provide fair pay and working conditions for employees, including meeting minimum wage requirements and compensation (Commonwealth Bank of Australia, Web) No examples Human rights […] we became a participant in the Voluntary Principles on Security and Human Rights and are now working to further embed human rights principles into all our processes (Oil Search, Annual) We seek to uphold human rights in all business activities (Fisher & Paykel Healthcare, Web) In F[iscal Year 20]17, the Responsible Procurement Code (RPC) was rolled out to all ANZ [Australia and New Zealand] suppliers with an initial response rate of 80% (Treasury Wine Estates, Web) Safety We work with [stakeholder] groups to innovate, share knowledge and implement the safest standards, train our people on safe use of plant and equipment and identifying behavioural triggers that contribute to critical errors and unsafe behaviour (Boral, Annual). Suppliers will comply with all applicable health and safety laws (Coca Cola Amatil, Web) Amcor strives to provide a safe and motivating workplace for our 31,000 co-workers around the world.
Our related priorities are: Realising our goal of “No Injuries” (Amcor, Standalone) Abuse The Group does not tolerate harassment, discrimination, bullying, vilification, occupational violence or victimisation on any grounds, whether by race, gender, sexual preference, marital status, age, religion, colour, national extraction, social origin, political opinion, disability, family or carer’s responsibilities, or pregnancy, as reflected in Group policies including the Group Anti Bullying, Harassment and Discrimination Policy and supported by the recent implementation of face-to‐face Group Equal Employment Opportunity, Discrimination, Bullying and Harassment training (CIMIC, Annual). Amcor recognises the dignity of each co-worker, and the right to a workplace free of harassment and abuse (Amcor, Web) No examples Screening Having reviewed the outcomes of the three-year screening programme, this year we have focussed on identifying suppliers operating in countries assessed as high risk by Transparency International. A total of 12 countries have been identified, predominantly in south-east Asia, where legal and regulatory frameworks governing issues such as occupational health and safety and labour rights are sometimes less mature (ANZ, Standalone). Our processes ensure that supply chain risks are identified and quantified with appropriate controls and performance measures included in contract and supplier management plans. Anti-bribery and corruption due diligence processes ensure appropriate screening, evaluation and monitoring of all third parties with whom we do business (Woodside Petroleum, Standalone) Last year we completed a three-year independent screening programme in which suppliers were selected (based on spend and potential risk) and screened for compliance against our SCOP. 
More than 8,600 suppliers went through this process, with suppliers identified as “high risk” and non-compliant being subjected to a deeper level of screening (ANZ, Standalone) We have improved the transparency of our supply chain with more than 3,200 factories in our audit programme (Wesfarmers, Annual) Assessment We established a dedicated third party due diligence team within our Ethics and Integrity function to facilitate risk-based due diligence assessments on our commercial relationships. The assessments cover bribery, corruption, human rights, money-laundering, trade sanctions, denied parties risks and other areas which may result in reputational concerns (Rio Tinto, Standalone). All suppliers are subjected to a robust risk assessment and due diligence process (Fortescue Metals, Annual) A supplier’s response to our Supplier Scorecard, 10% of which is in regards to sustainability risks and opportunities, determines whether or not they need to be risk-assessed (Amcor, Standalone) We are working with all of our 24 high risk suppliers to understand their approach to managing sustainability risks and assessing their performance. High risk suppliers were identified from a supply chain risk assessment conducted in FY14 covering 131 suppliers representing 66 per cent of Telstra’s spend (Telstra, Standalone) Diversity We support a diverse workforce, attracting a wide range of skills and talents from employees across all levels of James Hardie (James Hardie, Web). Suppliers are expected to: Respect the diversity of their employees, clients and others with whom they interact, including respect for differences such as gender, race, colour, age, disability, sexual orientation, ethnic origin and religion (ASX, Web) A gender diversity objective is included in the Group’s Balanced Scorecard and progress against this is reported on a monthly basis to the organisation, Leadership Team and Board. 
Clear accountability for progress was established with the target of 40% female representation in senior leadership by end of 2016 being set and communicated externally (GPT Group, Web) Whistle blowing The Qantas Group has established an effective Whistleblower reporting and investigation framework for employees to report breaches of any law, regulation, or any Qantas Group Policy (Qantas, Web). Co-workers and third parties can raise grievances via our independent Whistleblower service. All complaints received by the third party Whistleblower service provider are referred to the relevant Whistleblower Committee (Human Resources or Audit and Compliance) (Amcor, Standalone) Stakeholders are able to anonymously bring instances of alleged inappropriate conduct to our attention via CSL’s global hotline process. From 1 July 2015 to 30 June 2016, 49 such instances were raised for the attention of management. For substantiated allegations, corrective actions were taken to the extent warranted. No allegations resulted in any regulatory action or action by law enforcement authorities and there was no indication of any increased risk per se (CSL, Standalone). Our employees have access to a Speak Up Hotline (via phone, e-mail and the internet) 24 h a day, 7 days a week (Orica, Web) Bribery and corruption The Policy […] includes a requirement for the Company’s employees to take steps to satisfy themselves that the Company is dealing with suppliers that do not engage in bribery/corruption (Ramsay Health Care, Annual) In the spirit of our “Global Core Values”, Code of Business Conduct, and regional Codes of Conduct, Cochlear is committed to conducting our operations in every country where we do business, in compliance with all applicable laws against bribery and corruption. We are committed to acting honestly and fairly (Cochlear, Web) By the end of the year, 99% of all employees and on-site consultants had completed Corruption Prevention training (Oil Search, Annual).
Our Code of Conduct training incorporates Anti-Bribery and Corruption (ABC) training for all employees and contractors who do work on behalf of Origin. This is completed within 30 days of joining Origin, and every two years thereafter (Origin Energy, Web) Code of conduct The Vendor shall adopt supply chain engagement principles and practices that cover the key themes detailed in this Code of Conduct when dealing with their own key Vendors, such as: Vendors with high spend, strategic alliance or long-term supply of goods/services; Vendors with operations in high risk locations according to Transparency International guidelines ( ) (Mirvac, Web) In 2014, AGL developed a Supplier Code of Conduct, outlining a minimum set of requirements that suppliers must adhere to when engaging in business with AGL (AGL, Standalone) Orica takes breaches of the Code of Conduct seriously […]. The Speak-Up Line is available to all Orica employees globally. In 2016, the Speak-Up Line received 103 reported incidents. The incidents reported included policy breaches (49), bullying/harassment (29), theft/fraud (14), health and safety (8) and workplace grievance (3). Of the reported incidents, 26 were substantiated and resulted in action being taken towards employees involved. Overall, 10% of reports were received from the Australia Pacific and Indonesia region, 26% from Europe/Middle East/Africa/Asia, 56% from Latin America and 8% from North America (Orica, Standalone)

Notes:
1. Decision rules available on request from the corresponding author.
2. Between acceptance of this paper and publication, Modern Slavery legislation was passed by the Australian Parliament on 29 November 2018, with the first reports due on 31 December 2020.
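The instrument in Table I pairs each of the 14 sub-themes with search keywords, counts sentences as the unit of disclosure, and distinguishes narrative from quantitative content. A minimal sketch of how such a scoring pass might be implemented follows; the keyword lists, function names and the digit-based quantitative flag are illustrative assumptions, not the authors' actual decision rules (which are available on request):

```python
import re

# Illustrative keyword map covering four of the 14 sub-themes in Table I.
# The paper's full decision rules are available from the corresponding author.
SUB_THEMES = {
    "child_labour": ["child labour", "child labor", "minimum age"],
    "forced_labour": ["forced labour", "forced labor", "bonded labour", "compulsory labour"],
    "human_trafficking": ["human trafficking", "trafficking"],
    "bribery_corruption": ["bribery", "corruption", "facilitation payment"],
}

def split_sentences(text):
    """Naive sentence splitter; the study used sentences as its disclosure unit."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def score_report(text):
    """Return, per sub-theme, the sentence count (disclosure volume) and a 0/1
    flag marking whether any matching sentence contains a number, a crude
    proxy for the study's narrative-vs-quantitative distinction."""
    sentences = split_sentences(text.lower())
    results = {}
    for theme, keywords in SUB_THEMES.items():
        hits = [s for s in sentences if any(k in s for k in keywords)]
        quantitative = int(any(re.search(r"\d", s) for s in hits))
        results[theme] = {"sentences": len(hits), "quantitative": quantitative}
    return results

report = ("We prohibit forced labour in our supply chain. "
          "During the year we identified 46 breaches of our anti-bribery policy.")
scores = score_report(report)
```

A full implementation would cover all 14 sub-themes and apply the paper's own rules for recognising target-setting, quantified disclosure rather than this digit heuristic.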
New research from the University of South Australia finds that Australian businesses are ill-prepared for mandatory modern slavery reporting, with more than two-thirds of ASX 100 companies unable to produce a disclosure statement about potentially exploitative labour practices. It is a concerning finding given that initial reporting periods have already commenced requiring Australian businesses to deliver their first modern slavery statements by 31 December 2020. UniSA researchers Dr. Katherine Christ and Dr. Kathy Rao say that in order to meet the requirements of Australia's Modern Slavery Act, businesses must significantly ramp up their efforts to ensure they have the systems and procedures ready to report on modern slavery. "Under federal legislation, businesses with turnovers of more than $100 million must declare what they're doing to eradicate slavery in their operations and supply chains, yet evidence shows the majority of Australian businesses are underprepared," Dr. Katherine Christ says. "While a third of Australian businesses sampled in this research were able to produce a modern slavery statement, the volume and quality of their disclosures was low—typically narrative and descriptive of policy—plus there were many inconsistencies of where and how it was reported. "The general nature of these disclosures just isn't enough. For Australian businesses to appropriately report on modern slavery, they must be able to produce systematic quantitative data with associated targets and finances." Modern slavery is the illegal and inappropriate labour practices that include human trafficking, forced labour, child labour, organ trafficking, sex exploitation, debt bondage and other slavery-like practices. 
Driven by consumer demand for cheap goods, it is embedded within many of the products used by Australians every day, including 73 per cent of imported computers, mobile phones and laptops (representing an estimated US$7.0 billion) and 70 per cent of imported clothing and accessories (representing an estimated US$4.5 billion). Globally, more than 40 million people are trapped in modern slavery, with nearly 25 million of these forced into slave labour. In Australia, 15,000 people are victims of modern slavery. This research is the first to consider the state of modern slavery disclosures within an Australian context, providing a useful benchmark against which the impact of Australia's Modern Slavery Act can be measured. Dr. Kathy Rao says that eradicating modern slavery requires commitment across all levels of Australian business. "Modern slavery is a far-reaching and devastating issue, but the responsibility to eliminate it does not sit with business alone," Dr. Rao says. "While all businesses have a responsibility to mitigate the risks of modern slavery, addressing modern slavery requires a joint and ongoing commitment from all parties including governments, business leaders and boards, suppliers, contractors, NGOs, researchers, professional bodies and the general public. "Only through a committed process of ongoing and continuous improvement will we be able to ensure modern slavery is truly a thing of the past."
10.1108/AAAJ-11-2017-3242
Earth
Study yields new clues to predict tipping points for marsh survival
Anna E. Braswell et al, Coastal Wetland Distributions: Delineating Domains of Macroscale Drivers and Local Feedbacks, Ecosystems (2019). DOI: 10.1007/s10021-018-0332-3 Journal information: Ecosystems
http://dx.doi.org/10.1007/s10021-018-0332-3
https://phys.org/news/2019-02-yields-clues-marsh-survival.html
Abstract

How do multiple stable states influence local and macroscale ecological patterns? Understanding how local feedbacks operate within heterogeneous coastal environments is essential to forecasting marsh persistence and loss in response to sea level rise, river impoundment, and other environmental changes. In coastal lagoons, feedbacks between open water, wind erosion, and stabilizing effects from wetland vegetation produce two states: open water with fringing marshes, and marsh-dominated basins. Unknown is whether, how, and at what scales these feedbacks affect distribution of marsh ecosystems in large, complex estuaries, where macroscale coastal and watershed characteristics control suspended sediment and wind energy. Using a multi-scale geospatial analysis spanning the Atlantic and Gulf coasts of the USA, we show that characteristics of estuaries (depth, shoreline complexity, land use, tidal range, latitude) and their watersheds (discharge) best predict wetland extent at broad spatial scales (roughly >10^2 km^2). Bimodal distribution of wetland extent occurs at finer scales (roughly 10^0 to 10^2 km^2) that correspond to the theoretical scale of wave erosion feedbacks. Local feedbacks are thus the mechanism that controls marsh dynamics, whereas coastal and watershed characteristics determine macroscale wetland distributions through their effects on these processes. In coastal marshes, and likely in other complex landscapes, the spatial extent of feedbacks shapes the scales at which local and macroscale processes control the distribution of alternative stable states. These findings help predict scales at which coastal wetlands will respond catastrophically or continuously to environmental change, and provide a basis for multi-scale strategies to protect and restore coastal wetland ecosystems.
Manuscript Highlights
- We evaluated coastal wetland patterns across the eastern USA to understand drivers of persistence.
- Feedbacks between erosion and stabilizing vegetation control wetland extent at the local scale.
- Feedbacks and macroscale drivers are primarily influential at different scales.

Introduction

How do alternative stable states influence ecological patterns over broad spatial scales? Alternative stable states can be expressed over whole ecosystems, such as lakes or sand dunes (Scheffer and others 1993; Hayden and others 1995; Scheffer and van Nes 2007), usually when such systems are relatively small, well-bounded, homogeneous, and well-mixed. In most ecosystems, and especially over broad spatial scales, physical boundaries are less distinct (Schlesinger and others 1990; Staver and others 2011), and conditions are heterogeneous. This heterogeneity can disrupt or constrain feedbacks and thus alter or obscure the occurrence and distribution of alternative stable states. The relative influence of external drivers and self-organizing feedbacks may also vary with scale, with internal feedbacks shaping local patterns but macroscale physical drivers over-riding those effects at other scales or in particular locations (Sheffer and others 2013). In this study of coastal wetland distributions (Figure 1A), our objective is to test whether alternative stable states are most clearly expressed at scales determined by the spatial scales of feedback processes and landscape heterogeneity.

Figure 1. Multi-scale patterns of wetland extent and their hypothesized drivers. Our study domain (A) spanned the Gulf and Atlantic coasts of the USA. At five spatial scales corresponding to USGS hydrologic unit codes, we determined the fraction of estuary occupied by wetlands and open water.
The range of this characteristic is illustrated by (B) the Neuse River Estuary, NC, an example of fringing wetland, and (C) the Satilla and Brunswick River Estuaries, GA, an example of extensive wetlands. At the same scales, we quantified estuarine and watershed characteristics, including relative volumes of river and tidal inflows (D; shown at HUC 4 scale along the NC coastline), maximum depth (E; HUC 8), and shoreline complexity (F; HUC 12).

Coastal wetlands are strongly connected to both upland watersheds and adjacent estuaries, whose alteration by human activity has caused wetland degradation and loss (Kennish 2001; Lotze and others 2006). Loss or alteration of coastal wetlands reduces the ecosystem services they provide, such as shoreline protection, fishery nurseries, nutrient filtration, and carbon sequestration (Zedler and Kercher 2005; McLeod and others 2011; Hansen and others 2018). Along coasts, alternative stable states of open water (Figure 1B) and extensive, basin-filling marshes (Figure 1C; Morris and others 2002; Marani and others 2007; Mariotti and Fagherazzi 2013) are thought to arise from a feedback between tidal flat width and extent of open water (which influences fetch, the length of open water over which wind can blow to produce waves). The ratio of these morphological features, in turn, affects marsh building by vegetation (Redfield 1965; Mudd and others 2009) and wave erosion at wetland edges (van de Koppel and others 2005; Marani and others 2010; Mariotti and Fagherazzi 2013). Above a critical fetch, wave energy promotes rapid edge erosion, increasing open water extent and tidal flat width; below this critical fetch, marshes expand until the basin is filled.
Critical fetch for a specific estuary depends on suspended sediment concentration (SSC) and the rate of sea level rise (Mariotti and Fagherazzi 2013), and these factors should determine the spatial scales at which the resulting states are most influential and apparent. Understanding how local, self-organizing feedbacks operate within heterogeneous coastal environments is essential to forecasting marsh persistence and loss in response to sea level rise, river impoundment, and other environmental changes.

Among the macroscale factors that might influence marsh extent are coastal and near-shore morphology, which control tidal dynamics and marine sediment inputs (Ardhuin and others 2003; Stone and others 2005); upland sediment inputs, which differ depending on watershed size, terrestrial geology, and land use within the watershed (Mattheus and others 2009; Kirwan and others 2011; Tweel and Turner 2012); and climate-driven variation in vegetation productivity and phenology (Pennings and Bertness 2001; Osland and others 2013, 2014). The distribution and covariation of these geophysical factors across multiple spatial scales creates a template within which local marsh building occurs.

Shape, size, and depth of estuaries determine the energy environment, thereby affecting the ability of marshes to form and persist. For example, more dendritic or complex shapes reduce effective fetch compared to rounder estuaries (Wetzel 2001; Ortiz and others 2017). Depth also affects wetland establishment via multiple mechanisms. Waves are less attenuated by bottom interactions at larger depths, leading to stronger erosive waves (Le Hir and others 2000). The slope of the estuary/upland interface can determine marsh transgression into terrestrial or upland areas, affecting wetland extent. Moreover, the depth and extent of shallow tidal flats determine the area available for colonization by wetland plants (Le Hir and others 2000; Möller 2006).
Deeper estuaries should have fewer coastal wetlands because of all of these effects.

Marshes receive regular tidal pulses of water, nutrients, and sediment. At higher tidal ranges, marshes tend to be more resilient to sea level rise because the vegetation growth range expands with increasing tidal range (Morris and others 2002; Kirwan and Guntenspergen 2010), and inundation promotes organic matter accretion and sedimentation on the marsh surface (Pethick 1981; Friedrichs and Perry 2001; Mudd and others 2009; Kirwan and others 2010). Therefore, marshes with higher mean tidal range may be more persistent (Kirwan and Guntenspergen 2010). Theoretical models suggest that vertical tidal dynamics can, but do not always, influence the lateral extent of wetlands (Yousefi Lalimi 2018).

Estuaries receiving discharge from sediment-rich rivers have a greater capacity to build wetlands (Kirwan and others 2010, 2011; Tweel and Turner 2012; Yousefi Lalimi 2018). Over the geologic period of formation of extant marshes, riverine suspended sediment concentrations were generally higher than concentrations in near-ocean environments (Milliman and Meade 1983; Syvitski and Milliman 2007). We therefore expect greater wetland abundance where river discharge is large relative to tidal volume, and thus the contribution of river suspended sediment is greatest.

Finally, land use along coasts can affect the extent of wetlands (Gedan and others 2009) through recent alteration of coastlines and historic conversion to uplands. Therefore, with increased development (that is, urban land use) along the shoreline, we would expect lower wetland extent (Kennish 2001; Coverdale and others 2013; Gittman and others 2015). Although many other anthropogenic stressors can affect wetland persistence, coastal development is most directly related to wetland loss. Other mechanisms, such as river impoundment and upland erosion, operate within the context of watershed–estuary interactions.
We do not consider the more complex effects of historic land use, impoundment, or other potential anthropogenic drivers of wetland extent in this study.

Watershed, estuarine, and anthropogenic influences on wetland extent all potentially operate across a range of scales. At local scales, strong feedbacks in marsh formation (Morris and others 2002; Kirwan and Murray 2007; Marani and others 2007; Mariotti and Fagherazzi 2013) could modify or even overwhelm the relationship of these drivers with marsh extent (Sheffer and others 2013). Specifically, theoretical models suggest that the critical fetch length separating fringing-marsh and marsh-filled states ranges from approximately 1–10 km (Mariotti and Fagherazzi 2013), depending on suspended sediment, wave environment, and rate of sea level rise. We therefore expect the signature of feedbacks (that is, bimodal distributions of wetland extent; Scheffer and Carpenter 2003; van de Koppel and others 2005; Wang and Temmerman 2013) to be most apparent when coastal wetland extent is evaluated at correspondingly fine scales (that is, coastal estuary segments of 1–100 km²). Coastal and upland drivers operate at particular scales associated with major physiographic regions or watersheds, but may also influence wave energy, sediment concentration, and other proximate drivers of wetland dynamics at finer scales. Nonetheless, we expect these drivers to exert the strongest influence on coastal marsh distributions at broader spatial scales, where the effects of local feedbacks are spatially averaged.

Our primary objectives for this study were (1) to characterize the distribution of coastal wetlands at multiple spatial scales, (2) to evaluate these distributions for signatures of alternative stable states, and (3) to relate variation in marsh extent to coastal and watershed characteristics at each scale.
Together, these analyses allow us to characterize the relative influence of local self-organizing feedbacks and the geophysical template at each scale. We hypothesize that alternative stable states in coastal wetlands are most clearly expressed at the scale determined by the spatial scale of fetch feedback processes and the strength of landscape heterogeneity governed by broad-scale estuarine and watershed drivers. If this feedback-scale hypothesis is correct, then the signatures of local feedbacks in coastal wetland distributions should be strongest at the spatial scales corresponding to the scales of modeled fetch-erosion feedbacks (that is, estuaries < 100 km²). Alternatively, the outcome of local wetland feedbacks might be expressed at all scales, or be overwhelmed at all scales by geophysical heterogeneity. Expression of multiple modes of wetland extent at scales that diverge from theoretical predictions of the fetch-erosion hypothesis would indicate that some other mechanism produces open water- and marsh-dominated states.

Materials and Methods

Approach

In this study, we used a geospatial approach (Figures 1, 2) to understand interactions between local feedbacks and broader-scale estuarine and watershed characteristics at multiple scales. Using the intersection of watershed and coastal boundaries, we produced hydrologically coherent coastline segments at five different scales of aggregation (Figure 1, Table 1). At each of these scales, we evaluated coastal wetland extent (measured as the proportion of total estuary area occupied by emergent fresh or salt marsh) for bimodality (an indicator of local feedbacks) and assessed how wetland extent varied with coastal morphology and watershed size. Within each of these scales, coastal segments varied substantially in size, allowing us to evaluate how the effect of these drivers depends on estuary size both within and across scales of analysis.
Figure 2 Workflow for processing publicly available datasets in our geospatial framework. Estuary units were formed at five different scales based on USGS Hydrologic Units. Wetlands from the National Wetlands Inventory were selected for each estuary unit. Watersheds were delineated along the coastline and then identified for each estuary unit. NOAA bathymetry was used to determine the maximum depth of each estuary unit. Finally, NLCD land cover files were used to assess the percent of the shoreline with developed land cover for each estuary.

Table 1 Variable Glossary

Empirical evaluation of ecogeomorphic feedbacks, structures, and drivers at the broad spatial scales of this study necessitates imperfect assumptions and simplifications. In this project, we use geospatial proxies of geomorphic drivers and processes to test spatial predictions that follow from finer-scale, process-based models (Marani and others 2007; Kirwan and others 2010; Mariotti and Fagherazzi 2013). For example, we use the river/tide ratio (RTR) as a proxy of relative suspended sediment concentration. We reason that higher RTR should favor higher suspended sediment concentrations from river inputs, which, based on mechanistic models, should favor greater wetland extent. These proxies are necessary to achieve broad spatial coverage of the potential drivers of wetland extent. Because of the snapshot nature of the geospatial data, we evaluate predictions over space and cannot explicitly represent the temporal dynamics and path dependence of coastal wetland systems. With this temporal limitation, we also cannot directly address wetland change due to processes such as sea level rise, although the results of this study have implications for the future persistence of coastal wetlands under sea level rise.
Our approach trades dynamic representation of processes for the generality and statistical power that follow from fine-grained, multi-scale coverage of a large, heterogeneous study domain. In some cases, the use of geospatial proxies may linearize more complex geomorphic relationships and processes; however, because the specific form of those relationships and processes is likely to vary among estuaries, linear representations may be not only necessary but also, in our view, more appropriate at the scales of this analysis. Moreover, even highly resolved process-based models make linearizing assumptions (for example, Kirwan and Murray 2007). To the extent that these simplifications deviate from reality, they should reduce rather than inflate our explanatory statistical power. Our analysis, then, is likely a conservative estimate of the influence of each driver.

Study Domain and Geospatial Framework

For our geospatial analysis, we characterized coastal wetlands, associated estuaries, and watersheds using existing, publicly available geospatial data. This study covers the Atlantic and Gulf coasts of the USA (Maine to the eastern Texas coast; HUC 4: 0105-1211; excluding 0109) and uses five spatial grains (hydrological unit or HUC 4, 6, 8, 10, 12; largest to smallest) to aggregate watershed and estuarine characteristics and evaluate influence on wetland extent. At each of these scales, we used USGS Hydrological Units and the coastline to generate coastal segments that were meaningfully associated with specific watersheds (Figure 2; see Appendix A in Supporting Information). Coastlines were aggregated at each scale based on the coastal boundary of HUC watersheds (Python 2.7.2, ESRI-ArcMap 10.1). Of the coastlines associated with each watershed scale, the 43 HUC 4 watersheds that drain to the Atlantic Ocean and Gulf of Mexico average ca. 200 km of coastline; coastlines defined by HUC 8 and HUC 12 watersheds average 70 and 13 km, respectively.
This nested delineation allowed us to assess wetland extent within, for example, the entire Albemarle-Pamlico Sound, NC, at the scale of the arms of major tributaries (for example, Neuse River, NC) and that of tributary sub-watersheds (for example, Adams Creek, NC). Extremely small units (estuary area < 1 km²) did not reflect coherent estuaries and were excluded from regression analysis.

Variables and Datasets

We calculated wetland extent as the fraction of total estuary area (open water plus wetlands) occupied by wetlands. We used the National Wetlands Inventory (US Fish and Wildlife 2014) to determine the areal extent of coastal wetlands at each spatial scale (Figure 2; see Appendix A in Supporting Information). To determine total estuarine area, we used HUC boundaries, wetland area, and bathymetry. We calculated wetland extent as a proportion of total estuarine area for each HUC-defined polygon: A_w/(A_w + A_e), where A_w is coastal wetland area and A_e is area of open water.

We characterized coastal segments at each scale in terms of: estuary area (wetland plus open water area); shoreline development factor (SDF) as a measure of shoreline complexity; mean tidal range; river/tide ratio (RTR; runoff volume to tidal inflow) as a proxy of sediment inputs; maximum depth as a proxy for both wave energy and the availability of shallow habitat for potential marsh establishment (Figure 2; see Appendix A in Supporting Information); and latitude as a proxy of vegetation productivity and climate (Table 1, Figure 2). To develop a proxy for shoreline complexity, we used the morphometric measurement SDF, commonly used in lake and pond studies (see Appendix A in Supporting Information; Wetzel 2001; Ortiz and others 2017).
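In limnology, the shoreline development factor compares a basin's shoreline length to the circumference of a circle of equal area (Wetzel 2001), so a perfect circle scores exactly 1 and more dendritic shapes score higher. A minimal sketch of that standard formula (the paper's exact implementation is in its Appendix A, which is not reproduced here):

```python
import math

def shoreline_development_factor(shoreline_km, area_km2):
    """SDF = L / (2 * sqrt(pi * A)): shoreline length relative to the
    circumference of a circle with the same area."""
    return shoreline_km / (2.0 * math.sqrt(math.pi * area_km2))

# A circular basin scores 1; anything less round scores higher.
circle = shoreline_development_factor(2 * math.pi * 5.0, math.pi * 5.0 ** 2)  # r = 5 km
square = shoreline_development_factor(4.0, 1.0)  # 1 km x 1 km square, SDF ~ 1.13
```

Because SDF is dimensionless, estuaries of very different sizes can be compared directly on shape alone.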
To determine the mean annual tidal range, we used VDatum software (3.6) (National Oceanic and Atmospheric Administration 2016) to determine mean high water (MHW) and mean low water (MLW) and create a fishnet of tidal data points for each HUC unit (Python 2.7.2, ESRI-ArcMap 10.1; Osland and others 2014). To estimate the mean tidal range, we calculated the difference between average MHW and MLW for each estuary unit.

We used shoreline land use and development of the coastline as a signal of human alteration and modification of coastal wetlands. As the anthropogenic stressors on wetlands are diverse and varied, this proxy does not depict many important stressors, such as sediment starvation by dams and watershed land use, and is likely an imprecise proxy for hardening of the shoreline (Gittman and others 2015). Relating wetland extent to anthropogenic change in sediment and water fluxes from rivers, such as impoundment, flow regulation, and upland erosion, requires more sophisticated approaches to data analysis that are beyond the scope of this study (see Braswell 2017).

We calculated the RTR to compare the annual volume of watershed runoff with the annual volume of tidal inflows. Specifically, we calculated RTR by dividing the annual runoff volume from each watershed by the annual tidal volume of the associated estuary (see Appendix A in Supporting Information). To determine contributing upland area (Figure 2), each HUC estuary unit was intersected with watersheds derived from the National Hydrography Dataset (United States Environmental Protection Agency and United States Geological Survey 2012). We determined mean annual runoff from each watershed using globally averaged runoff data from years 1950–2000 (Conservation Biology Institute 2010). Annual tidal volume was estimated from the sum of daily mean tidal range multiplied by estuary area.
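The ratio itself reduces to two volume estimates. The sketch below uses purely illustrative numbers (a hypothetical 5000 km² watershed draining to a 100 km² estuary) and approximates the paper's daily summation with a constant mean tidal range; Appendix A of the paper describes the actual calculation.

```python
def river_tide_ratio(runoff_m_per_yr, watershed_km2,
                     mean_tidal_range_m, estuary_km2, days=365):
    """RTR = annual watershed runoff volume / annual tidal volume.
    Tidal volume follows the paper's approximation: daily mean tidal
    range times total estuary area, summed over the year."""
    runoff_km3 = (runoff_m_per_yr / 1000.0) * watershed_km2   # depth x area
    tidal_km3 = (mean_tidal_range_m / 1000.0) * estuary_km2 * days
    return runoff_km3 / tidal_km3

# Illustrative values only: 0.3 m/yr runoff over 5000 km2, 1.0 m tides over 100 km2.
rtr = river_tide_ratio(0.3, 5000.0, 1.0, 100.0)  # tide-dominated: RTR << 1
```

Even this modest example shows why RTR spans many orders of magnitude across estuaries: runoff depth, watershed area, tidal range, and estuary area each vary widely and enter the ratio multiplicatively.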
We used the sum of open water and wetland area to estimate tidal volume, making the assumption that the wetland platform sits at mean sea level and is fully flooded during each tidal cycle. This approach overestimates tidal volume on the contemporary marsh, but RTR varies by ten orders of magnitude across our dataset, making the error in this assumption negligible. Moreover, calculating RTR based on total area rather than current open water area better reflects initial, historical conditions before wetland establishment.

Statistical Analysis

One of the operational indicators of alternative states in empirical data is a bimodal or multimodal distribution of the state variable (Scheffer and Carpenter 2003). We tested the frequency distribution of wetland extent, our response variable, to determine whether the data at each scale are better described by a mixture of two or more normal distributions than by a single normal distribution (van de Koppel and others 2001; Heffernan 2008; Watts and others 2010). We fit the distribution of arcsine-square-root-transformed wetland extent using 'mclust' (version 5.2.3; R 3.4), a package that uses Gaussian mixture modeling for model-based clustering and density estimation (Fraley and Raftery 2002; Fraley and others 2012). We conducted this analysis on data from each scale of analysis (HUC 4, 6, 8, 10, 12) and across slices of estuary area (< 10^0, 10^0–10^1, 10^1–10^2, and > 10^2 km²) at the finest scale (HUC 12; see Appendix A in Supporting Information).

We compared empirical estimates of fetch with critical thresholds reported by biogeomorphic models (Mariotti and Fagherazzi 2013) using a subset of data at the HUC 12 scale. Coastal sections included in this analysis matched the theoretical assumptions of critical fetch: selected coastal sections had simple shapes (SDF < 5), little shoreline development (< 20% developed land use), and were minimally influenced by rivers (RTR < 1).
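The mixture-model comparison described above was run in R with 'mclust'; the same idea, fitting one- and two-component Gaussian mixtures to arcsine-square-root-transformed wetland fractions and comparing BIC, can be sketched in plain Python. This is a crude EM fit on synthetic data, not mclust's algorithm, and the two synthetic modes (0.15 and 0.75) only loosely echo the modes reported in the Results.

```python
import math, random

def asin_sqrt(p):
    """Arcsine-square-root transform for proportion data."""
    return math.asin(math.sqrt(p))

def npdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_bic(data, k, iters=100):
    """Fit a 1-D, k-component Gaussian mixture by EM; return BIC (lower is better)."""
    n = len(data)
    srt = sorted(data)
    mus = [srt[int((i + 0.5) * n / k)] for i in range(k)]      # quantile init
    mean = sum(data) / n
    var = [max(sum((x - mean) ** 2 for x in data) / n, 1e-6)] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        resp = []
        for x in data:                        # E-step: component responsibilities
            d = [w[j] * npdf(x, mus[j], var[j]) for j in range(k)]
            t = sum(d) or 1e-300
            resp.append([v / t for v in d])
        for j in range(k):                    # M-step: update weights, means, variances
            nj = sum(r[j] for r in resp) or 1e-300
            w[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mus[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    ll = sum(math.log(sum(w[j] * npdf(x, mus[j], var[j]) for j in range(k)) or 1e-300)
             for x in data)
    return -2 * ll + (3 * k - 1) * math.log(n)   # 3k - 1 free parameters

# Synthetic bimodal wetland-extent data mimicking two modes at fine scales.
random.seed(0)
data = [asin_sqrt(min(max(random.gauss(m, 0.05), 0.01), 0.99))
        for m in [0.15] * 100 + [0.75] * 100]
bimodal = gmm_bic(data, 2) < gmm_bic(data, 1)   # two components fit better
```

With clearly separated modes, the two-component model wins by a wide BIC margin despite its extra parameters, which is the signature the analysis uses to flag alternative states.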
We estimated fetch as the diameter of a circle with the same area and compared this value with theoretical predictions of critical fetch (Mariotti and Fagherazzi 2013).

In our statistical analysis, we used coastal and watershed variables to predict wetland extent using forward stepwise multiple regression, and we interpreted the explanatory power of these variables as a measure of the control of wetland extent by external drivers at each scale of analysis (HUC 4, 6, 8, 10, and 12). These linear regressions were conducted first with main effects only and then with interactions with estuary area included in the analysis (R 3.4). Adjusted R-squared values are reported for the fit of the multiple linear regression at each scale, along with F-statistics and significance. Multi-collinearity was assessed by computing the variance inflation factor (VIF) for each input, which measures the degree to which the variance of an estimated regression coefficient is increased because of collinearity (package 'car' 2.1-5; R 3.4; Fox and Weisberg 2011). Beta coefficients were standardized post hoc into beta weights using standard deviations (package 'lm.beta'; R 3.4). By standardizing our variables, we could directly compare their degree of association with the response variable.

Results

We found bimodal distributions of wetland extent, with the clearest bimodality (Figure 3A–E) at the finest scale of our analysis (HUC 12). Low and high modes of wetland extent were most clearly separated at this scale (medians: 15.6 and 75.8% wetland extent, respectively; frequency minimum at ca. 45%) and most even in their relative weight. The size of estuaries at this scale where bimodality was clearly expressed (10^1–10^2 km² median area) corresponded to the range of theoretical predictions of critical fetch (0.5–10 km; Mariotti and Fagherazzi 2013).
Size-based subsets of HUC 12 estuaries also showed the strongest bimodality for sizes that most closely corresponded to the plausible range of critical fetch (see Figure S1, Appendix C in Supporting Information).

Figure 3 Local feedbacks create bimodal distributions of wetland extent at theoretically predicted scales. We evaluated wetland distributions in subsets of different sized estuaries by using each HUC scale (HUC 12–4; A–E). Agreement among multiple tests of bimodality was strongest at areas of 10–100 km², which corresponds to the theoretical range of critical fetch. We further evaluated the critical fetch model using a subset of estuaries that best match its assumptions: round basins with minimal coastal development and riverine influence (F). In these estuaries, wetland extent decreased as estimated fetch (√A) passed through the range of critical fetches predicted under various rates of sea level rise (2–10 mm/year) and suspended sediment concentrations (indicated by shading). Distributions above and below an intermediate threshold (10^0.27 km) were highly skewed toward low (G) or high (H) wetland extent.

Within a subset of HUC 12 estuaries that best correspond to theoretical assumptions of the critical fetch model (minimal river influence, simple coastline, minimal coastal land development), wetland extent was uniformly high when estimated fetch was below 0.5 km; low wetland extent dominated where estimated fetch was above 10 km (Figure 3F–H). Between these extremes, we observed a decrease in wetland extent as estimated fetch spanned theoretical thresholds associated with increasing suspended sediment concentrations (Mariotti and Fagherazzi 2013). Distributions above and below an intermediate critical width of 10^0.27 km peaked near and were strongly skewed toward 0 and 100% wetland extent, respectively (Figure 3G, H).
We note that the potential artifact associated with correlating wetland extent and one of its components (open water area; A_e) applies only weakly across estuaries that vary in size by an order of magnitude. Moreover, this potential artifact does not prescribe that the transition from marsh- to open water-dominated states occurs at the specific range of fetches observed here.

At increasingly broad scales (that is, > 10^2 km² median estuarine area), we found weakening evidence of bimodality of wetland extent (Figure 3). At intermediate spatial scales (HUC 6, 8, 10), statistical tests for bimodality lacked consensus (Figures 3B–D, S1), modes of wetland extent were less separated, and the relative abundance of the high wetland mode decreased. At the largest scale (HUC 4), a unimodal rather than bimodal distribution best fit the arcsine-transformed wetland extent data (mean: 35.4%; median: 32.6%).

From the finest to the coarsest spatial scales, we observed shifts in the identity and diversity of the most important predictor variables for wetland extent and in their aggregate explanatory power. At the finest scale, all predictors were retained in the model but had modest predictive power. For larger coastal aggregations (HUC 4 and HUC 6), depth, estuary area, RTR, and the estuary area × RTR interaction were the most important predictors. The overall explanatory power of external forcings (adjusted R²) was greater at coarser scales of analysis than at finer units of analysis (Figure 4).

Figure 4 Controls on coastal wetland extent vary with scale. At fine scales, strong bimodality and weak prediction by estuarine and watershed variables indicate the importance of local feedbacks. At broader scales, weaker bimodality and greater predictive power of watershed and estuarine variables indicate a transition to macroscale controls. Across these scales, the direction and relative magnitude of individual variables are generally consistent though not completely constant.
We interpret this shifting strength as a transition from a feedback-dominated to a boundary-controlled domain at spatial scales of approximately 10^2.5 km².

The predictive power of individual variables differed among scales, but the direction of direct effects and the presence and direction of interactions with estuarine size (within scales of analysis) were generally consistent across scales of analysis. Estuary area had a negative effect at the largest and smallest units of analysis, with larger estuaries having less wetland extent. Maximum depth was negatively correlated with wetland extent at all but one scale of analysis, with its strongest direct effect at the HUC 4 scale (Figures 4, 6). Mean tidal range had a positive relationship with wetland extent at four scales of analysis, with relatively strong relationships at the HUC 8 and HUC 10 scales. The effect of developed land use along the coast was weak relative to other predictor variables (Figure 4) and only significant at the finest scales of analysis (Figure 5C). Latitude had a small, negative effect at all scales except the largest (HUC 4), with decreasing wetland extent at higher latitudes.

Figure 5 Relationships between local morphological, climatic, and anthropogenic drivers and wetland extent varied within the HUC 12 scale. (A) Greater shoreline complexity (as measured by the shoreline development factor) promoted higher wetland extent than did round or simple estuaries, suggesting that dendritic shapes shelter wetlands from wave energy. (B) Regional differences in temperature, growing season, and freeze/thaw cycle (latitude as proxy) affected plant production, with areas at higher latitude and a shorter growing season having less wetland extent. (C) Estuaries with greater shoreline development (measured by developed land use) had less wetland extent than estuaries with lower shoreline development, pointing to the influence of human settlements on wetlands.
Within scales of coastal aggregation, the effects of some watershed and coastal characteristics included an interaction with estuary size (Figure 4). Most prominently, RTR had a strong interaction with estuary area at the coarsest and finest scales. Within these scales, RTR and wetland extent co-varied negatively among smaller estuaries and positively among larger ones. The interaction of shoreline complexity with estuary area was also significant at some scales, although the sign of the relationship did not shift as was the case with RTR. Within each coastal scale, larger estuaries accounted for most of the coastal area (Figure S2). Our interpretations of these patterns reflect the fact that relationships within larger coastal segments influence a much larger spatial extent than relationships within smaller segments.

Discussion

Local and Macroscale Distribution of Coastal Wetlands

Three lines of evidence from our analysis of coastal marsh distributions along the US Atlantic and Gulf coasts support the hypothesis that feedbacks between edge erosion and marsh building shape local establishment and persistence of coastal wetland alternative states (Mariotti and Fagherazzi 2013; Leonardi and Fagherazzi 2014). First, within a subset of estuaries that best correspond to assumptions of marsh basins in theoretical models (Figure 3F; Mariotti and Fagherazzi 2013), wetland extent was negligible when estimated fetch exceeded the highest plausible critical fetch (10 km, assuming SSC = 100 mg/L); conversely, wetland extent approached 100 percent when estimated fetch was below the lowest plausible critical value (0.4 km, assuming SSC = 20 mg/L). When estimated fetch was within the range of plausible critical fetch values (that is, between 0.4 and 10 km), we observed a decline in wetland extent with increasing estimated fetch, rather than a single clear threshold separating filled and open basins.
This deviation of field patterns from theoretical predictions likely reflects heterogeneity within and among even these simple estuaries.

Second, we also observed the signature of local feedbacks in our more complete sampling of varied coastlines. Across the entire US Atlantic and Gulf coasts, bimodal distributions of wetland extent were clearest at scales at which median estuarine area closely matched the range of length scales of critical fetch in models (1–10 km; Mariotti and Fagherazzi 2013).

Finally, significant predictors of wetland extent at the finest scales of our analysis included proxies for major components of fetch-edge erosion feedbacks (wave energy, sediment delivery, and biotic productivity). These significant proxies indicate the central importance of edge erosion as a control on local marsh establishment and persistence at fine scales. Larger estuaries have potential for larger fetch and had lower wetland extent than smaller estuaries (Figure 4). Estuaries with complex shapes had higher wetland extent than simpler estuaries, indicating that narrow estuaries promote marsh expansion by limiting maximum effective fetch (Figures 4, 5A). This effect was greater in larger estuaries, which is consistent with the hypothesis that local fetch-erosion feedbacks influence marsh distributions across a specific range of scales (Mariotti and Fagherazzi 2013). Similarly, deep estuaries had lower wetland extent (Figure 6) than shallow estuaries, which reflects depth's influence on suitable habitat for vegetation and on wave energy (Le Hir and others 2000; Mariotti and others 2010).

Figure 6 At coarser scales, relationships between watershed and estuarine drivers and wetland extent varied within the largest scale (HUC 4). Deeper estuaries (as measured by maximum depth) had lower wetland extent, suggesting the effects of less colonizable space and higher wave energy.
Other significant predictors of wetland extent indicate that some variation in marsh extent is attributable to differences in rates of marsh building. We assume that latitude is primarily a proxy for growing season temperature/length, freeze/thaw processes, and vegetation productivity (Kirwan and others 2009) and that its significant relationship with marsh extent indicates the influence of plant production on sediment trapping and erosion. The interaction of RTR and estuary size, as proxies for sediment supply (Kirwan and others 2011; Tweel and Turner 2012; Bevington and others 2017), indicates the influence of sediment delivery on accretion (Figure 4). Tidal range controls vertical feedbacks on the marsh platform through local sediment delivery and the hydro-edaphic conditions controlling vegetation growth and sedimentation (Morris and others 2002; Kirwan and others 2010; Kirwan and Guntenspergen 2012; Yousefi Lalimi 2018). In our analysis, tidal range was a positive predictor of wetland extent at all but the coarsest spatial scales.

Overall, the relatively weak predictive power of watershed and coastal variables at fine scales suggests that fetch-erosion feedbacks mediate or obscure their effects on wetland extent (Figure 4). The diffuse relationship of many predictors to wetland extent is to be expected in circumstances and at scales where feedbacks among water, sediment, and biotic processes shape landform development and persistence.

Our results indicate that the effects of local feedbacks are weakened at broad scales and spatially averaged over large, complex estuaries. We observed weakening of the bimodal distribution of wetland extent at increasingly coarse grains of analysis, indicating decreasing influence of km-scale internal feedbacks. Commensurate with this decrease, we observed decreased predictive power of variables associated with local feedbacks (for example, shoreline complexity; Figure 4).
Weakening of the signatures of local marsh self-organization indicates a relative increase in the influence of macroscale coastline and watershed characteristics. Our results suggest that broad geographic scales of coastline represent a distinct domain over which sediment supply and estuarine geometry drive wetland distributions. At the coarsest scale, we found an increase in the individual and overall predictive power of river influence, estuary area, and depth (Figures 4, 6). The positive relationship of RTR with wetland extent in large estuaries (which dominate overall coastline area) is strong evidence for the role of SSC as a driver of macroscale coastal marsh distributions, as suggested by newly developed theoretical models (Yousefi Lalimi 2018). The association of high RTR with low wetland extent in smaller estuary units (within both the coarsest and finest scales) likely reflects limitation of marsh expansion by downstream flow in settings where narrow estuary arms act primarily as river channels. Unlike most other predictors, riverine effects are significant across all scales, indicating their influence on coastal marsh distributions through both sediment supply and hydrodynamics (Yousefi Lalimi 2018).

The predictive power of estuary area and depth at the coarsest scale could arise from influence on wind and wave energy, but likely reflects estuary shape in relation to accommodation space for the mass of sediment inputs that have accumulated since the last ice age (Le Hir and others 2000; Möller 2006). We interpret this finding to mean that wetland extent across broad regions is driven by the ability of watershed sediment supply to fill coastal basins since the most recent stabilization of sea level.

Scale-specific differences in relationships between wetland extent and predictors may also reflect specific features of the US Atlantic and Gulf coasts.
Our broadest scales of analysis coincide with large drowned-valley estuaries (for example, Chesapeake and Delaware Bays) whose depth and size exert strong influence on marsh distributions. At finer scales, these features are disaggregated into sub-basins where effects of their large open water extents (as a proxy of fetch and tidal flat width) may be less apparent. Finer subdivisions of other coastal regions (for example, Gulf coast) may better capture the actual scale of influence of their river inflows, shape, and land use. Analysis of other coasts would indicate whether the scales at which different aspects of coastal morphology exert the strongest influence on wetland extent are general or idiosyncratic. We found relatively weak effects of local shoreline development on wetland extent. At finer scales, wetland extent decreased modestly with increased shoreline development, indicating some direct loss or alteration of coastal marshes through human development of the shoreline (Kennish 2001 ; Coverdale and others 2013 ). Given the diverse and widespread stressors placed on wetlands by humans (Gedan and others 2009 ), this finding should be interpreted narrowly and with caution. Upstream land use and river impoundment can alter sediment supply and thus influence wetland persistence (Syvitski and others 2005 ; Kirwan and others 2011 ; Tweel and Turner 2012 ), but upland factors were not considered in this analysis. Over past centuries, human activity along coasts and in uplands may have both created and eliminated coastal wetlands. Either effect would be difficult to detect with our aggregated, synoptic geospatial approach. Nevertheless, modest effects of shoreline development, limited to the two finest scales of our analysis, indicate that effects of wetland alteration and loss attributable to development are local and small, and may be less important at macroscales than geophysical characteristics that have influenced marsh development over longer timescales. 
Shoreline development might also act as a barrier to marsh transgression into upland areas (not assessed in this paper) and could be a better predictor of transgression than expansion or propagation into estuaries. Self-Organization and Scale This work addresses a broad challenge in the study of complex systems: How do multiple states manifest within the complex, multi-scale heterogeneity of real landscapes? Our analysis of coastal marsh distributions confirms our hypothesis that local feedbacks and macroscale heterogeneity interact to produce distinct domains of scale: a finer-scale domain controlled by internal biogeomorphic processes; and a macroscale domain controlled by structural features of coasts (that is, boundary conditions). The nested, hierarchical nature of both geomorphology and ecosystems leads to components that interact and form heterogeneous patterns at the small scale, while also scaling up to and functioning within a higher-level system as broad as regions or continents (de Boer 1992 ; Wu and Loucks 1995 ; Stallins 2006 ; Heffernan and others 2014 ). Local self-organization and control by macroscale drivers are not separate hypotheses of coastal wetland formation (or any biogeophysical process). Rather, physical dimensions of these processes and interactions determine how ecosystems respond to changing environments. Beyond simply detecting distinct domains of scale, our analysis delineates dimensions of these domains. Studies empirically assessing the outcome of local self-organization over whole continents (Hirota and others 2011 ; Staver and others 2011 ; Scheffer and others 2012 ) have not explicitly addressed scales of feedbacks and heterogeneity. Furthermore, analyses examining domains of scale over which local feedbacks exert influence on landscape pattern have occurred at much finer scales (Dekker and others 2007 ; Sheffer and others 2013 ). 
Like climatic drivers and their feedbacks with vegetation (Hirota and others 2011 ), geomorphic constraints interact with local feedbacks to shape spatial patterns of marshes along coasts (de Boer 1992 ; Wu and Loucks 1995 ). We have shown that macroscale drivers set the template of boundary conditions within which extensive and fringing wetland states form and persist (Sheffer and others 2013 ; Dong and others 2016 ). By studying drivers at multiple scales and across regions, we have defined relatively clear domains of scale of local feedbacks and macroscale control. We have shown that this scale break corresponds to theoretical predictions based on simple feedbacks subject to multiple broader-scale controls (Mariotti and Fagherazzi 2013 ). More explicit evaluation of the spatial scales of feedbacks in other systems may similarly illuminate domains of local self-organization and macroscale external control. Macroscale Marsh Change Our integrated conception of marsh dynamics provides an important basis for forecasting changes in marsh extent by delineating the domains over which wetlands sites might respond continuously or catastrophically to alterations and changes in external forcing (Moffett and others 2015 ). Our results confirm that both vertical and horizontal dynamics of marshes may be important for coastal responses to sea level rise. Large tidal range can facilitate development of the marsh platform in the vertical direction through sedimentation and maintenance of equilibrium depth, yet past the threshold of vegetation survival, more water can lead to marsh drowning and death (Morris and others 2002 ; Kirwan and others 2010 ; Kirwan and Guntenspergen 2012 ; Yousefi Lalimi 2018 ). The effect of increasing flooding on both accretion and vegetation survival may promote marsh survival up to a threshold depth of sea level when sediment accretion is rapid. 
Past this threshold depth, erosion and vegetation drowning take over, influencing marsh platform persistence in the lateral direction (Silvestri and others 2005 ; Ganju and others 2017 ; Yousefi Lalimi 2018 ). At the scale of estuaries (up to ~ 100 km²), marsh loss may be rapid as changing conditions push systems across local thresholds. Larger estuarine systems may be more likely to respond relatively linearly or gradually to changes in macroscale drivers, with small, incremental losses arising from aggregation of local wetland disappearance. Our results provide a potential basis for broad-scale assessment of marsh resilience, akin to recent efforts in tropical and boreal forests (Staver and others 2011 ; Scheffer and others 2012 ). Some marshes may exist in a state of meta-stability, currently persisting under conditions that are not conducive to marsh formation and may be vulnerable to loss (Kirwan and Murray 2008 ). Such vulnerable marshes would appear as positive outliers in our analysis, having marsh extent greater than predicted by their external conditions. Conversely, marshes with favorable conditions, but low wetland extent, might be favorable sites for restoration. Conclusions about the future persistence of coastal wetlands under sea level rise are varied (Kirwan and others 2016 ; Watson and others 2017 ; Valiela and others 2018 ), mostly due to heterogeneity of coastal systems along the US coastline and differences in approach (for example, focus on lateral or vertical dynamics). Our analysis provides a framework that incorporates the natural heterogeneity and hierarchical nature of estuary systems and could be extended to identify vulnerable coastal wetlands whose existing marsh coverage exceeds what would be expected based on their watershed and coastline characteristics. Additional studies would be needed to link such indices to field studies of marsh stability under sea level rise.
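The outlier idea in this paragraph can be sketched as a residual test: estuaries whose observed marsh extent sits far above the extent predicted from external drivers are candidates for meta-stability, while strongly negative residuals suggest restoration opportunities. The toy data, estuary names, and threshold below are entirely hypothetical.

```python
# Observed wetland extent (fraction of estuary area) and a hypothetical model
# prediction from external drivers; all values and names are invented.
observed = {"estuary_a": 0.70, "estuary_b": 0.20, "estuary_c": 0.55}
predicted = {"estuary_a": 0.35, "estuary_b": 0.25, "estuary_c": 0.50}

def flag_candidates(obs, pred, threshold=0.2):
    """Split estuaries by residual (observed - predicted): large positive
    residuals flag potentially meta-stable (vulnerable) marshes; large
    negative residuals flag potential restoration sites."""
    vulnerable, restorable = [], []
    for name in obs:
        residual = obs[name] - pred[name]
        if residual > threshold:
            vulnerable.append(name)
        elif residual < -threshold:
            restorable.append(name)
    return vulnerable, restorable

vulnerable, restorable = flag_candidates(observed, predicted)
print(vulnerable, restorable)  # ['estuary_a'] []
```

In practice the prediction would come from the multi-scale regression described in the paper; the point here is only the sign convention of the residual.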
Resilience of coastal marshes is particularly important because sediment delivery has declined with reforestation of the landscape and construction of dams (Walling 2006 ), which now prevent about 20% of the global sediment load from reaching the coast (Syvitski and others 2005 ). Resulting mismatches between conditions of formation and current sediment loads are likely to influence how marshes respond to future sea level rise. Forecasts of future wetland loss and their response to global and regional environmental change should incorporate both local feedback processes and broader estuarine and watershed drivers.
Sea-level rise, sediment starvation and other environmental woes pose increasing threats to coastal wetlands worldwide. But a massive new Duke University study could help stem these losses by giving scientists a broader understanding of which wetlands are most at risk, and why. The study, which assessed wetland distribution and resilience in hundreds of U.S. estuaries, found that it's all a matter of scale. "At the local level, the persistence or loss of coastal marshes was invariably linked to feedbacks between two factors: erosion, which eats away at marsh edges, and vegetation, which stabilizes them," said Anna E. Braswell, a Ph.D. graduate of Duke's Nicholas School of the Environment, who conducted the research as part of her 2017 dissertation. "But at broader spatial scales, other key drivers emerged, too." As the researchers took more of an estuary's surrounding geography into account, the depth, size, shape and latitude of the estuary became increasingly important predictors for determining the extent of wetlands it could support, Braswell said. The shape and orientation of the nearby coastline and the depth of near-shore waters mattered, too. And the amount of replenishing sediment being carried into the estuary by rivers or incoming tides became a key indicator of marshes' resilience to change. "These macro-scale coastal and watershed characteristics accentuated or limited the stabilizing impacts of the local feedbacks," she said. "But they weren't really evident until we took a few steps back and viewed the estuaries from broader spatial perspectives." "What this tells us is that salt marshes everywhere probably have tipping points," said co-author James B. Heffernan, assistant professor of ecosystem ecology and ecohydrology at the Nicholas School. Researcher Anna Braswell samples soil from an estuary during her dissertation research on 'tipping points' for wetland preservation. 
Credit: Megan Fork "Knowing what causes these tipping points to vary from location to location is an important step in identifying where we should expect marshes to be especially vulnerable to future change," he said. "It also provides a framework for understanding where wetland restoration is likely or not likely to succeed." Coastal salt marshes provide a long list of ecosystem services that benefit humans, including shoreline protection, pollution filtration, flood prevention, fishery habitat and carbon sequestration. Braswell and Heffernan published their peer-reviewed findings Jan. 31 in the journal Ecosystems. Using existing geospatial data, they analyzed hundreds of estuaries on the U.S. Atlantic and Gulf coasts from Maine to Mexico to determine the fraction of each estuary that was occupied by wetlands and identify the factors that controlled the extent of the wetlands' spread and their resilience to change. Each site was analyzed at five different geographic scales—from macro-analyses that covered the entire estuary and its adjacent coastline and watersheds, to fine-scale analyses that zeroed in on what was happening in small, individual tributaries. "Integrating data from five different scales allowed us to see patterns and linkages that we would have missed otherwise," said Braswell, who now works as a research scientist in the Earth Lab at the University of Colorado at Boulder. "This information will be vital for future preservation and restoration efforts." Braswell and Heffernan used data from the National Wetlands Inventory and other publicly accessible sources to do their analyses. The five spatial scales they used—sub-region, basin, sub-basin, watershed and sub-watershed—correspond to standard U.S. Geological Survey hydrological units.
10.1007/s10021-018-0332-3
Nano
Researchers clarify the microscopic origin of dissipation with graphene
Keşkekler, A., Shoshani, O., Lee, M. et al. Tuning nonlinear damping in graphene nanoresonators by parametric–direct internal resonance. Nat Commun 12, 1099 (2021). doi.org/10.1038/s41467-021-21334-w www.nature.com/articles/s41467-021-21334-w Journal information: Nature Communications
https://doi.org/10.1038/s41467-021-21334-w
https://phys.org/news/2021-02-microscopic-dissipation-graphene.html
Abstract Mechanical sources of nonlinear damping play a central role in modern physics, from solid-state physics to thermodynamics. The microscopic theory of mechanical dissipation suggests that nonlinear damping of a resonant mode can be strongly enhanced when it is coupled to a vibration mode that is close to twice its resonance frequency. To date, no experimental evidence of this enhancement has been realized. In this letter, we experimentally show that nanoresonators driven into parametric-direct internal resonance provide supporting evidence for the microscopic theory of nonlinear dissipation. By regulating the drive level, we tune the parametric resonance of a graphene nanodrum over a range of 40–70 MHz to reach successive two-to-one internal resonances, leading to a nearly two-fold increase of the nonlinear damping. Our study opens up a route towards utilizing modal interactions and parametric resonance to realize resonators with engineered nonlinear dissipation over wide frequency range. Introduction In nature, from macro- to nanoscale, dynamical systems evolve towards thermal equilibrium while exchanging energy with their surroundings. Dissipative mechanisms that mediate this equilibration convert energy from the dynamical system of interest to heat in an environmental bath. This process can be intricate, nonlinear, and in most cases hidden behind the veil of linear viscous damping, which is merely an approximation valid for small amplitude oscillations. In the last decade, nonlinear dissipation has attracted much attention with applications that span nanomechanics 1 , materials science 2 , biomechanics 3 , thermodynamics 4 , spintronics, 5 and quantum information 6 . It has been shown that the nonlinear dissipation process follows the empirical force model \({F}_{d}=-{\tau }_{{\rm{nl1}}}{x}^{2}\dot{x}\) where τ nl1 is the nonlinear damping coefficient, x is the displacement, and \(\dot{x}\) velocity. 
To date, the physical mechanism from which this empirical damping force originates has remained ambiguous, with a diverse range of phenomena being held responsible including viscoelasticity 7 , phonon–phonon interactions 8 , 9 , Akhiezer relaxation 10 , and mode coupling 11 . The fact that nonlinear damping can stem from multiple origins simultaneously makes isolating one route from the others a daunting task, especially since the nonlinear damping coefficient τ nl1 is perceived to be a fixed parameter that unlike stiffness 12 , 13 , 14 , quality factor 15 , and nonlinear stiffness 16 , 17 , 18 , cannot be tuned easily. Amongst the different mechanisms that affect nonlinear damping, intermodal coupling is particularly interesting, as it can be enhanced near internal resonance (IR), a special condition at which the ratio of the resonance frequencies of the coupled modes is a rational number 19 . This phenomenon has frequently been observed in nano/micromechanical resonators 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 . At IR, modes can interact strongly even if their nonlinear coupling is relatively weak. Interestingly, IR is closely related to the effective stiffness of resonance modes, and can therefore be manipulated by careful engineering of the geometry of mechanical systems, their spring hardening nonlinearity 30 , 31 , and electrostatic spring softening 29 . IR also finds its roots in the microscopic theory of dissipation proposed back in 1975, where it was hypothesized to lead to a significantly shorter relaxation time if there exists a resonance mode in the vicinity of twice the resonance frequency of the driven mode in the density of states 32 . Here, we demonstrate that nonlinear damping of graphene nanodrums can be strongly enhanced by parametric–direct IR, providing supporting evidence for the microscopic theory of nonlinear dissipation 10 , 32 .
To achieve this, we bring the fundamental mode of the nanodrum into parametric resonance at twice its resonance frequency, allowing it to be tuned over a wide frequency range from 40 to 70 MHz. We extract the nonlinear damping as a function of the parametric drive level, and observe that it increases by as much as 80% when the frequency shift of the parametric resonance brings it into IR with a higher mode. By comparing the characteristic dependence of the nonlinear damping coefficient on parametric drive to a theoretical model, we confirm that IR can be held accountable for the significant increase in nonlinear damping. Results Measurements Experiments are performed on a 10 nm thick multilayer graphene nanodrum with a diameter of 5 µm, which is transferred over a cavity etched in a layer of SiO 2 with a depth of 285 nm. A blue laser is used to thermomechanically actuate the membrane, while a red laser is used to detect the motion via interferometry (see “Methods” for details). A schematic of the setup is shown in Fig. 1 a. Fig. 1: Nonlinear dynamic response of a graphene nanodrum near 2:1 internal resonance. a Fabry–Pérot interferometry with thermomechanical actuation and microscope image of the graphene. Experiments are performed in vacuum at 10 −3 mbar. Red laser is used to detect the motion of the graphene drum and the blue laser is used to optothermally actuate it. BE beam expander, QWP quarter wave plate, PBS polarized beam splitter, PD photodiode, DM dichroic mirror, VNA vector network analyzer, \({{\rm{V}}}_{a{c}_{in}}\) analyzer port, \({{\rm{V}}}_{{{\rm{ac}}}_{{\rm{out}}}}\) excitation port. In the device schematic, Si and SiO 2 layers are represented by orange and blue colors, respectively. b Direct frequency response curve of the device (motion amplitude vs. drive frequency), showing multiple resonances (drive level = −12.6 dBm). The mode shapes are simulated by COMSOL.
Resonance peaks are associated with \({f}_{m,n}^{(k)}\) where m represents the number of nodal diameters, n nodal circles, and k = 1, 2 stand for the first and second asymmetric degenerate modes. Dashed line shows 2 f 0,1 , which is the drive frequency where the parametric resonance of mode f 0,1 is activated. c Parametric resonance curves (calibrated motion amplitude vs. drive frequency), driven at twice the detection frequency. As the parametric resonance curves approach the 2:1 internal resonance (IR), f SNB first locks to 2:1 IR frequency and consecutively saddle-node bifurcation surges to a higher frequency and amplitude. A SNB and f SNB stand for the amplitude and frequency of saddle-node bifurcation. d Variation of the nonlinear damping τ nl1 as a function of drive F 1 . Dashed lines represent different regimes of nonlinear damping. White region represents a constant nonlinear damping, purple region an increase in nonlinear damping in the vicinity of 2:1 IR and orange region an increase in nonlinear damping due to IR with a higher mode. By sweeping the drive frequency, we obtain the frequency response of the nanodrum in which multiple directly driven resonance modes can be identified (Fig. 1 b). We find the fundamental axisymmetric mode of vibration at f 0,1 = 20.1 MHz and several other modes, of which the two modes, at \({f}_{2,1}^{(1)}\) = 47.4 MHz and \({f}_{2,1}^{(2)}\) = 50.0 MHz, are of particular interest. This is because, to study the effect of IR on nonlinear damping, we aim to achieve a 2:1 IR by parametrically driving the fundamental mode, such that it coincides with one of the higher frequency modes. The frequency ratios \({f}_{2,1}^{(1)}/{f}_{0,1}\approx 2.3\) and \({f}_{2,1}^{(2)}/{f}_{0,1}\approx 2.4\) are close to the factor 2; however, additional frequency tuning is needed to reach the 2:1 IR condition.
The parametric resonance can be clearly observed by modulating the tension of the nanodrum at frequency ω F with the blue laser while using a frequency converter in the vector network analyzer (VNA) to measure the amplitude at ω F /2 as shown in Fig. 1 c. By increasing the parametric drive, we observe a Duffing-type geometric nonlinearity over a large frequency range, such that the parametrically driven fundamental resonance can be tuned across successive 2:1 IR conditions with modes \({f}_{2,1}^{(1)}\) and \({f}_{2,1}^{(2)}\) , respectively. In Fig. 1 c, we observe that the parametric resonance curves follow a common response until they reach the saddle-node bifurcation frequency f SNB above which the parametric resonance curve reaches its peak amplitude A SNB and drops down to low amplitude. We note that the value of A SNB can be used to determine the degree of nonlinear damping 33 . Therefore, to extract the nonlinear damping coefficient τ nl1 of mode f 0,1 from the curves in Fig. 1 c, we use the following single-mode model to describe the system dynamics $${\ddot{x}}_{1}+{\omega }_{1}^{2}{x}_{1}+\gamma {x}_{1}^{3}={F}_{1}{x}_{1}\cos ({\omega }_{F}t)-2{\tau }_{1}{\dot{x}}_{1}-2{\tau }_{{\rm{nl1}}}{x}_{1}^{2}{\dot{x}}_{1},$$ (1) in which ω 1 = 2 π f 0,1 is the eigenfrequency of the axisymmetric mode of the nanodrum, γ is its Duffing constant, and F 1 and ω F are the parametric drive amplitude and frequency, respectively. Moreover, 2 τ 1 = ω 1 / Q is the linear damping coefficient, with Q being the quality factor, and τ nl1 is the nonlinear damping term of van der Pol type that prevents the parametric resonance amplitude A SNB from increasing to infinity 33 , 34 at higher driving frequencies since ∣ A SNB ∣ 2 ∝ (2 F 1 Q − 4)/ τ nl1 . To identify the parameters governing the device dynamics from the measurements in Fig. 1 c, we use Eq. ( 1 ) and obtain good fits of the parametric resonance curves using τ nl1 and γ as fit parameters (see Supplementary Note I ).
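As a numerical illustration of Eq. (1), the sketch below integrates a dimensionless version of the single-mode model and shows the van der Pol-type term saturating the parametric resonance at a finite amplitude. All parameter values are arbitrary illustrative choices, not the fitted device parameters from the paper.

```python
import math

# Dimensionless single-mode model of Eq. (1):
#   x'' + w1^2 x + g x^3 = F1 x cos(wF t) - 2 t1 x' - 2 tnl x^2 x'
# Parameter values are invented for illustration (not the paper's fits).
w1 = 1.0        # eigenfrequency
t1 = 0.01       # linear damping (2*t1 = w1/Q, so Q = 50 here)
g = 0.01        # Duffing constant
tnl = 0.05      # van der Pol-type nonlinear damping
F1 = 0.06       # parametric drive, above the instability threshold 4*t1*w1 = 0.04
wF = 2.0 * w1   # driven at twice the eigenfrequency

def accel(t, x, v):
    return (-w1 * w1 * x - g * x ** 3 + F1 * x * math.cos(wF * t)
            - 2.0 * t1 * v - 2.0 * tnl * x * x * v)

def saturated_amplitude(t_end=3000.0, dt=0.02, x=1e-3, v=0.0):
    """Fixed-step RK4; returns the peak |x| over the final third of the run,
    by which time the parametric oscillation has saturated."""
    steps = int(t_end / dt)
    peak, t = 0.0, 0.0
    for i in range(steps):
        k1x, k1v = v, accel(t, x, v)
        k2x, k2v = v + 0.5 * dt * k1v, accel(t + 0.5 * dt, x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, accel(t + 0.5 * dt, x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, accel(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        if i > 2 * steps // 3:
            peak = max(peak, abs(x))
    return peak

amplitude = saturated_amplitude()
# Without the tnl term the parametric amplitude would grow without bound;
# averaging predicts saturation near a^2 = 4*(F1/(4*w1) - t1)/tnl when g is small.
print(amplitude)
```

This mirrors the role of τ nl1 in the text: the saturated amplitude scales inversely with the nonlinear damping, consistent with ∣A SNB∣² ∝ (2 F1 Q − 4)/τ nl1.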
As we gradually increase the drive level, f SNB increases until it reaches the vicinity of the IR, where we observe an increase in τ nl1 (Fig. 1 d). Whereas f SNB increases with parametric drive F 1 , Fig. 1 c shows that its rate of increase \(\frac{{\rm{d}}{f}_{{\rm{SNB}}}}{{\rm{d}}{F}_{1}}\) slows down close to \({f}_{2,1}^{(1)}\) , locking the saddle-node bifurcation frequency when f SNB ≈ 45 MHz. At the same time, τ nl1 increases significantly at the associated parametric drive levels, providing the possibility to tune nonlinear damping by up to twofold by controlling F 1 , as seen in Fig. 1 d. Figure 1 c also shows that above a certain critical parametric drive level F 1,crit , the frequency locking barrier at f SNB ≈ 45 MHz is broken and f SNB suddenly jumps to a higher frequency (≈5 MHz higher) and a correspondingly larger A SNB . We label this increase in the rate \(\frac{{\rm{d}}{f}_{{\rm{SNB}}}}{{\rm{d}}{F}_{1}}\) by “surge” in Fig. 1 c, where an abrupt increase in the amplitude–frequency response is observed to occur above a critical drive level F 1,crit . Interestingly, even above F 1,crit a further increase in τ nl1 is observed with increasing drive amplitude, indicating that a similar frequency locking occurs when the parametric resonance peak reaches the second IR at \({f}_{{\rm{SNB}}}\approx {f}_{2,1}^{(2)}\) . Similar nonlinear phenomena are also showcased in a second nanodrum undergoing parametric–direct modal interaction, confirming the reproducibility of the observed physics (see Supplementary Note II ). Theoretical model Although the single-mode model in Eq. ( 1 ) can capture the response of the parametric resonance, it can only do so by introducing a nonphysical drive-level-dependent nonlinear damping coefficient τ nl1 ( F 1 ) (Fig. 1 d). Therefore, to study the physical origin of our observation, we extend the model by introducing a second mode whose motion is described by generalized coordinate x 2 .
Moreover, to describe the coupling between the interacting modes at the 2:1 IR, we use the single term coupling potential \({U}_{{\rm{cp}}}=\alpha {x}_{1}^{2}{x}_{2}\) (see Supplementary Note III ). The coupled equations of motion in the presence of this potential become $$\begin{array}{ll}&{\ddot{x}}_{1}+{\omega }_{1}^{2}{x}_{1}+\gamma {x}_{1}^{3}+\frac{\partial {U}_{{\rm{cp}}}}{\partial {x}_{1}}={F}_{1}{x}_{1}\cos ({\omega }_{F}t)-2{\tau }_{1}{\dot{x}}_{1}-2{\tau }_{nl1}{x}_{1}^{2}{\dot{x}}_{1},\\ &{\ddot{x}}_{2}+{\omega }_{2}^{2}{x}_{2}+\frac{\partial {U}_{{\rm{cp}}}}{\partial {x}_{2}}={F}_{2}\cos ({\omega }_{F}t)-2{\tau }_{2}{\dot{x}}_{2}.\end{array}$$ (2) The two-mode model describes a parametrically driven mode with generalized coordinate x 1 coupled to x 2 that has eigenfrequency \({\omega }_{2}=2\pi {f}_{2,1}^{(1)}\) , damping ratio τ 2 , and is directly driven by a harmonic force with magnitude F 2 . To understand the dynamics of the system observed experimentally and described by the model in Eq. ( 2 ), it is convenient to switch to the rotating frame of reference by transforming x 1 and x 2 to complex amplitude form (see Supplementary Note IV ). This transformation reveals a system of equations that predicts the response of the resonator as the drive parameters ( F 1 , F 2 , and ω F ) are varied. 
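To make Eq. (2) concrete, the sketch below integrates a dimensionless two-mode version at an exact 2:1 IR (ω2 = 2ω1, drive at ωF = 2ω1) and checks that the undriven second mode picks up energy through the coupling potential Ucp = α x1² x2, acting as an extra dissipation channel for the parametrically driven mode. Every parameter value is an arbitrary illustrative choice, not a fitted device parameter.

```python
import math

# Dimensionless two-mode model of Eq. (2) at exact 2:1 internal resonance.
# U_cp = alpha * x1^2 * x2 contributes a force -2*alpha*x1*x2 to mode 1 and
# -alpha*x1^2 to mode 2. All values are invented for illustration.
w1, w2 = 1.0, 2.0
t1, t2 = 0.01, 0.01
g, alpha, tnl1 = 0.01, 0.05, 0.05
F1, F2, wF = 0.08, 0.0, 2.0 * w1   # mode 2 is not driven directly (F2 = 0)

def derivs(t, y):
    x1, v1, x2, v2 = y
    a1 = (-w1 * w1 * x1 - g * x1 ** 3 - 2.0 * alpha * x1 * x2
          + F1 * x1 * math.cos(wF * t) - 2.0 * t1 * v1 - 2.0 * tnl1 * x1 * x1 * v1)
    a2 = (-w2 * w2 * x2 - alpha * x1 * x1
          + F2 * math.cos(wF * t) - 2.0 * t2 * v2)
    return (v1, a1, v2, a2)

def rk4_step(t, y, dt):
    k1 = derivs(t, y)
    k2 = derivs(t + 0.5 * dt, [yi + 0.5 * dt * ki for yi, ki in zip(y, k1)])
    k3 = derivs(t + 0.5 * dt, [yi + 0.5 * dt * ki for yi, ki in zip(y, k2)])
    k4 = derivs(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt * (a + 2 * b + 2 * c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

y, t, dt = [1e-3, 0.0, 0.0, 0.0], 0.0, 0.02
steps = 75_000                       # integrate to t = 1500
peak1 = peak2 = 0.0
for i in range(steps):
    y = rk4_step(t, y, dt)
    t += dt
    if i > 3 * steps // 4:           # record after transients die out
        peak1 = max(peak1, abs(y[0]))
        peak2 = max(peak2, abs(y[2]))
print(peak1, peak2)  # mode 2 oscillates despite F2 = 0: energy leaks via the coupling
```

Because x1² oscillates at ωF = ω2, the coupling drives the second mode resonantly, which is precisely the pathway by which the 2:1 IR drains energy from the fundamental mode.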
Solving the coupled system at steady state yields the following algebraic equation for the amplitude a 1 of the first mode $$\begin{array}{ll}&{\left[{\tau }_{1}+({\tau }_{{\rm{nl1}}}+{\tilde{\alpha }}^{2}{\tau }_{2})\frac{{a}_{1}^{2}}{4}\right]}^{2}+{\left[{{\Delta }}{\omega }_{1}-\left(\frac{3\gamma }{{\omega }_{F}}+{\tilde{\alpha }}^{2}{{\Delta }}{\omega }_{2}\right)\frac{{a}_{1}^{2}}{4}\right]}^{2}\\ &=\frac{1}{4{\omega }_{F}^{2}}\left[{F}_{1}^{2}+{\tilde{\alpha }}^{2}({F}_{2}^{2}+2{\omega }_{F}{{\Delta }}{\omega }_{2}{F}_{1}{F}_{2}/\alpha )\right],\end{array}$$ (3) where Δ ω 1 = ω F /2 − ω 1 and Δ ω 2 = ω F − ω 2 are the frequency detuning from the primary and the secondary eigenfrequencies, and \({\tilde{\alpha }}^{2}={\alpha }^{2}/[{\omega }_{F}^{2}({\tau }_{2}^{2}+{{\Delta }}{\omega }_{2}^{2})]\) is the rescaled coupling strength. Essentially, the first squared term in Eq. ( 3 ) captures the effect of damping on the parametric resonance amplitude a 1 , the second term captures the effect of nonlinear coupling on the stiffness and driving frequency, and the term on the right side is the effective parametric drive. From the rescaled coupling strength \(\tilde{\alpha }\) and Eq. ( 3 ) it can be seen that the coupling \({\tilde{\alpha }}^{2}\) shows a large peak close to the 2:1 IR where ∣ Δ ω 2 ∣ ≈ 0. In addition, Eq. ( 3 ) shows that mode 2 will always dissipate energy from mode 1 once coupled, and that the two-mode model accounts for an increase in the effective nonlinear damping parameter ( \({\tau }_{nl{\rm{eff}}}={\tau }_{{\rm{nl1}}}+{\tilde{\alpha }}^{2}{\tau }_{2}\) ) near IR, in accordance with the observed peak in τ nl1 with the single-mode model in Fig. 1 d. It is also interesting to note that this observation in steady state is different from what has been reported in Shoshani et al. 24 for transient nonlinear free vibrations of coupled modes where it was important that τ 2 ≫ τ 1 to observe nonlinear damping. The two-mode model of Eq. 
( 3 ) allows us to obtain good fits of the parametric resonance curves in Fig. 1 c, with a constant τ nleff ≈ 3.4 × 10²¹ (Hz/m²) determined far from IR and a single coupling strength α = 2.2 × 10²² (Hz²/m) which intrinsically accounts for the variation of τ nleff near IR. These fits can be found in Supplementary Note V , and demonstrate that the two-mode model is in agreement with the experiments for constant parameter values, without requiring drive-level-dependent fit parameters. We note that the extracted nonlinear damping parameter fits the Duffing response at f 0,1 with good accuracy too (see Supplementary Note VI ). To understand the physics associated with the frequency locking and amplitude–frequency surge, we use the experimentally extracted fit parameters from the two-mode model and numerically generate parametric resonance curves using Eq. ( 3 ) for a large range of drive amplitudes (see Fig. 2 a). We see that for small drive levels, an upward frequency sweep will follow the parametric resonance curve and then will lock and jump down at the first saddle-node bifurcation (SNB1) frequency, which lies close to \({f}_{{\rm{SNB}}}\approx {f}_{2,1}^{(1)}\) . At higher parametric drive levels, the parametric resonance has a stable path to traverse the IR toward a group of stable states at higher frequencies. Fig. 2: Parametric–direct internal resonance. a Color map of the analytical model response curves obtained by using the fitted parameters from experiments. Colors correspond to frequency response (motion amplitude vs. drive frequency) solutions with a certain parametric drive level. Black lines show samples from these solutions where solid lines are stable and dashed lines are unstable solutions. White dashed line is where parametric resonance meets with interacting mode and undergoes internal resonance.
b The underlying route of the amplitude–frequency surge is revealed by tracing the evolution of saddle-node bifurcations (green and red squares represent theoretical SNB1 and SNB2, whereas experimental SNB1 is represented by crosses) of the parametric resonance curves. A more extensive investigation of this phenomenon can be carried out by performing bifurcation analysis of the steady-state solutions (see Supplementary Note IV ). The bifurcation analysis reveals two saddle-node bifurcations near the singular region of the IR, one at the end of the first path (SNB1) and another at the beginning of the second path (SNB2) (Fig. 2 b). As the drive amplitude increases, the bifurcation pair starts to move toward each other until they annihilate one another to form a stable solution at the connecting point, which we labeled as “surge.” It is also possible to observe that the rate at which saddle-node pairs approach each other dramatically drops near the IR condition, demonstrating the “locking” which we also observed in the experiments. To check how closely the two-mode model captures the variation of τ nl1 close to the IR condition, we follow a reverse path, and fit the numerically generated resonance curves of Fig. 2 a using the single-mode model of Eq. ( 1 ) with τ nl1 as the fit parameter. In this way, we track the variation of τ nl1 in the single-mode model with the parametric drive F 1 , similar to what we observed experimentally and reported in Fig. 1 d. The result of this fit is shown in Fig. 3 a, where a similar anomalous change of nonlinear damping is obtained for the two-mode model. Fig. 3: Measurements and fits of the effective nonlinear damping. a Variation of the effective nonlinear damping parameter ( τ nleff ) with respect to parametric drive. The τ nleff is obtained by fitting the numerically generated curves of Fig. 2 a with τ nl1 as the fit parameter. Dashed lines represent different regimes of nonlinear damping.
White regions represent a constant nonlinear damping and purple region represents an increase in nonlinear damping in the vicinity of 2:1 IR. b Comparison of the ratio between linear damping ( τ 1 ) and total damping ( τ tot ). In the figure, blue and red dashed lines represent τ 1 / τ tot obtained from uncoupled and coupled models, whereas black crosses represent experiments. The variation of nonlinear damping affects the total damping (sum of linear and nonlinear dissipation) of the resonator too. It is of interest to study how large this effect is. In Fig. 3 b, we report the variation in the ratio of the linear damping τ 1 and the amplitude-dependent total damping τ tot = ( ω 1 / Q + 0.25 τ nleff ∣ x 1 ∣ ² ) 33 in the spectral neighborhood of \({f}_{2,1}^{(1)}\) , and observe a sudden decrease in the vicinity of IR. This abrupt change in the total damping is well captured by the two-mode model. With increasing drive amplitude, however, the τ 1 / τ tot values of this model deviate from those of the experiments due to a subsequent IR at \({f}_{2,1}^{(2)}/{f}_{0,1}\approx 2.4\) that is not included in our theoretical analysis. The dependence of τ 1 / τ tot on frequency shows that near IR, the total damping of the fundamental mode increases by nearly 80%. We note that increased nonlinear damping near IR was also observed in Güttinger et al. 11 . In that work, nonlinear damping was studied using ringdown measurements, with two modes brought close to an IR by electrostatic gating. The increased nonlinear damping was attributed to a direct–direct 3:1 IR, which, as shown theoretically in Shoshani et al. 24 , leads to a high-order (quintic) nonlinear damping term. Conversely, in our work, two modes are brought into parametric–direct 2:1 IR by adjusting the parametric drive level. This results in a nonlinear damping term that already comes into play at smaller amplitudes because it is of lower (cubic) order, as discussed in Shoshani et al. 24 .
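The two damping expressions used in this passage can be evaluated directly: the rescaled coupling makes the effective nonlinear damping τ nleff = τ nl1 + α̃² τ 2 peak sharply at Δω2 = 0, which in turn drags down the linear fraction τ 1 / τ tot of the total damping. The numbers below are dimensionless illustrative choices, not the paper's fitted values.

```python
# Dimensionless illustrative parameters (not the device fits).
tau_nl1 = 1.0   # bare nonlinear damping of the driven mode
tau_2 = 0.01    # linear damping of the interacting mode
alpha = 0.1     # coupling strength
w2 = 1.0        # interacting-mode eigenfrequency

def tau_nleff(wF):
    """Effective nonlinear damping tau_nl1 + alpha_tilde^2 * tau_2, with
    alpha_tilde^2 = alpha^2 / (wF^2 * (tau_2^2 + dw2^2)) as defined for Eq. (3)."""
    dw2 = wF - w2
    alpha_tilde_sq = alpha ** 2 / (wF ** 2 * (tau_2 ** 2 + dw2 ** 2))
    return tau_nl1 + alpha_tilde_sq * tau_2

def linear_fraction(a, wF, w1=1.0, Q=200.0):
    """tau_1 / tau_tot with tau_tot = w1/Q + 0.25 * tau_nleff * a^2, as in the text."""
    tau_lin = w1 / Q
    return tau_lin / (tau_lin + 0.25 * tau_nleff(wF) * a * a)

print(tau_nleff(w2), tau_nleff(w2 + 0.5))            # sharp peak exactly at the IR
print(linear_fraction(0.1, w2), linear_fraction(0.1, w2 + 0.5))
```

At Δω2 = 0 the Lorentzian factor 1/(τ2² + Δω2²) is maximal, so the coupled mode contributes most strongly, and the linear share of the total dissipation drops, matching the dip in τ 1 / τ tot reported near IR.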
Moreover, the nonlinear damping mechanism in Güttinger et al. 11 is approximately described by two exponential decays with crossovers from ( τ 1 + τ 2 )/2 to τ 1 , which implies that similar to Shoshani et al. 24 , τ 2 > τ 1 is required to observe positive nonlinear damping. This is in contrast with the damping mechanism we describe, where the effective nonlinear damping actually increases for smaller τ 2 (see Eq. ( 3 )). Discussion Since the tension of the nanodrum can be manipulated by laser heating, we can further investigate the tunability of the nonlinear damping by increasing the laser power and detecting the range over which 2:1 IR conditions may occur. When increasing the blue laser power and modulation, we observe the parametrically actuated signal also in the direct detection mode (like in Fig. 1 b) due to optical readout nonlinearities 35 . As a result a superposition of Fig. 1 b, c is obtained, as shown in Fig. 4 . We note that the enhanced laser power increases membrane tension which moves f 0,1 upward by a few MHz, but also allows us to reach even higher parametric modulation. In this configuration, we achieve a frequency shift in f SNB from 40 to 70 MHz, corresponding to as much as 75% tuning of the mechanical motion frequency. This large tuning can increase the number of successive IRs that can be reached even further, to reach modal interactions between the parametric mode f 0,1 and direct modes \({f}_{2,1}^{(2)}\) and f 0,2 (see Fig. 4 ). As a result, multiple amplitude–frequency surges can be detected in the large frequency range of 30 MHz over which nonlinear damping coefficient can be tuned. Fig. 4: Nonlinear frequency response measurements at high drive powers. The parametric resonance interacts successively with multiple directly driven modes of vibration. The arrows in the figure show successive amplitude–frequency surges. Starting from the dashed line, shaded area represents the region where nonlinear damping is tunable. 
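The qualitative mechanism behind the mode-coupling-induced dissipation discussed above — a quadratic coupling that siphons energy from the fundamental mode into a lossier second mode when its frequency is near twice the fundamental — can be illustrated with a toy ringdown simulation. The equations of motion, coupling strength and damping rates below are illustrative assumptions, not the two-mode model of the Supplementary Notes.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_mode_ringdown(omega2, beta=0.5, c1=0.002, c2=0.05, t_end=300.0):
    """Ring down mode 1 (omega1 = 1) quadratically coupled to mode 2.

    Toy model (assumed):  x1'' + c1 x1' + x1 + beta x1 x2 = 0
                          x2'' + c2 x2' + omega2^2 x2 + 0.5 beta x1^2 = 0
    The x1^2 term drives mode 2 at 2*omega1, so energy transfer is
    resonant when omega2 ~ 2, mimicking the 2:1 IR.
    """
    def rhs(t, y):
        x1, v1, x2, v2 = y
        return [v1, -c1 * v1 - x1 - beta * x1 * x2,
                v2, -c2 * v2 - omega2 ** 2 * x2 - 0.5 * beta * x1 ** 2]

    sol = solve_ivp(rhs, (0.0, t_end), [0.5, 0.0, 0.0, 0.0],
                    max_step=0.05, rtol=1e-7)
    tail = sol.t > 0.9 * t_end            # late-time envelope of mode 1
    return float(np.max(np.abs(sol.y[0][tail])))

amp_resonant = two_mode_ringdown(omega2=2.0)   # near the 2:1 condition
amp_detuned = two_mode_ringdown(omega2=3.1)    # far from any IR
```

Near the 2:1 condition the fundamental mode rings down faster than its linear damping alone would allow, which is the amplitude-dependent dissipation the fitted τ nleff captures.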
In summary, we study the tunability of nonlinear damping in a graphene nanomechanical resonator, where the fundamental mode is parametrically driven to interact with a higher mode. When the system is brought near a 2:1 IR, a significant increase in nonlinear damping is observed. In addition, the rate of increase of the parametric resonance frequency reduces in a certain locking regime, potentially stabilizing the values of f SNB and A SNB, which could aid frequency noise reduction 21 . Interestingly, as the drive level is further increased beyond the critical level F 1,crit , this locking barrier is broken, resulting in a surge in f SNB and the amplitude of the resonator. These phenomena were studied experimentally and could be accounted for using a two-mode theoretical model. The described mechanism can isolate and differentiate mode-coupling-induced nonlinear damping from other dissipation sources, and sheds light on the origins of nonlinear dissipation in nanomechanical resonators. It also provides a way to controllably tune nonlinear damping, which complements existing methods for tuning linear damping 15 , linear stiffness 12 , 13 , 14 and nonlinear stiffness 16 , 17 , 18 , extending our toolset to adapt and study the rich nonlinear dynamics of nanoresonators. Methods Sample fabrication Devices are fabricated using standard electron-beam (e-beam) lithography and dry etching techniques. A positive e-beam resist (AR-P-6200) is spin coated on a Si wafer with 285 nm of thermally grown SiO 2 . Cavity patterns ranging from 2 to 10 µm in diameter are exposed using the Vistec EBPG 5000+ and developed. The exposed SiO 2 is subsequently etched away in a reactive ion etcher using a mixture of CHF 3 and Ar gas until all the SiO 2 is removed and the Si exposed. Graphene flakes are then exfoliated from natural crystal and dry transferred on top of the cavities.
Laser interferometry The experiments are performed at room temperature in a vacuum chamber (10 −3 mbar). A power-modulated blue laser (λ = 405 nm) is used to thermomechanically actuate the nanodrum. The motion is then read out using a red laser (λ = 633 nm) whose reflected intensity is modulated by the motion of the nanodrum in a Fabry–Pérot etalon formed by the graphene and the Si back mirror (Fig. 1a). The reflected red laser intensity from the center of the drum is detected using a photodiode, whose response is read by the same VNA that modulates the blue laser. The measured VNA signal is then converted to displacement in nanometers using a nonlinear optical calibration method 35 (see Supplementary Note VII). Data availability The data that support the findings of this study are available from the corresponding authors upon request.
Mechanical sources of dissipation play a key role in modern physics, with applications that span nanomechanics, biomechanics, materials science, and quantum computing. In clocks and other vibrating mechanisms, energy loss is usually proportional to the speed of the vibrating object. But in special circumstances, where one resonant frequency of the resonator is exactly twice as high as another, these losses suddenly become much greater, as additional energy is lost through the coupling between these modes of vibration. With support from the European Research Council (ERC), associate professor Farbod Alijani and Ph.D. student Ata Keşkekler of the Department of Precision and Microsystems Engineering at TU Delft tuned the interaction between the vibrational states of a graphene nanodrum in such a way that one mode vibrates exactly twice as fast as another. In doing so, they also showed that this mechanism makes it possible to control the damping force via the coupling strength between the two vibration modes. Ata Keşkekler: "Normally, the rate at which the sound of a guitar string decays is independent of how hard you pluck it. However, if we make an analogy between a nanoresonator and a guitar, in this work we find a mechanism which indicates that if you tune another string close to a note that is the first octave of the string that is played, the rate of decay becomes dependent on how hard you pluck it. The closer to the octave, the stronger this dependency becomes." As there have been few possibilities to influence the damping force in nanosystems until now, this research paves the way to exciting possibilities to better understand the origin of dissipation at the nanoscale and realize ultrasensitive controllable sensors. For this study, the researchers worked with colleagues from Ben Gurion University and the Kavli Institute of Nanoscience at TU Delft. This week, Nature Communications published the results of this study.
doi.org/10.1038/s41467-021-21334-w
Biology
Scientists find sex differences in mosses play key role in carbon storage
Adam L. Healey et al, Newly identified sex chromosomes in the Sphagnum (peat moss) genome alter carbon sequestration and ecosystem dynamics, Nature Plants (2023). DOI: 10.1038/s41477-022-01333-5 Journal information: Nature Plants
https://dx.doi.org/10.1038/s41477-022-01333-5
https://phys.org/news/2023-02-scientists-sex-differences-mosses-play.html
Abstract Peatlands are crucial sinks for atmospheric carbon but are critically threatened due to warming climates. Sphagnum (peat moss) species are keystone members of peatland communities where they actively engineer hyperacidic conditions, which improves their competitive advantage and accelerates ecosystem-level carbon sequestration. To dissect the molecular and physiological sources of this unique biology, we generated chromosome-scale genomes of two Sphagnum species: S. divinum and S. angustifolium . Sphagnum genomes show no gene colinearity with any other reference genome to date, demonstrating that Sphagnum represents an unsampled lineage of land plant evolution. The genomes also revealed an average recombination rate an order of magnitude higher than vascular land plants and short putative U/V sex chromosomes. These newly described sex chromosomes interact with autosomal loci that significantly impact growth across diverse pH conditions. This discovery demonstrates that the ability of Sphagnum to sequester carbon in acidic peat bogs is mediated by interactions between sex, autosomes and environment. Main Sphagnum (peat moss) is both an individual genus and an entire ecosystem. Sphagnum -dominated peatlands are estimated to cover ~3–5% of the Northern Hemisphere boreal zone, yet store ~30% of the total global terrestrial carbon pool 1 . Sphagnum grows most abundantly in bogs and fens, where they engineer peatland habitats through acidification (via cation exchange for nutrient uptake) and depletion of oxygen to promote their own persistence and dominance 2 . Within bogs, Sphagnum species display niche preferences, growing at different heights above the water table (‘hummock’ mounds and ‘hollow’ valleys) and pH levels. 
This community microtopography is characteristic of Sphagnum -dominated peatlands where species habitat is phylogenetically conserved such that closely related species occupy similar niches 3 , 4 which correlates with differences in growth, carbon sequestration and tissue decomposability. For these well-documented niche differences among species, Sphagnum and their bogs have long served as a model for studies of community assembly, stress physiology and carbon sequestration 5 , 6 ; efforts which have recently been bolstered by ecological genomics and biogeochemical experimental innovations 7 . In addition to species genetic differentiation, Sphagnum community assembly and within-species trait variation appear to be controlled in part by sex-ratio biases, where sexes are differentially adapted to local environments 8 . Sex, in haploid-dominant life-cycle bryophytes that have been examined, is determined by U/V (U, female; V, male) sex chromosomes that segregate 1:1 among spores during meiosis 9 . However, the mechanism for sex determination in Sphagnum has not yet been elucidated. While a balanced sex ratio is expected within any bryophyte habitat, skewed ratios are often observed (evidenced by either phenotypic or genotypic observations) 10 , particularly within stressful environments. These biases have important implications on effective population sizes and in extreme cases could result in population collapse 11 , 12 . Given that bryophyte sex ratios are influenced by extreme environments and Sphagnum engineers harsh, unfavourable conditions within bogs, Sphagnum comparative genomics provides a unique opportunity to investigate the underlying genetic components of bryophyte sex-determination and sex-ratio bias. 
To facilitate genetic analysis of carbon sequestration and stress responses in peatlands and understand how Sphagnum responds to environmental stress (both native and self-generated), we developed the first chromosome-scale, de novo genome assemblies for S. angustifolium (subgenus Cuspidata ) and S. divinum (subgenus Sphagnum ). Genome sequencing and genetic map construction enabled the discovery of a minuscule (~5 megabase (Mb)) sex chromosome (chr. 20; V chromosome) that is one-quarter the size of other chromosomes, shares conserved gene order (synteny) with autosome chr. 7 and is derived from ancient whole-genome rearrangements. To study how Sphagnum contends with abiotic stress encountered in peat bogs, reference genotypes were exposed to laboratory-simulated pH stress, finding that species endemic to hummock and hollow niches differentially respond to alkalinity and acidity through hormone expression and plasmodesmata-mediated cell transport. Investigation of the effect of pH stress on Sphagnum physiology in our F 1 -haploid pedigree population found that quantitative trait loci (QTLs) that impacted growth were dependent on U/V chromosome inheritance, providing a direct link between peatland environmental conditions, carbon sequestration and sex-ratio biases that are commonly observed in bryophytes. Results Sphagnum represents an uncharacterized lineage of plants Peat accumulation within bogs is primarily linked to growth, biomass deposition and low rates of decomposition. These traits, as well as niche and pH preferences among Sphagnum species are phylogenetically conserved. While tremendous variation exists across the five Sphagnum subgenera 13 based on previously explored phylogenetic relationships and niche evolution 3 , 14 , for reference genome sequencing we selected two haploid genotypes, one from the ancestral hummock clade: subgenus Sphagnum ( S. divinum ; recently reclassified within the S. 
magellanicum species complex 15 ) and the other from the ancestral hollow clade: subgenus Cuspidata ( S. angustifolium ; previously S. fallax — misclassified at the time of collection; genotyped using marker data from ref. 16 ). Although Sphagnum diverged from other mosses millions of years ago (Ma), within the genus these references represent diverged lineages which diversified during the Miocene (7–20 Ma; ref. 17 ) and contain a large swath of Sphagnum functional ecological variation. Both were sequenced to ~70× consensus long read (CLR) PacBio coverage and assembled into highly contiguous chromosome sequences (Supplementary Table 1 ): the S. divinum and S. angustifolium genome assemblies were 439 Mb (contig N50: 17.5 Mb) and 395 Mb (contig N50: 17.4 Mb) in size, respectively. This is consistent with k -mer based genome size estimates for each reference of 424 and 367 Mb, respectively. Chromosomes were scaffolded for S. angustifolium from a high-density genetic map consisting of 2,990 genetic markers in 20 linkage groups (chromosomes 1–20). Gene content, order and orientation were then projected onto the S. divinum assembly to separately order contigs into 20 chromosomes. Each genome was also annotated with a combination of RNA-seq evidence-based and ab initio gene models, finding 25,227 primary gene models in S. divinum and 25,100 in S. angustifolium . Comparison of chromosomes between S. divinum and S. angustifolium showed that, despite their divergence, high collinearity between genomes is maintained (Fig. 1a ). Long contiguity of the genomes (contig N50: 17.5 and 12.1 Mb, respectively), paired with high synteny and annotation protein BUSCO scores (Viridiplantae: 98.3% each), show that the genome assemblies are high quality and the most contiguous non-vascular plant genomes produced thus far 18 , 19 , 20 , 21 . Interestingly, Sphagnum genome synteny does not extend to any other bryophyte or vascular plant lineages investigated (Supplementary Fig.
1 ), a result consistent with the findings of ref. 19 with Anthoceros hornwort genomes ( Sphagnum – Anthoceros divergence: 496 Ma). This is in direct contrast, however, with the moss Physcomitrium patens and liverwort Marchantia polymorpha where gene colinearity can still be observed with other land plants 22 , 23 . Fig. 1: Comparative genomics of Sphagnum . a , Syntenic mapping between chromosomes, comparing gene density and repeat content. The orientation of chr. 9 (marked *) is reversed for visualization purposes. Chr. 7 and chr. 20 are duplicated with expanded axes to the right of the main plot to highlight the differences in repeat content. b , S. angustifolium recombination rate (calculated from the S. angustifolium genetic map) with putative centromere positions, denoted with red asterisks showing RLC5 cluster positions. Lines are coloured on the basis of y axis position to better highlight regions of low recombination (yellow). c , Zoomed-in look at the RLC5 cluster region on chr. 7. Top panel shows recombination rate from the S. angustifolium genetic map (coloured by position on y axis), showing a drop in recombination coinciding with the RLC5 cluster. Bottom panel shows the recombination haplotypes (maroon and blue) within the F 1 -haploid pedigree ( n = 184; denoted on the y axis), finding no recombined haplotypes in the region overlapping with the RLC5 cluster. d , Recombination/LOD score heatmap for chr. 7 to show high recombination rate in pedigree and tight linkage among markers. We observed typical bryophyte chromosome structure 22 , 23 , 24 , 25 in these genomes. Repeat-rich pericentromeres (typical of angiosperms) were conspicuously absent while gene density was largely uniform across the genome, ranging between 20% and 25% (Fig. 1a ). Genome-wide repeat content (~30%), with unclassified and terminal inverted repeat CACTA superfamily being the most abundant types (7.8% and 6.8%, respectively; Fig.
1a ), was also similar to existing bryophyte genomes. Further, bryophytes have been shown to lack typical centromeric structures that are usually located via large arrays of tandem duplicated genes and increases in repeat density. In the P. patens genome, the authors noted unique Copia -like elements that clustered in distinct locations within each chromosome. These clusters were primarily composed of full-length and truncated RLC5 long terminal repeat (LTR) elements in tight clusters on each chromosome 22 . The same feature was described in green algae ( Coccomyxa subellipsoidea ), where it was believed to be a centromeric structure 26 . To determine whether RLC5 clusters were present in Sphagnum , the RLC5 Copia sequence was extracted from the Ceratodon purpureus 18 genome and was used to mask each Sphagnum genome. Each chromosome in both S. angustifolium and S. divinum possessed at least one dense RLC5 cluster locus, with some chromosomes having additional satellite clusters as well (Fig. 1b and Supplementary Table 2 ). A genome-wide scan of recombination across the S. angustifolium genome shows that the RLC5 clusters generally coincide with reduced recombination (Fig. 1b ). For example, close inspection of the RLC5 loci on chr. 7 found no recombination, with two separate non-recombining haplotypes present (Fig. 1c ). The fact that each chromosome possesses an RLC5-dominated, non-recombining repeat cluster suggests that these LTR Copia elements function as a highly conserved centromere structure, whose evolution can be traced back to green algal ancestors. A sequenced F 1 -haploid pedigree (derived from a single, field-collected sporophyte) of 184 S. angustifolium genotypes revealed a highly dense genetic map structure and high recombination rate in Sphagnum . The S. angustifolium genetic map is populated with 2,990 genetic markers and a total length of 5,396 centimorgan (cM) (Fig. 1b,d ).
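The k-mer based genome-size estimates quoted earlier (424 and 367 Mb, versus 439 and 395 Mb assemblies) rest on a simple identity: the total number of non-error k-mers divided by the homozygous coverage peak approximates genome size. A minimal sketch of that estimator follows; the histogram below is synthetic, not the Sphagnum sequencing data.

```python
def genome_size_from_kmer_hist(hist, min_depth=5):
    """Estimate genome size from a k-mer depth histogram.

    hist maps depth -> number of distinct k-mers seen at that depth.
    Depths below min_depth are treated as sequencing errors and
    excluded from both the peak search and the k-mer total.
    """
    clean = {d: n for d, n in hist.items() if d >= min_depth}
    peak_depth = max(clean, key=clean.get)           # homozygous coverage peak
    total_kmers = sum(d * n for d, n in clean.items())
    return total_kmers / peak_depth

# Synthetic histogram: error k-mers at depths 1-2, coverage peak near 70x.
hist = {1: 5_000_000, 2: 1_000_000,
        69: 100_000, 70: 400_000, 71: 100_000}
size = genome_size_from_kmer_hist(hist)
print(f"estimated genome size: {size / 1e6:.2f} Mb")
```

Tools such as GenomeScope refine this with mixture models for heterozygous and repeat peaks; the core arithmetic is the same.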
The average recombination rate of the genome is 10–30 cM per Mb, or an average physical distance of 73 kilobases (kb) per cM, which is an order of magnitude higher than most vascular plants 27 and appears to be a feature of mosses: the P. patens genetic map and recombination rate is similar to Sphagnum (5,432 cM; 27 linkage groups; ~11 cM per Mb) 22 , while the genetic map of M. polymorpha , a liverwort, contains roughly one-third of the recombination (76–111 cM per ~20 Mb chromosome) 23 . Given Sphagnum’s propensity for hybridization 28 , increased recombination may accelerate adaptation to environmental stresses and facilitate purging of deleterious alleles carried through linkage drag 29 . Sphagnum genetic diversity and phylogenetics Bryophytes, being one of the earliest plant clades 30 to colonize land ~500 Ma, have undergone morphological and genetic diversification to contend with stresses associated with terrestrial life. To explore these evolutionary relationships, we constructed a land plant phylogeny among orthologues using IQ-TREE 2 (ref. 31 ; Fig. 2a and Supplementary Fig. 2 ). Divergence time estimation using fossil calibrated rates suggests that the two Sphagnum species represented by our references diverged ~16 Ma, which coincides with Miocene era cooling in North America that possibly led to Sphagnum diversification and radiation 17 . Reconstructing the evolutionary history within Sphagnum has remained a difficult task due to complex patterns of gene flow, incomplete lineage sorting and introgression 28 . To separate these phylogenetic signals, we sequenced a Sphagnum diversity panel (17 species in 35 accessions; Supplementary Table 3 ) representing each subgenera ( Acutifolia, Cuspidata, Rigida, Sphagnum and Subsecunda ). Alignment to S. angustifolium found 5,155,719 single nucleotide polymorphisms (SNPs) and 834,730 insertions/deletions (indels) across the panel, evenly distributed across chromosomes (Extended Data Fig. 1a ). 
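The headline map statistics are easy to cross-check: dividing the 5,396 cM map length by the ~395 Mb assembly gives both the genome-wide average in cM per Mb and its inverse in kb per cM.

```python
map_length_cm = 5396   # S. angustifolium genetic map length (cM)
assembly_mb = 395      # S. angustifolium assembly size (Mb)

cm_per_mb = map_length_cm / assembly_mb
kb_per_cm = assembly_mb * 1000 / map_length_cm

print(f"{cm_per_mb:.1f} cM/Mb on average, i.e. {kb_per_cm:.1f} kb per cM")
# Consistent with the quoted 10-30 cM/Mb range and ~73 kb per cM.
```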
Visualization of the SNP variation using multidimensional scaling (MDS) shows the first two principal axes (45% and 22% of total variance explained) separate the largest taxonomic clades ( Acutifolia, Cuspidata and Sphagnum ) (Fig. 2b ), with axes two and three (6% total explained variance) separating niche preference among subgenera (Extended Data Fig. 1b ). Using 16,171 orthologues among Sphagnum species and non- Sphagnum peat mosses ( Flatbergium spp.), the nuclear phylogeny presented strong conflict with the chloroplast phylogeny (Fig. 2c ) suggesting evidence of past introgression (Extended Data Fig. 2a,b ). Fig. 2: Sphagnum phylogenetics and response to pH stress. a , Fossil calibrated land plant phylogeny, with the branch separating the chlorophyte algae Chlamydomonas and Volvox from other species shortened for clarity and showing only terminal tips representative of major vascular plant lineages. Node ages (Ma) of note include: (1) Bryophyte divergence (515 Ma), (2) liverwort–moss divergence (473 Ma), (3) Sphagnopsida divergence (391 Ma), (4) P. patens – C. purpureus divergence (268 Ma) and (5) Sphagnum radiation (16 Ma). b , Sphagnum diversity panel SNP MDS plot. Species are coloured by subgenera and niche ecosystem preference (closed circle, hummock; cross, hollow). c , Phylogenetic relationships among haploid samples in the diversity panel using nuclear and chloroplast data suggest cytonuclear discordance. Branch support reflects ultrafast bootstrap values and nodes not labelled received maximal support. d , The pH stress response among S. angustifolium and S. divinum . e , Sign test among shared GO terms under alkaline stress. Results show that genes with shared terms are upregulated in S. divinum and downregulated in S. angustifolium . Red dashed lines represent the 95% confidence intervals. Ecosystem engineering is one of the primary mechanisms that Sphagnum uses to gain a competitive advantage over other organisms.
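The MDS used for the diversity-panel SNPs (Fig. 2b) can be sketched as classical multidimensional scaling (principal coordinates) on a pairwise genetic distance matrix. The tiny 0/1 genotype matrix below is synthetic and stands in for the 5.2 M real SNPs; with clean fixed differences, the first axis separates the two simulated groups just as axis one separates the major subgenera.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS (PCoA): embed points from a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]         # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Synthetic haploid genotypes: two "subgenera" with fixed differences.
rng = np.random.default_rng(0)
group_a = (rng.random((5, 200)) < 0.05).astype(int)
group_b = (rng.random((5, 200)) < 0.05).astype(int)
group_b[:, :80] = 1 - group_b[:, :80]          # 80 near-fixed differences
G = np.vstack([group_a, group_b])
D = np.array([[np.sum(r1 != r2) for r2 in G] for r1 in G], dtype=float)
coords = classical_mds(D)                      # axis 1 splits the groups
```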
One strategy used by Sphagnum to achieve this is acidification of their environment by cation exchange, which keeps biomass inaccessible to microbial decomposition 3 . Given pH preference among Sphagnum subgenera, significant divergence among functional groups and thousands of genes we discovered bearing signatures of positive selection among hummock and hollow lineages (hummock, 3,806 genes; hollow, 1,759 genes; Supplementary Table 4 ), we hypothesize that gene expression regulatory network evolution would both underlie pH preferences and respond to altered edaphic conditions. Using the land plant phylogeny, investigation of gene families found that 3,865 gene families (Supplementary Table 5 ) were expanded (average 3.6 Sphagnum genes per orthogroup versus 2.5 in land plants) in the most recent common ancestor of Sphagnum , with significant gene ontology (GO) enrichments for plant signal transduction ( GO:0007165 ; adjusted P = 1.5 × 10 –3 ) and response to stress ( GO:0006950 ; adjusted P = 2.5 × 10 –2 ). When exploring the effect of pH exposure (pH 3.5 and 9.0; Fig. 2d ) on gene expression among S. divinum and S. angustifolium , pathways related to plant hormone signal transduction (plasmodesmata-mediated transport and jasmonic acid biosynthesis and response) were differentially expressed (Fig. 2e and Supplementary Tables 6 and 7 ), with genes associated with plasmodesmata-mediated transport being a top enriched target for transcription factors (TFs) in S. divinum . In mosses, both jasmonic acid (and its upstream precursor 12-oxo-phytodienoic acid; ref. 32 ) and plasmodesmata-mediated transport have been directly linked to phytohormone response to abiotic stress 33 , 34 , 35 , suggesting these phytohormone and cell-to-cell signalling pathways are highly conserved among vascular and non-vascular plants.
Whole-genome duplications and the origin of a sex chromosome Much like gene family expansion, whole-genome duplication (WGD) events provide the raw material for sub- and neo-functionalization and were important for terrestrial colonization from algae to land plants 36 . WGDs, while apparently pervasive in mosses, are difficult to detect due to their age 37 . Sphagnum , however, has highly conserved inter-/intragenomic synteny which enables ancestral chromosome reconstruction. Comparing S. angustifolium and S. divinum chromosome synteny reveals that, unlike P. patens with seven ancestral chromosomes 22 , Sphagnum possesses five ancestral chromosomes (A, B, C, D and E) that underwent two separate WGD events and a loss of a copy of ancestor E (4x ABCD; 3x E) to generate the modern-day Sphagnum genome. These ancestral chromosomes correspond to: A (chr. 1, 2, 5 and 8); B (chr. 3, 13, 14 and 18); C (chr. 4, 10, 11 and 15); D (chr. 6, 7, 9 and 12); and E (chr. 16, 17 and 19) (Fig. 3a ). Additionally, chr. 7 maintains synteny with chr. 3, 13 and 14, resulting from a portion of ancestral chromosome D either being duplicated or translocated onto chromosome B before the first WGD, being maintained throughout each duplication, then lost from chr. 18. Chr. 20 (discussed below), being ~4× smaller (4.7 Mb) than chr. 1–19, shares best-hit synteny with chr. 7 (Fig. 3b ) and is a possible relic from the ancestral B/D translocation/duplication and subsequent loss from chr. 18 (Fig. 3d ). Fig. 3: WGDs and ancestral chromosome reconstruction in Sphagnum . a , Interchromosomal synteny between S. divinum and S. angustifolium . S. divinum chromosomes are re-ordered to group paralogous chromosomes together while S. angustifolium chromosomes are arranged in increasing order (1–20). Ancestral B–D synteny on chr. 3, 13 and 14 is highlighted. b , Synonymous mutation rate among paralogous gene pairs in S. divinum . 
Two distributions derived from WGD are shown with the median of each peak (0.406; 0.643) marked with a coloured vertical line. c , Paralogous gene pairs among chr. 7 and chr. 20. Chr. 20 shares best-hit synteny with chr. 7. d , Ancestral chromosome reconstruction in Sphagnum . Little interchromosomal rearrangement has occurred after each WGD, except for the loss of one of the ancestral E chromosome homologues (noted with a red X). Genome duplication ages from ref. 38 . To reconstruct the evolutionary history of each WGD, synonymous mutation rates (Ks) were calculated for syntenic paralogues among putative ancestral chromosomes. The most parsimonious number of Gaussian distributions among paralogues was two, coinciding with Ks peaks at 0.406 and 0.643 (Fig. 3c and Supplementary Table 8 ). This finding is consistent with the number of WGD events investigated by ref. 38 , which found that Sphagnum and the closely related peat moss genera Flatbergium and Eosphagnum shared two WGD events (189–247 Ma and 102–122 Ma; 95% CI), based on Ks values and reconstructed gene trees. After each WGD, the Sphagnum genome has remained remarkably stable, undergoing few large-scale chromosome rearrangements or translocations, with some chromosomes maintaining almost 1:1 chromosome-scale synteny with their duplicated counterparts (for example, chr. 6 and 7; Fig. 3a ). In addition to 19 autosomal chromosomes, the assembly and genetic map of S. angustifolium first revealed the presence of another small chromosome (chr. 20), which was also present in S. divinum (chr. 20; 5.4 Mb). Chr. 20 is approximately one-quarter the size of other chromosomes and displays suppressed recombination (2 cM; expected recombination was ~60 cM, based on size and recombination rate; Fig. 4a ). Consistent with low recombination, chr. 20 also contains significantly more LTR content than chr.
1–19 (Ty3 16% versus 4%, Fisher’s exact test odds ratio 4.34, P < 0.001; Copia 1.2% versus 0.5%; Fisher’s exact test odds ratio 2.33, P < 0.001; Fig. 1a ) and contains a low number of genes (coding sequence bases 8% versus 26%, Fisher’s exact test odds ratio 0.31; P < 0.001) that have non-synonymous (dN)/synonymous (dS) (dN/dS) ratios consistent with relaxed purifying selection (Wilcoxon rank sum test P = 0.011; Extended Data Fig. 3a ). Fig. 4: U/V chromosome detection and analysis. a , Recombination rate per chromosome, finding chr. 20 has a much lower rate of recombination than expected from the other 19 chromosomes. b , Sliding window analyses (100,000 bp window, 10,000 bp jump) of nucleotide diversity and F ST between S. divinum chr. 20 SNP clusters. c , Exact k -mer dotplot with word size 15 for the shared sequence region between chr. 20 (putative V) and Scaffold9707 (putative U fragment), assembled from suspected female genotypes. d , S. angustifolium competitive mapping assay between chr. 20 and Scaffold9707. The ratio of reads mapped to the shared U/V region is shown, with individuals mapping to one sequence or the other (NA-ambiguous mapping ratio). Null distribution of autosome pairwise ratios is shown in yellow. e , Sphagnum diversity panel competitive mapping assay. Regardless of subgenera, individuals either mapped preferentially to the shared region of chr. 20 or Scaffold9707. Monoicous species ( S. squarrosum, S. compactum, S. strictum and S. fimbriatum ) each preferentially mapped to chr. 20. Positions on plot have been randomly ‘jittered’ by 0.1 units to improve readability among points. One of the first systematic descriptions of chromosome structure within Sphagnum was conducted in 1955, where chromosome squashes typically described 19 bivalents and usually two minor (or M) chromosomes that were notably smaller 39 .
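Returning to the WGD analysis above: decomposing the paralogue Ks distribution into its two peaks (medians 0.406 and 0.643) amounts to fitting a two-component 1-D Gaussian mixture. A minimal expectation-maximization sketch follows; the Ks values here are simulated around the reported peaks, not the actual paralogue data.

```python
import numpy as np

def fit_two_gaussians(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    w = np.array([0.5, 0.5])
    m = np.quantile(x, [0.25, 0.75])          # spread-out initial means
    s = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the normalization)
        dens = w * np.exp(-0.5 * ((x[:, None] - m) / s) ** 2) / s
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        m = (r * x[:, None]).sum(axis=0) / nk
        s = np.sqrt((r * (x[:, None] - m) ** 2).sum(axis=0) / nk)
    return np.sort(m)

rng = np.random.default_rng(1)
ks = np.concatenate([rng.normal(0.406, 0.06, 3000),
                     rng.normal(0.643, 0.06, 3000)])
means = fit_two_gaussians(ks)   # recovers the two simulated peaks
```

Model selection between one and two components (the "most parsimonious number" in the text) would add a likelihood criterion such as BIC on top of this fit.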
Sphagnum , like most (60%) 40 mosses, are dioicous (separate male and female haploid gametophytes) where sex is determined by U/V chromosome inheritance 41 . While tempting to assign chr. 20 to a sex chromosome on the basis of its characteristics (non-recombining, highly repetitive, relaxed purifying selection) and similarity to other bryophyte sex chromosomes 20 , 21 , 42 , the same genomic features are true for B chromosomes, which are pervasive throughout the plant kingdom 43 , 44 . As B chromosomes are cytogenetically inherited, we expected that the population genetic structure of polymorphism on a B chromosome would mirror variation found on the primary 19 chromosomes. Alternatively, moss U/V sex chromosomes that evolved in the ancestor of Sphagnum should possess high nucleotide diversity and strong patterns of divergence between females and males regardless of neutral genetic population structure of polymorphism on the autosomes. To test whether chr. 20 is a sex or B chromosome, we sequenced ten wild S. divinum samples collected across North America (Supplementary Table 9 ). SNPs on chr. 20 formed two distinct and highly diverged ( F ST > 0.95) clusters that did not match chr. 1–19 structures (Supplementary Fig. 3 ). This, in addition to high nucleotide diversity on chr. 20 between clusters ( π = 0.0015; Fig. 4b and Supplementary Table 10 ), suggests that chr. 20 is a sex chromosome. To definitively determine whether chr. 20 was either a U or V sex chromosome, we investigated its structure within the S. angustifolium F 1 -haploid pedigree, which contained the maternal parent of the cross. Mapping reads from the pedigree to chr. 20 showed a bimodal distribution (designated ‘low mapping’ and ‘high-mapping’; Extended Data Fig. 3b ). As the maternal parent was a ‘low mapping’ individual, we suspected that the S. angustifolium reference is male and chr. 20 was a putative V chromosome. 
To investigate its U chromosome counterpart, reads from 20 individuals within the low-mapping chr. 20 distribution (including the maternal parent) were combined and assembled together (increasing coverage) using HipMer 45 . Protein sequences from chr. 20 were aligned to the HipMer scaffolds and any sequence that corresponded to each protein’s top alignment were extracted (coverage >60%). Scaffolds were then added to the S. angustifolium genome assembly for a competitive mapping assay among the pedigree population. One HipMer scaffold, Scaffold9707 (putative U chromosome segment—133,694 base pairs (bp)), displayed a similar, yet opposite, bimodal mapping pattern to chr. 20 (Extended Data Fig. 3b ). Scaffold9707 is primarily composed of repeat content, except for 6.3 kb which contains a shared gene with chr. 20 (Sphfalx20G000800; calcium-binding EF-hand protein; 92% sequence identity) (Fig. 4c ). Pairwise read count ratios (Supplementary Table 11 ) within this shared region found that reads from almost all individuals in the pedigree definitively map to one sequence or the other (except two, labelled NA), which is not significantly different than the expected 50:50 sex ratio of an F 1 -haploid population (female, 83; male, 91; exact binomial test P = 0.59). Pairwise count ratios between randomly sampled 6.3 kb regions across chr. 1–19 show no mapping bias (with the median shared U/V mapping ratio observed in 0.0006 autosome combinations) (Fig. 4d ). To conclusively test whether chr. 20 is the male V chromosome and Scaffold9707 represents a fragment of the female U chromosome, PCR primers were designed from the shared 6.3 kb region (Fig. 4c ), intended to amplify a sex-specific ~400 bp target amplicon. DNA from vouchered Sphagnum samples ( n = 28) where sex was known (through identification of sexual structures (females n = 16; males n = 12; Supplementary Table 12 )) was extracted and used for PCR amplification. PCR results (Extended Data Fig. 
3c,d) show that 100% of males and females generated their expected amplicon with no cross-reactivity, confirming that chr. 20 represents male (V) and Scaffold9707 represents female (U) sequences. While Sphagnum is predominantly dioicous, some species are monoicous 46 . To better understand sex determination in monoicous Sphagnum , species within the diversity panel (which contains both dioicous and monoicous species) were competitively mapped against the shared region of chr. 20 and Scaffold9707 (Supplementary Table 13). Read mapping preference revealed two distinct groupings that were independent of phylogenetic relationship (Fig. 4e). Consistent with other bryophytes, the evolution of sex is not strictly related to changes in ploidy 47 . Sex could be confidently assigned in dioicous species (Supplementary Tables 3 and 13); however, all monoicous individuals tested mapped preferentially to chr. 20, suggesting that the V chromosome (or the lack of a U) may play a role in this transition. Lastly, to determine whether Sphagnum sex chromosomes share an origin with other U/V bryophytes, we built 36 gene trees from orthogroups that contained a gene annotated to chr. 20 in Sphagnum and a U- or V-linked gene in Ceratodon 18 or Marchantia 21 . None of these showed a topology supporting a shared sex chromosome system, but rather separate gene capture and loss events on the sex chromosomes in these species, suggesting that the sex chromosomes of Sphagnum may have arisen independently (Supplementary Figs. 4 and 5). Sex-specific growth response to acidic bog conditions Despite its importance to global carbon cycling, the genetic mechanisms underlying the adaptation of Sphagnum to its engineered low pH environment are poorly understood. To infer the genetic loci that cause variation in the response of Sphagnum to pH stress, clones of the F 1 -haploid pedigree population were exposed to control (pH 6.5), acidic (pH 4.5) and alkaline (pH 8.5) conditions.
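The exact binomial check of the pedigree sex ratio reported above (83 females, 91 males against an expected 50:50) can be reproduced with the standard library. This sketch assumes the usual two-sided test; at p = 0.5 the distribution is symmetric, so "at least as extreme" reduces to an equal or larger deviation from the mean.

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Two-sided exact binomial test P value for k successes in n trials.
    With p = 0.5 the distribution is symmetric, so we sum the point
    probabilities of all outcomes deviating from the mean at least as
    much as k does."""
    dev = abs(k - n * p)
    total = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                for i in range(n + 1) if abs(i - n * p) >= dev)
    return min(1.0, total)

# Pedigree counts from the text: 83 females + 91 males = 174 offspring.
p_value = binom_two_sided(91, 174)  # close to the reported P = 0.59
```

A P value this large means the observed 83:91 split is entirely compatible with Mendelian 50:50 segregation of the U and V chromosomes.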
We used relative growth rate (hereafter ‘growth’, defined as the occupied area within each imaging well (Fig. 5a) from time zero, log transformed) as the phenotype in each experimental treatment and calculated the relative response of each genotype as the difference between growth under treatment and growth under the control (Supplementary Table 14). Growth was fastest under control pH (growth rate +0.88 mm² d⁻¹). Comparison among conditions found significant differences in growth (Kruskal–Wallis test; chi-square value 198.62; d.f. = 2, P < 0.001), which was slowest within the low pH treatment (growth rate −0.08 mm² d⁻¹; Student’s t-test, t = 21.4; P < 0.001). Within the high pH condition, there was a bimodal distribution of growth, where some individuals exhibited growth patterns similar to the control condition, while others grew similarly to those at low pH. Growth at high pH was significantly different from both the control (Wilcoxon rank sum test, n = 148, W = 15,595, P < 0.001) and low pH (Wilcoxon rank sum test, n = 148, W = 5,562, P < 0.001). Fig. 5: S. angustifolium pedigree QTL mapping in response to pH stress. a, Growth of pedigree genotypes under control and acidic stress conditions. b, Relative growth rates for the S. angustifolium pedigree under control, high (pH 8.5) and low (pH 4.5) pH conditions (n = 150). c, QTL mapping of low pH growth differences. Two QTL peaks were detected on chr. 7 and chr. 10. LOD scores, conditional on other QTL in a multiple QTL model, are presented. d, QTL effect plots. The connected line plots (shown with error bars) show the differences in growth for the variant alleles underlying each QTL locus. Each QTL is dependent on sex and the autosomal parental allele (blue, A allele; orange, B allele). Panels are ordered by low (pH 4.5), control (pH 6.5) and high (pH 8.5) conditions, with data presented as mean values ± s.e. MQM, multiple QTL mapping; RGR LS, relative growth rate least squares.
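Under one plausible reading of the phenotype definition above (log-transformed occupied area from time zero), growth and the relative response can be computed as follows; the per-day scaling and the area inputs are assumptions for illustration, not the study's exact formula.

```python
from math import log

def growth_rate(area_t0, area_t, days):
    """Relative growth rate from occupied well area at two time points:
    the change in log-transformed area per day."""
    return (log(area_t) - log(area_t0)) / days

def relative_response(treatment_rate, control_rate):
    """Relative response of a genotype: growth under a pH treatment
    minus growth under the control condition (negative values mean the
    treatment slowed growth relative to control)."""
    return treatment_rate - control_rate
```

Using log-transformed area makes the rate proportional (a doubling contributes the same increment regardless of starting size), which is why relative growth rates are comparable across genotypes of different initial plug sizes.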
In addition to size and growth dimorphism 48 , 49 , bryophyte sex-ratio biases are often observed, where females tend to be favoured in a population 11 , 12 , 50 (although male bias has been observed in Sphagnum ) 8 . Given this previous research, we expected the presence of U or V genetic markers to be a strong predictor of growth under variable pH conditions. Unexpectedly, there were no significant differences in growth between sexes (Kruskal–Wallis test; chi-square value 0.80; d.f. = 1, P > 0.05), nor any significant effects of sex within any of the experimental treatments (all Wilcoxon rank sum tests, P > 0.05) (Fig. 5b). Our results also did not reveal differences in nucleotide diversity among inferred males and females within S. divinum wild populations (as predicted if mortality differences caused sex-ratio biases 11 ; Supplementary Table 10) or sex-ratio bias within the pedigree (although this population was reared under artificial conditions). Despite the lack of additive sex-biased phenotypic responses to pH conditions, it is possible that loci on the sex chromosomes or otherwise associated with cytoplasmic inheritance may interact with autosomal variation. Such sex (or cytoplasm)-by-autosome interactions are a common form of epistasis and may underlie genotype-by-environment (G × E) interactions to abiotic stresses in plants 51 . To test for epistatic interactions between autosomes and sex chromosomes that cause differential growth responses, we conducted QTL mapping on the change in growth between experimental treatments and the control condition. In contrast to the lack of global sex-driven G × E, QTL scans conditioning on the additive effect of sex and testing for autosome–sex interactions detected two significant QTL peaks on chr. 7 (logarithm of the odds ratio (LOD) 3.8) and chr. 10 (LOD 6.4; Fig. 5c) for the response to low pH stress.
While exposure to pH stress (both high and low) reduced growth across the pedigree, the effect of that stress was dependent on sex and on epistatic interactions at each major QTL locus. We found a significant interaction between sex-genotype and environment at the QTLs on chr. 10 (t = 2.462, d.f. = 131.55, P < 0.05) and chr. 7 (t = 2.095, d.f. = 131.75, P < 0.05) when comparing the differences in growth between control and low pH conditions. Investigating these significant interactions in each sex separately revealed a significant G × E interaction among males at both loci (chr. 10 model test of fit F (1,155) = 23.885, P < 0.001; chr. 7 model test of fit F (1,155) = 13.21, P < 0.001). In contrast, growth in females was significantly impacted upon exposure to low pH but lacked any additive or epistatic effects at either locus (chr. 10: F (2,110) = 14.71, P < 0.001; chr. 7: F (2,110) = 14.71, P < 0.001). Because each QTL peak was driven by sex-specific G × E interactions (often caused by trans-regulatory evolution), we investigated chr. 20 TFs with autosomal trans-effects. There were two annotated TFs on chr. 20, a mini-zinc finger (Sphfalx20G006700) and a Trihelix family protein (Sphfalx20G007700). Mini-zinc finger TFs have been broadly implicated in plant growth, root and flower development, plant life span, fertility and hormone insensitivity 52 , while trihelix TFs have been linked to biotic/abiotic stress response and tissue development 53 , 54 . The expression of both TFs was highly correlated (r > 0.8) with protein kinases within the QTL peak on chr. 10 (Supplementary Table 15). The rank-change differences in growth observed among males (where an allele is beneficial in one environment but detrimental in another; Fig. 5d) are indicative of antagonistic pleiotropy 55 , 56 .
This suggests that separate adaptive strategies (specialist versus generalist) may be used by males and females in Sphagnum under abiotic stress conditions and could provide an explanation for why females (which lacked antagonistic pleiotropy) may be generally favoured in bryophyte populations. Discussion The deep divergence between Sphagnum and other mosses, and land plants in general, is underscored by their genomics, biology and ecological function. Their ability to hybridize and to generate unique allelic combinations through high recombination, paired with their ecosystem engineering, allows Sphagnum to dominate across multiple biomes around the world. The key to the importance of Sphagnum for global carbon cycling lies in the ecological differences between hummock and hollow species, which are directly affected by differences in growth, cell wall structure and pH preference. The ability of Sphagnum to contend with both native and induced environmental stresses encountered within peatlands is directly linked to differential stress response, jasmonic acid precursors and cell-to-cell signalling via plasmodesmata, pathways that arose when plants first colonized land ~500 Ma (ref. 24 ). An unexplored aspect of peatland carbon cycling is the effect of sex on growth and carbon sequestration in Sphagnum . In C. purpureus , females tend to produce thicker and larger leaves relative to males, which enables greater carbon sequestration (measured through leaf photochemistry) 49 . Bryophyte populations tend to skew toward one sex over the other 10 , 57 ; however, hypotheses put forth to explain sex-ratio biases in bryophytes (for example, ‘shy’ males) have not accounted for epistatic interactions between U/V sex chromosomes and autosomes that result in differential responses to environmental stresses. Local adaptation and maintenance of diversity in Sphagnum could be driven by sex-specific G × E interactions, resulting in plastic responses to stress within peatland conditions.
Differential responses between sexes could certainly result in sex-ratio bias if one sex can respond more effectively to persistent abiotic stress. Exploration of these principles in the Sphagnum pedigree population revealed a complex interaction between sex, genotype and environment, which would have remained hidden without the discovery of small U/V sex chromosomes in Sphagnum . These interactions were governed by antagonistic pleiotropy and will require further study to fully elucidate their putative effects on sex-ratio biases, growth and carbon sequestration in Sphagnum and bryophytes in general. Sphagnum , with its small haploid genome, ease of maintenance and phenotyping in large-scale experimental populations 58 and minuscule sex chromosomes linked to an ancient whole-genome rearrangement, serves as a tractable model organism not only for niche ecosystem preference and carbon cycling but also for sex chromosome evolution. Methods Plant material collection Reference genome materials ( S. angustifolium and S. divinum ) were collected from the Marcell Experimental Forest (SPRUCE S1-Bog) (47.506639, −93.455897) by D. Weston in July 2016 and are maintained at the Duke herbarium (Duke University, NC, USA). Voucher information for Sphagnum samples included in the analyses is provided in Supplementary Table 3. Unextracted portions of each specimen have been deposited in the Duke herbarium. DNA extraction and sequencing Genomic DNA from references grown in axenic cultures (derived from a single, surface-sterilized (70% ethanol) gametophyte stem) was extracted using the protocol of ref. 59 with minor modifications (2% CTAB buffer with proteinase K, PVP-40, sodium metabisulfite and beta-mercaptoethanol). DNA purity was measured with Nanodrop, DNA concentration was measured with a Qubit HS kit and DNA size was validated by pulsed-field gel electrophoresis.
Illumina libraries for the references were prepared as tight-insert (400 bp) fragment libraries; 2 μg of DNA was sheared to 400 bp using a Covaris LE220 and size selected using a Pippin (Sage Science). The fragments were treated with end-repair, A-tailing and ligation of Illumina-compatible adaptors (IDT) using the Kapa-Illumina library creation kit (Kapa Biosystems). The prepared libraries were quantified using the Kapa Biosystems next-generation sequencing library quantitative PCR (qPCR) kit and run on a Roche LightCycler 480 real-time PCR instrument. The quantified libraries were then prepared for sequencing on the Illumina HiSeq sequencing platform using a TruSeq Rapid paired-end cluster kit, v.2, with the HiSeq2500 sequencer instrument to generate a clustered flowcell for sequencing. Sequencing was performed on the Illumina HiSeq2500 sequencer using HiSeq Rapid SBS sequencing kits, v.2, following a 2 × 250 indexed run recipe. Sphagnum PacBio (20 kb) libraries (from the same genotypes listed above for Illumina) were prepared with BluePippin size selection; 3.4 μg of genomic DNA was sheared to 20 kb using Covaris g-TUBEs. The sheared DNA was treated with exonuclease to remove single-stranded ends and with DNA damage repair mix, followed by end-repair and ligation of blunt adaptors using SMRTbell Template Prep Kit 1.0 (Pacific Biosciences). The library was purified with AMPure PB beads and size selected with BluePippin (Sage Science) at a >6 kb cutoff. The PacBio sequencing primer was then annealed to the SMRTbell template library and sequencing polymerase was bound using Sequel Binding Kit 2.0. The prepared SMRTbell template libraries were then sequenced on a Pacific Biosciences Sequel sequencer using v.3 sequencing primer, 1 M v.2 SMRT cells and v.2.0 sequencing chemistry with 1 × 600 sequencing movie run times. Each of the genomes was sequenced to ~75× raw haploid coverage.
The long reads were assembled using MECAT (v.1.2) 60 and subsequently polished with long reads using ARROW (v.2.2.2) 61 . Diversity panel samples (collected from the wild; Supplementary Table 3) were prepared as Illumina regular fragment libraries, 600 bp. Plate-based DNA library preparation for Illumina sequencing was performed on the PerkinElmer Sciclone NGS robotic liquid handling system using a Kapa Biosystems library preparation kit. A total of 200 ng of sample DNA was sheared to 600 bp using a Covaris LE220 focused-ultrasonicator. The sheared DNA fragments were size selected by double-SPRI and the selected fragments were then end-repaired, A-tailed and ligated with Illumina-compatible sequencing adaptors from IDT containing a unique molecular index barcode for each sample library. The prepared libraries were quantified using the KAPA Biosystems next-generation sequencing library qPCR kit and run on a Roche LightCycler 480 real-time PCR instrument. The quantified libraries were then prepared for sequencing on the Illumina HiSeq sequencing platform using a TruSeq paired-end cluster kit, v.4. Sequencing was performed on the Illumina HiSeq2000 sequencer (yielding ~80× coverage per library) using HiSeq TruSeq SBS sequencing kits, v.4, following a 2 × 150 indexed run recipe. DNA from S. angustifolium pedigree samples was extracted similarly and sequencing libraries were constructed using an Illumina TruSeq DNA PCR-free library kit with standard protocols. Libraries were sequenced on an Illumina X10 instrument using 150 bp paired-end reads to a sequencing depth of ~15× coverage. RNA experimental treatments S. divinum and S. angustifolium grown in sterile tissue culture were used in all treatments. There were a total of eight treatments, with four replicates for S. divinum and two replicates for S. angustifolium .
Before the experiment, 2.0 cm plugs of axenic Sphagnum were plated on BCD agar media at pH 6.5 and grown for 2 months at ambient temperature (20 °C) under a 350 µmol m−2 s−1 photosynthetically active radiation (PAR) 12 h light/dark cycle. At 8:00 on the morning of the treatments, Sphagnum tissue was transferred to Petri dishes with 15 ml of the appropriate BCD liquid media and placed in a temperature-controlled growth cabinet. Excluding the dark treatment, all samples were kept under 350 PAR for the duration of the experiment. Morning treatment samples ( S. divinum only) were harvested 10 min after the light turned on and all other samples were harvested at 12:00. After each experiment the material was blotted dry, placed in a 15 ml Eppendorf tube, flash frozen in liquid nitrogen and stored at −80 °C until RNA extractions were completed. For the control treatment, Sphagnum tissue was placed in a 22.05 cm² Petri dish containing BCD media at pH 6.5 and incubated in a growth cabinet at 20 °C and ambient light (350 PAR). To test low pH gene expression, the sample was placed in a 22.05 cm² Petri dish containing pH 6.5 BCD media at 8:00. Each hour, the pH was gradually decreased until the sample was transferred to pH 3.5 media at 11:00. The samples were harvested at 12:00. This treatment was repeated for the high pH experiment, except that the sample was gradually brought from pH 6.5 to pH 9.0. Temperature experiments were controlled in growth cabinets with tissue in 22.05 cm² Petri dishes containing pH 6.5 BCD media. The high temperature treatment began at 20 °C and, over 3 h, the temperature was gradually increased to 40 °C. The low temperature treatment began at 20 °C and, over 3 h, was gradually decreased to 6 °C. To test the effect of drought on gene expression, tissue was placed on dry plates (no BCD media) for the duration of the experiment. The effect of dark on gene expression was tested by placing material in a BCD-filled Petri dish in complete darkness from 8:00 to 12:00.
To evaluate gene expression present during immature growth stages, a sporophyte was collected from the mother of the S. angustifolium pedigree and germinated on solid Knop medium under axenic tissue culture conditions. After 10 d of growth, plantlets were predominantly at the thalloid protonemata-with-rhizoids stage and were flash frozen in LN2 for RNA isolation. RNA library preparation and sequencing Total RNA was extracted from 100 mg of tissue with CTAB lysis buffer and the Spectrum Plant Total RNA Kit. Plate-based RNA sample preparation (Illumina RNA-seq with poly(A) selection) was performed on the PerkinElmer Sciclone NGS robotic liquid handling system using the Illumina TruSeq Stranded mRNA HT sample prep kit with poly(A) selection of messenger RNA, following the protocol outlined by Illumina in their user guide ( ) and with the following conditions: total RNA starting material was 1 μg per sample and eight cycles of PCR were used for library amplification. The prepared libraries were quantified using the KAPA Biosystems next-generation sequencing library qPCR kit and run on a Roche LightCycler 480 real-time PCR instrument. Sequencing of the flowcell was performed on the Illumina NovaSeq sequencer using NovaSeq Xp v.1 reagent kits, S4 flowcell, following a 2 × 150 indexed run recipe. Pedigree growth and phenotyping A sporophyte-bearing S. angustifolium mother, MNSA5 (species verified by J. Shaw, Duke University), was collected at the SPRUCE experimental site within the S1-Bog on the Marcell Experimental Forest 62 on 15 July 2012. Gametophytes were shipped to Oak Ridge National Laboratory, where only attached sporophytes were removed from the gametophyte and kept in separate microcentrifuge tubes. For culturing, a single sporangium from a single female gametophyte was transferred to a sterile 1.5 ml microcentrifuge tube in a laminar flow hood, washed in 10% bleach solution for 5 min with periodic mixing, followed by 3× washes with sterile type I water.
The surface-sterilized capsule was crushed with a sterile pipette tip in 200 µl of sterile water and diluted to 1 ml with additional sterile water; 200 µl of the diluted suspension was further diluted to 1 ml and spread on a BCD/agar plate topped with a disc of sterile cellophane. Plates were incubated at 25 °C in continuous light (~150 µmol m−2 s−1 PAR) to test germination. After 2 weeks, thalloid gametophytes were visible and individual protonemata were transferred to single cells of 24-well plates containing solid BCD 63 . Eventually, a single capitulum from each growing gametophyte (n = 600), as well as the maternal gametophyte, was transferred to solid BCD or Knop plates or magenta vessels for maximum growth and maintenance at 25 °C with 16 h days. Before the collection of phenotypes, 2.0 cm plugs of axenic S. angustifolium were plated on BCD agar media at pH 6.5 and grown for 2 months at ambient temperature (20 °C) under a 350 PAR 12 h light/dark cycle. A single capitulum of axenic S. angustifolium was added to each well of a 12-well plate with 2 ml of BCD media (pH 4.5, 6.5 or 8.5). The plates were placed into growth chambers with a 12 h light/dark cycle. Black and white images were collected weekly and surface area was measured using the ImageJ software 64 . The change in surface area was used as a proxy for growth. Map construction The 184 individuals sequenced in the pedigree population were aligned to the S. angustifolium reference and used for SNP calling as outlined below. A total of 2,856,328 SNPs were called, with 2,590,426 remaining after removing samples with >70% missing data (n = 12) and SNPs with >2% missing data. The cleaned SNP matrix was phased using the maternal ILEE library to 1,113,729 SNPs. This phased dataset showed little segregation distortion, so the genotype matrix was further subsetted to remove markers in high linkage disequilibrium (>99.9%) and to retain markers displaying 35–65% representation across the pedigree (n = 19,317).
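The SNP-matrix cleaning and the pairwise recombination fractions used for map construction can be sketched in plain Python. The genotype encoding ('A'/'B' phased parental alleles, None for missing) and the exact filter order are assumptions for illustration; the real pipeline used the qtltools R package.

```python
def filter_genotypes(matrix, max_sample_missing=0.7, max_snp_missing=0.02,
                     rep_range=(0.35, 0.65)):
    """Filter a SNP-by-sample matrix of phased calls ('A'/'B'/None):
    drop samples with >70% missing data, then SNPs with >2% missing
    data, then keep markers whose A-allele representation falls in the
    35-65% window expected for an F1-haploid cross."""
    n_snps = len(matrix)
    keep = [j for j in range(len(matrix[0]))
            if sum(row[j] is None for row in matrix) / n_snps <= max_sample_missing]
    kept_rows = []
    for row in matrix:
        vals = [row[j] for j in keep]
        if sum(v is None for v in vals) / len(vals) > max_snp_missing:
            continue  # too much missing data at this SNP
        called = [v for v in vals if v is not None]
        if rep_range[0] <= called.count('A') / len(called) <= rep_range[1]:
            kept_rows.append(vals)
    return kept_rows

def recombination_fraction(m1, m2):
    """Pairwise RF between two markers: the fraction of individuals
    (ignoring missing calls) whose parental alleles differ. Markers are
    joined into a linkage group when this falls below a threshold."""
    pairs = [(a, b) for a, b in zip(m1, m2) if a is not None and b is not None]
    return sum(a != b for a, b in pairs) / len(pairs)
```

In a haploid F1 pedigree each individual carries exactly one parental allele per marker, so the RF is simply the observed crossover frequency between two loci.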
The genotype matrix was reformed as a QTL object using the GitHub R package qtltools (v.1.2.0) 65 and pairwise recombination fractions (RFs) among markers were calculated. Markers retained in the QTL object had no RF < 0.01 with any other marker (n = 5,969). Linkage groups (chr. 1–20) were formed using pairwise RFs with a maximum RF of 0.23 and a minimum LOD score of three. Markers on each linkage group were ordered using a travelling salesman problem solver, which minimizes the number of crossover events 66 . Lastly, after removing markers with segregation distortion patterns and those with high leverage (causing map expansion), markers closer than 1 cM apart were removed (n = 2,990). Chromosome construction and assessment Genetic map markers (containing linkage and correlated cM position; n = 1,081,918) were extracted from the genetic map and aligned back to the PacBio assembly for S. angustifolium . Misjoins in the contigs were characterized by abrupt changes in linkage groups. Misjoins (n = 10) were broken, re-ordered and re-oriented on the basis of the genetic map. S. angustifolium (v.0.5 annotation) gene models were aligned to the newly oriented chromosomes to assign each protein a relative position along each chromosome. Proteins (n = 26,939) were then aligned to S. divinum PacBio contigs to identify misjoins (n = 4), which were broken, ordered and oriented into 20 chromosomes. Genome size estimation The genome size of two samples ( S. angustifolium , Illumina library ZCGA; S. divinum , Illumina library AGHCS) was estimated using k-mers of size 21. The Illumina reads were quality trimmed (for adaptors and low-quality bases) using in-house scripts. Jellyfish (v.2.3.0) 67 was used to estimate k-mer abundance and frequency distribution. Genome length and genome characteristics were estimated using Genomescope (v.2.0) 68 . Annotation Transcript assemblies were made from stranded paired-end Illumina RNA-seq reads, ~598 M pairs of 2 × 150 bp for S.
divinum and ~1.7 B pairs of 2 × 125 bp for S. angustifolium , using PERTRAN, which conducts genome-guided transcriptome short-read assembly via GSNAP (v.2013-09-30) 69 . Subsequently, 117,772 transcript assemblies for S. divinum and 122,707 transcript assemblies for S. angustifolium were constructed using PASA (v.2.0.2) 70 from the RNA-seq transcript assemblies above with the respective genome. Loci were determined by transcript assembly alignments and/or EXONERATE (v.2.4.0) alignments of proteins from Arabidopsis thaliana , Glycine max , Oryza sativa Kitaake, Setaria viridis , Vitis vinifera , Amborella trichopoda , M. polymorpha and Chlamydomonas reinhardtii , high-confidence cross-species Sphagnum preliminary gene models ( S. angustifolium for S. divinum or S. divinum for S. angustifolium ) and Swiss-Prot proteomes to the repeat-soft-masked respective genome using RepeatMasker (v.open-4.0.7) 71 , with up to 2 kb extension on both ends unless extending into another locus on the same strand. The repeat library consists of de novo repeats identified by RepeatModeler (v.open-1.0.11) 72 on the respective genome and repeats in RepBase. Gene models were predicted by the homology-based predictors FGENESH+ (v.3.1.0) 73 , FGENESH_EST (similar to FGENESH+, with EST as splice site and intron input instead of protein/translated open reading frame (ORF)), EXONERATE 74 , PASA assembly ORFs (an in-house homology-constrained ORF finder) and AUGUSTUS (v.3.1.0) via BRAKER1 (v.1.6) 75 . The best-scored predictions for each locus were selected using multiple positive factors, including EST and protein support, and one negative factor: overlap with repeats. The selected gene predictions were improved by PASA. Improvements include adding untranslated regions, splicing correction and adding alternative transcripts. PASA-improved gene model proteins were subjected to protein homology analysis against the above-mentioned proteomes to obtain Cscore and protein coverage.
Cscore is the ratio of a protein's BLASTP (v.2.2.26) score to the mutual-best-hit BLASTP score, and protein coverage is the highest percentage of the protein aligned to the best homologue. PASA-improved transcripts were selected on the basis of Cscore, protein coverage, EST coverage and CDS overlap with repeats. Transcripts were selected if Cscore was ≥0.5 and protein coverage was ≥0.5, or if they had EST coverage, provided that CDS overlap with repeats was <20%. For gene models whose CDS overlaps with repeats by >20%, Cscore had to be at least 0.9 and homology coverage at least 70% for selection. The selected gene models were subjected to Pfam analysis, and gene models whose protein is >30% covered by Pfam TE domains were removed, as were weak gene models: incomplete models, models with low homology support and without full transcriptome support, and short single-exon (<300 bp CDS) models lacking both a protein domain and good expression were manually filtered out. Transcriptome analysis Illumina paired-end RNA-seq 150 bp reads were quality trimmed ( Q ≥ 25) and reads <50 bp after trimming were discarded. RNA-seq samples with high-quality sequences were aligned to the S. angustifolium and S. divinum reference genomes using GSNAP (v.2013-09-30) 69 and counts of reads uniquely mapping to annotated genes were obtained using HTSeq (v.0.11.2) 76 . Normalized count data were obtained using the relative log expression (RLE) method in the DESeq2 package (v.1.14.1) 77 . Genes with low expression were filtered out by requiring ≥2 RLE normalized counts in at least two samples for each gene. Differential gene expression analysis was performed using DESeq2 with adjusted P < 0.05 (Benjamini–Hochberg method) and a log fold change >1 as the statistical cutoffs for differentially expressed genes. Expression data for all included treatments are available in Supplementary Tables 6 and 7 and Supplementary Fig. 6 .
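The transcript-selection thresholds above amount to a small decision rule. The sketch below restates them directly; the function name and the handling of the exact 20% boundary are assumptions, not the pipeline's actual code.

```python
def keep_transcript(cscore, protein_coverage, has_est_support, cds_repeat_overlap):
    """Selection rule restated from the text. With CDS repeat overlap of
    20% or more, require Cscore >= 0.9 and homology coverage >= 0.7;
    otherwise keep models with Cscore >= 0.5 and protein coverage >= 0.5,
    or with EST support. All fractions are in [0, 1]; boundary handling
    at exactly 20% is a choice made here."""
    if cds_repeat_overlap >= 0.20:
        return cscore >= 0.9 and protein_coverage >= 0.7
    return (cscore >= 0.5 and protein_coverage >= 0.5) or has_est_support
```

The stricter branch reflects the fact that models overlapping repeats are more likely to be transposable-element fragments, so stronger homology evidence is demanded before they are retained.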
Additional sign test (probability of upregulation) comparisons between shared GO terms per experimental treatment are provided in Supplementary Fig. 7. Weighted gene co-expression networks were constructed using the WGCNA R package (v.1.49) 78 with variance-stabilizing-transformed expression data obtained from the vst method in DESeq2 (v.1.14.1). We followed standard WGCNA network construction procedures for this analysis. Briefly, pairwise Pearson correlations between each gene pair were weighted by raising them to a power. To select a proper soft-thresholding power, the network topology for a range of powers was evaluated and a power was chosen that ensured an approximately scale-free topology of the resulting network. The pairwise weighted matrix was transformed into a topological overlap measure (TOM) and the TOM-based dissimilarity measure (1 − TOM) was used for hierarchical clustering. Initial module assignments were determined by using a dynamic tree-cutting algorithm. Pearson correlations between each gene and each module eigengene, referred to as a gene's module membership, were calculated, and a module eigengene distance threshold of 0.25 was used to merge highly similar modules. These co-expression modules were assessed to determine their association with module eigengene expression patterns distinct to tissues or conditions, to gain insight into the potential biological role of each module. GO enrichment analysis was carried out using topGO (v.2.48.0), an R Bioconductor package 79 , with Fisher's exact test; only GO terms with P < 0.05 were considered significant. To identify redundant GO terms, semantic similarity among GO terms was measured using Wang's method implemented in GOSemSim (v.2.22.0) 80 . KEGG pathway enrichment analysis was performed using a hypergeometric distribution test, and pathways with P < 0.05 were considered enriched.
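The TOM transformation at the heart of the WGCNA step can be written out explicitly. This standard-library sketch assumes an unsigned weighted adjacency a_ij = |cor|^beta with a zero diagonal; the real analysis used the WGCNA R package.

```python
def tom_similarity(adj):
    """Topological overlap matrix for a weighted network. For i != j:
    TOM_ij = (sum_u a_iu * a_uj + a_ij) / (min(k_i, k_j) + 1 - a_ij),
    where k_i is the connectivity (row sum of the adjacency). With a
    zero diagonal the u = i and u = j terms vanish, so we can sum over
    all u. The dissimilarity used for clustering is then 1 - TOM_ij."""
    n = len(adj)
    k = [sum(row) for row in adj]
    tom = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            shared = sum(adj[i][u] * adj[u][j] for u in range(n))
            t = (shared + adj[i][j]) / (min(k[i], k[j]) + 1 - adj[i][j])
            tom[i][j] = tom[j][i] = t
    return tom
```

TOM rewards gene pairs that share many strong neighbours, which makes the resulting clusters more robust to noise in any single pairwise correlation.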
Sphagnum diversity panel SNPs and indels The paired-end sequences (2 × 150 bp) of the 35 samples were aligned to the S. angustifolium reference genome using bwa-mem (v.0.7.12) 81 . The aligned bams were deduped (PCR duplicates marked) using picard (v.2.17.2-0) tools ( ). The alignment statistics of the bams were obtained using samtools (v.1.9) 82 . Variant calling was performed using samtools mpileup (v.1.9) and Varscan (v.2.4.3) 83 with a minimum depth of 8. Merging and filtering of the VCF were performed using bcftools (v.1.9) 84 . MDS coordinates were obtained for a random set of 50,000 SNPs obtained by LD pruning (--indep-pairwise 50 50 0.5) in PLINK (v.1.9) 85 . Polyploid samples were identified using variant frequency graphs (Supplementary Fig. 8) within CDS sequences and a minimum depth of 30. GENESPACE comparative genomics Syntenic orthologues and paralogues among S. divinum and S. angustifolium were inferred via the GENESPACE (v.0.9.4) 86 pipeline using default parameters. In brief, GENESPACE groups protein similarity hits into syntenic blocks using MCScanX 87 and uses Orthofinder (v.2.5.4) 88 to search for orthologues/paralogues within synteny-constrained blocks. Orthologue information is projected between reference genomes (Fig. 1a). To search for conserved gene synteny among Sphagnum and other published bryophyte genomes ( C. purpureus , M. polymorpha , P. patens , H. curvifolium , E. seductrix and A. agrestis (Bonn)) 18 , 19 , 20 , 21 , 22 , GENESPACE was run using default parameters. No conserved gene order was found between S. angustifolium and any other bryophyte genome (raw syntenic hits before syntenic block construction are shown in Supplementary Fig. 1, with H. curvifolium–E. seductrix and S. angustifolium–S. divinum shown as positive controls). Similarly, to reconstruct ancestral chromosomes and infer WGDs, protein sequences within S. divinum hierarchical orthogroups were extracted from Orthofinder and aligned using MAFFT (v.7.487) 89 .
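The PLINK --indep-pairwise 50 50 0.5 step above greedily drops one SNP of any pair with r² above 0.5 inside 50-SNP windows. A simplified standard-library sketch (haploid 0/1 genotype vectors, step equal to window size, no window-overlap bookkeeping, so not PLINK's exact algorithm) is:

```python
def r_squared(g1, g2):
    """Squared Pearson correlation between two 0/1 genotype vectors;
    monomorphic SNPs are treated as uncorrelated."""
    n = len(g1)
    m1, m2 = sum(g1) / n, sum(g2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(g1, g2))
    v1 = sum((a - m1) ** 2 for a in g1)
    v2 = sum((b - m2) ** 2 for b in g2)
    if v1 == 0 or v2 == 0:
        return 0.0
    return cov * cov / (v1 * v2)

def ld_prune(genos, window=50, step=50, r2_max=0.5):
    """Greedy window-based LD pruning: within each window, discard the
    later SNP of every pair whose r^2 exceeds the threshold; return the
    indices of retained SNPs."""
    keep = set(range(len(genos)))
    for start in range(0, len(genos), step):
        idx = list(range(start, min(start + window, len(genos))))
        for pos, i in enumerate(idx):
            if i not in keep:
                continue
            for j in idx[pos + 1:]:
                if j in keep and r_squared(genos[i], genos[j]) > r2_max:
                    keep.discard(j)
    return sorted(keep)
```

Pruning to a set of roughly independent SNPs before MDS avoids letting long haplotype blocks dominate the ordination.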
Alignments were converted from amino acids into CDS sequences using pal2nal (v.13) 90 . Pairwise synonymous mutation rates (Ks) among sequences were calculated using seqinr (v.4.2-16) 91 . Mclust (v.5.4.10) 92 was used to estimate the number of normal distributions present (k = 2) within the combined Ks values on the basis of the Bayesian information criterion. Gene pairs (n = 5,094) were assigned to peaks (Ks = 0.406; 0.643) on the basis of their posterior distribution using normalmixEM in mixtools (v.1.2.0) 93 . RLC5 cluster detection Putative centromeres within the S. angustifolium genome were detected using the RLC5 sequence extracted from the C. purpureus genome sequence 18 . Locations on each chromosome were discovered by masking the S. angustifolium genome with 20 bp k-mers from the RLC5 locus. Windows of 5 kb with a step size of 200 bp were slid across each chromosome. RLC5 regions (putative centromeres) are defined as five consecutive windows where >5% of bases are masked (Supplementary Table 2). Land plant phylogeny To place Sphagnum in the broader context of land plant evolution, we obtained protein-coding loci from 36 genomes to reconstruct the phylogeny. We used the primary transcript from each locus for genomes obtained through Phytozome v.13 ( ) (Supplementary Table 17) and the longest isoform from each locus for the other genomes. Orthogroups among all species were inferred using Broccoli (v.1.2) 94 and used to generate a complete set of gene trees estimated under maximum likelihood from DIAMOND (v.0.9.35.136) 95 alignments using FastTree (v.2.1.11) 96 . To exclude paralogues and analyse only putatively single-copy orthologues, the Yang–Smith pipeline 97 was used to refine orthogroups. Briefly, tree-based refinement was performed to (1) mask in-paralogues, isoforms and redundant sequences, (2) trim outlier tips that probably represent assembly artifacts and (3) cut long internal branches to separate paralogous gene copies.
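The RLC5 window scan above (5 kb windows, 200 bp step, at least five consecutive windows with >5% masked bases) can be sketched directly; representing the chromosome as a per-base boolean mask of RLC5 k-mer hits is an assumption made for illustration.

```python
def rlc5_regions(mask, win=5000, step=200, min_frac=0.05, min_run=5):
    """Putative centromere calls from a per-base boolean mask of RLC5
    k-mer hits along one chromosome: slide `win`-bp windows every
    `step` bp, flag windows with > min_frac masked bases and report
    (start, end) spans covering >= min_run consecutive flagged windows."""
    starts = list(range(0, max(1, len(mask) - win + 1), step))
    flagged = [sum(mask[s:s + win]) / win > min_frac for s in starts]
    regions, i = [], 0
    while i < len(flagged):
        if not flagged[i]:
            i += 1
            continue
        j = i
        while j < len(flagged) and flagged[j]:
            j += 1  # extend the run of consecutive flagged windows
        if j - i >= min_run:
            regions.append((starts[i], starts[j - 1] + win))
        i = j
    return regions
```

The run-length requirement suppresses isolated repeat hits, so only dense, contiguous RLC5 clusters (the centromere candidates) are reported.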
For each round of refinement, codon alignments for each orthogroup were generated using translatorX (v.1.1–2) 98 and MAFFT (v.7.487). Alignments were then trimmed to 0.1 column occupancy using trimAl 99 (v.1.2rev59) and maximum likelihood trees were obtained with FastTree using only the first two codon positions due to saturation. Spurious tips were removed from the resulting trees with TreeShrink (v.1.3.2) 100 . Tips belonging to the same sample were then masked using the Yang–Smith script ‘mask_tips_by_taxonID_genomes.py’. To determine a suitable internal branch length for separating paralogous gene copies, we inferred orthologues using the ‘monophyletic outgroups’ method of Yang–Smith and used an adaptive threshold determined by the average branch length separating the outgroup from ingroup in trees that had all outgroups and at least half of the ingroup taxa 101 . These adaptive thresholds were used to cut longer internal branches of the maximum likelihood trees to separate paralogues. The entire refinement process was repeated after separating paralogues, producing a set of 3,230 orthologues using the ‘monophyletic outgroups’ method of the Yang–Smith pipeline and requiring at least half of all taxa to be present. Codon alignments for these orthologues were generated and trimmed to 0.7 column occupancy as described previously. We estimated the species tree using a concatenated alignment of first and second codon positions across orthologues in IQ-TREE 2 (v.2.1.3) 31 and determined the best partitioning scheme and substitution model using ModelFinder 102 . Branch support was determined using the ultrafast bootstrap method with 1,000 replicates. We estimated divergence times using the maximum likelihood tree and 12 fossil calibrations from ref. 103 (Supplementary Table 18 ) with treePL (v.1.0) 104 . The optimal smoothing parameter was chosen using cross-validation. 
To model gene family evolution, we used the original orthogroup delimitations from Broccoli and reconstructed ancestral states of gene family occupancy under Wagner parsimony (gain penalty 1.0) using the program Count (v.9.1106) 105 . We considered a gene family to be expanded (contracted) if the orthogroup occupancy was greater (less) in the most recent common ancestor of Sphagnum than in the most recent common ancestor of mosses. Enrichment analysis of GO ‘biological process’ terms was performed by creating a custom GO term database for S. divinum v.1.1 using AnnotationForge (v.1.34.1) 106 and using the enrichGO function in clusterProfiler (v.4.0.5) 107 to analyse the S. divinum loci associated with expanded (contracted) gene families. The P values from the enrichment tests were adjusted using the Benjamini–Hochberg procedure and a term was considered enriched if the adjusted P was <0.05. Nuclear and chloroplast phylogeny of Sphagnum To reconstruct the evolutionary history of samples within Sphagnum , we performed phylogenetic analyses using protein-coding loci from the two reference genomes ( S. angustifolium v.1.1 and S. divinum v.1.1), 28 haploid resequencing assemblies and the outgroup transcriptomes from Flatbergium novo-caledoniae and F. sericeum 38 . The Sphagnum resequencing libraries BPHAT, BPHAZ, BPHBZ and BPHBS were excluded from these analyses because contamination and/or low coverage prohibited de novo genome assembly. Protein-coding sequences within the Sphagnum diversity panel were predicted using the GeMoMa (v.1.6.4) 108 homology-based prediction pipeline (default parameters) on the basis of a constrained search, where the best hit locations of each Sphagnum transcript were extracted from each assembly with a 500 bp buffer. 
The nuclear phylogeny was generated from the predicted protein sequences and refined using the Yang–Smith pipeline as described above, except that the Yang–Smith script ‘mask_tips_by_taxonID_transcripts.py’ was used to mask tree tips belonging to the same sample. We used the ‘monophyletic outgroups’ method from the Yang–Smith pipeline to identify 16,171 orthologues, requiring at least half of the ingroup taxa to be present. These sequences were concatenated and the phylogeny was estimated using IQ-TREE 2 (v.2.1.3) 31 with model selection and branch support evaluated as described previously. To account for the possible effects of incomplete lineage sorting on phylogenetic reconstruction, we used the quartet-based method of ASTRAL (v.5.7.1) 109 to summarize the maximum likelihood orthologue genealogies in a coalescent framework (Extended Data Fig. 2c ). To estimate the organellar phylogeny for samples in our dataset, raw reads were used to perform de novo assembly of chloroplast genomes with NOVOPlasty (v.2.6.7) 110 . For each plastid genome, contigs were manually aligned to the published S. palustre plastid genome (GenBank KU726621 ) and to each other to identify the inverted repeat boundaries and generate a single incomplete chloroplast genome sequence (with missing data represented by strings of Ns) including the long single-copy region, one copy of the inverted repeat and the small single-copy region. Plastid sequences were aligned with MAFFT and the phylogeny was estimated using IQ-TREE 2 with model selection and branch support evaluated as described previously. Using both the nuclear and plastid maximum likelihood trees, a cophylogenetic plot was generated with the R package phytools (v.0.7-90) 111 in the R statistical programming environment (v.4.1). SNP phylogeny of Sphagnum and introgression In addition to gene-based phylogenetic analyses, we performed SNP-based phylogenetic analyses to reconstruct the evolutionary history of Sphagnum (Extended Data Fig. 
1c ). Reads from resequencing samples were aligned to the S. divinum v.1.1 reference genome as outlined in the Sphagnum diversity panel section. Each sample was split from the multisample VCF and filtered to remove heterozygous sites using BCFtools (v.1.13) 84 . Individual samples were then filtered to keep only sites with a minimum depth of 10 (minDP = 10) and minimum genotype quality of 30 (minGQ = 30) using VCFtools (v.0.1.17). To include outgroup samples, we aligned transcriptome reads from F. novo-caledoniae and F. sericeum 38 to the S. divinum v.1.1 genome using the two-pass mode of STAR (v.2.7.9a) 112 . Before alignment of these transcriptome reads, we used Trimmomatic (v.0.39) to remove bases on the 3ʹ ends of reads in the FASTQ files with a quality score threshold of 25 and kept reads longer than 40 bp after trimming. Duplicates in the STAR alignments were marked and sorted using Picard (v.2.26.2). The alignments were then reformatted using the SplitNCigarReads function in GATK (v.4.2.2.0) 113 , variants were called using VarScan (v.2.3.9) 83 and the VCF file was filtered to keep homozygous sites with a minimum depth of 10 and minimum genotype quality of 30. Individual VCFs were combined to produce one multisample VCF containing only haploid samples with the two outgroups and another multisample VCF containing all Sphagnum samples (including polyploids). Each combined dataset was filtered to keep only autosomal sites with at least 80% of the samples genotyped. Sites were further filtered for a minor allele frequency of at least 0.05 and pruned for linkage disequilibrium using PLINK (v.1.90b6.24) 85 with a window size of 50 variants, a window shift of 10 variants after each pruning step and a variance inflation factor threshold of 2. The dataset containing all Sphagnum samples was used to infer phylogeny using IQ-TREE 2 as previously described. 
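The per-site filters described above (at least 80% of samples genotyped; minor allele frequency of at least 0.05) can be sketched as follows. This is an illustrative stand-in for the VCFtools/PLINK steps, not the actual commands; it omits LD pruning, assumes haploid genotypes encoded as 0 (reference), 1 (alternate) or None (missing), and uses made-up function names.

```python
def passes_filters(genotypes, min_call_rate=0.8, min_maf=0.05):
    """Keep a biallelic site only if at least min_call_rate of samples are
    genotyped and the minor allele frequency among called samples is at
    least min_maf. 'genotypes' is one site across all samples."""
    called = [g for g in genotypes if g is not None]
    if len(called) < min_call_rate * len(genotypes):
        return False                       # too much missing data
    alt_freq = sum(called) / len(called)
    return min(alt_freq, 1.0 - alt_freq) >= min_maf
```

For example, a site called in all 20 samples with one alternate allele (MAF exactly 0.05) is kept, whereas a monomorphic site or one with more than 20% missing calls is dropped.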
The dataset containing only haploid samples was used to test for the presence of admixture due to the robust presence of cytonuclear discordance in other analyses. The program Dsuite (v.0.4 r38) 114 was used to calculate D -statistics (ABBA-BABA) and f 4 -ratios across the genome. As the true phylogeny of Sphagnum is unknown, we report the D min statistic 115 . For a given species trio, D min is the lowest value for D across all possible tree topologies and represents a lower bound for the amount of introgression. A D min score >0 means that the evolutionary relationships between species in a given trio cannot be represented by a strictly bifurcating tree due to excess allele sharing (Extended Data Fig. 2b ). As introgression between ancestral lineages can lead to correlated values of D across extant lineages, we used the f -branch metric to determine whether interspecific gene flow detected using D -statistics could reflect past introgression. We used the maximum likelihood tree from the analysis of the ‘monophyletic outgroups’ orthologue dataset to quantify f -branch. Significance testing was performed using the block jackknife method with 100 blocks and the resulting P values were adjusted using the Benjamini–Hochberg procedure (Extended Data Fig. 2a ). Signatures of selection We sought to detect signatures of natural selection across the genus Sphagnum by comparing the rates of dN and dS substitution in protein-coding genes. The goal of this analysis was to identify genes subject to positive selection during the evolution of hummock and hollow lineages. We obtained orthologues using the Yang–Smith pipeline as described in the previous section on phylogenetic reconstruction within the genus Sphagnum but used the ‘rooted ingroups’ method (16,910 orthologues) requiring at least half of the ingroup taxa to be present. For each gene, an in-frame codon alignment and the corresponding maximum likelihood gene tree was estimated using IQ-TREE 2 as described previously. 
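The ABBA-BABA calculation described above can be sketched with a toy implementation. This is not Dsuite's code (which also performs block-jackknife significance testing); it assumes biallelic sites encoded as derived-allele indicators (0/1) for the trio, with the outgroup fixed for the ancestral allele, and computes D_min as the smallest |D| across the three possible topologies of the trio.

```python
def patterson_d(sites, i1, i2, i3):
    """Patterson's D for the arrangement ((P_i1, P_i2), P_i3).
    ABBA and BABA are the two discordant site patterns."""
    abba = baba = 0
    for s in sites:
        p1, p2, p3 = s[i1], s[i2], s[i3]
        if (p1, p2, p3) == (0, 1, 1):
            abba += 1
        elif (p1, p2, p3) == (1, 0, 1):
            baba += 1
    return (abba - baba) / (abba + baba) if abba + baba else 0.0

def d_min(sites):
    """Lower bound on introgression for a trio: the smallest |D| across
    the three possible tree topologies (the D_min of ref. 115)."""
    return min(abs(patterson_d(sites, *a))
               for a in [(0, 1, 2), (0, 2, 1), (1, 2, 0)])
```

A trio whose only shared derived alleles are concordant with one topology yields D_min = 0, i.e. its relationships can be represented by a strictly bifurcating tree; only excess discordant sharing under every arrangement pushes D_min above 0.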
Stop codons and frameshifts within codon alignments were masked with ambiguous nucleotide characters using MACSE (v.2.05) 116 . Branches of each phylogenetic tree were designated as ‘foreground’ or ‘background’ for these tests, where foreground branches were those that we were interested in testing for evidence of positive selection. We assigned habitat designations to all terminal branches and performed ancestral state reconstruction to label internal branches of each gene tree according to their marginal likelihood of being either hummock or hollow. Ancestral state reconstruction was performed using the rerooting method of ref. 117 under an equal rates model of transition probabilities using the phytools and evobiR (v.1.1) 118 packages in the R statistical programming environment. Two sets of analyses were conducted within Sphagnum : one in which hollow lineages were specified as foreground and another in which hummock lineages were the foreground. We used the method BUSTED 119 , implemented in HyPhy (v.2.5.32) 120 , to test each gene for signatures of positive selection. BUSTED is a branch-site test that aims to detect evidence of gene-wide positive selection along foreground branches of a phylogeny. Sites in each phylogenetic partition (foreground or background) were assigned to one of three omega (ω, or dN/dS) classes, ω₁ ≤ ω₂ ≤ 1 ≤ ω₃, and the likelihood of this model was compared to one constrained by the absence of an ω₃ class on foreground branches. The P values from the BUSTED analyses were adjusted using the Benjamini–Hochberg method and a test was declared significant at P < 0.05, indicating that at least one site on at least one foreground branch was positively selected (Supplementary Table 4 ). We also sought to determine whether genes on chr. 20 have evidence for relaxed purifying selection. We used BUSTED without specifying foreground lineages to quantify dN/dS and performed a Wilcoxon rank sum test to assess whether the mean ratio in genes on chr. 
20 was different from those on autosomes. We considered tests for orthologues that had loci from both reference genomes present (Extended Data Fig. 3a ). Higher values of dN/dS in genes on LG20 relative to those on autosomes could suggest relaxation in the strength of purifying selection, the presence of or increase in the strength of positive selection or a combination of these factors. Chr. 20 genomic diversity To examine the patterns of genomic variation per chromosome within S. divinum populations, DNA from ten genotypes collected across North America (Supplementary Table 9 ) was extracted, as noted above in the DNA extraction and library preparation sections. SNP variation and MDS coordinates from each library were collected after aligning reads to the S. angustifolium reference (as noted in the Sphagnum diversity panel section). Variation on autosomes (chr. 1–19) followed rough geographic distributions, whereas variation on chr. 20 split into two clusters, independent of location. To examine patterns of genetic variation between clusters, we performed sliding window analyses using PopGenome (v.2.7.5) 121 in R v.4.0.3. We calculated nucleotide diversity within and between clusters, as well as F ST between clusters, using a window size of 100,000 bp with a jump of 10,000 bp. Windows were plotted using karyoploteR (v.1.16.0) 122 . Sex chromosome PCR confirmation To find conserved regions of the genome for primer design, the transcript sequence of the gametologue (Sphfalx20G000800) was aligned to the genome assemblies across the diversity panel using BLAT (v.30) 123 with default parameters. Diversity panel samples were binned together on the basis of suspected females and males (using their mapping ratio results (Supplementary Table 13 )). The bounds of the top match alignment were extracted using bedtools (v.2.29.0) 124 getfasta and combined for multiple sequence alignment using MAFFT (v.7.487) 89 . 
Gaps in the alignment were removed using trimAl (v.1.4.rev15; parameters: -gappyout) 99 . The conserved aligned regions among suspected male/female bins were used to design female (forward CCCTAGCTTCCAGCCAATTA, reverse CCTTCTTCTTGGCCTCATCTAC; expected amplicon size 394 bp) and male (forward TCCACAGAGGTGGACATAGA, reverse GTGGGATGAGAACTGGGATAAG; expected amplicon size 444 bp) primer sets for PCR. To determine whether the PCR primers were sex specific, Sphagnum samples ( n = 28) where sexual structures were observed in the field (and confirmed under microscope (for example, antheridia and capsules)) were used for DNA isolation and PCR amplification. DNA was extracted from a single capitulum of each sample using the modified CTAB extraction process described in ref. 125 . For each primer pair, genomic DNA was amplified by PCR in 30 μl volumes containing 50 ng of template DNA, a 0.25 mM concentration of each primer, 1× KAPA HiFi HotStart ReadyMix and molecular grade water. PCR amplifications were performed with the conditions 95 °C for 3 min; 25 cycles of 95 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s; and a final extension of 72 °C for 5 min. The PCR products were run in a 2% agarose gel at 80 V for 2 h (Extended Data Fig. 3c,d ). A GeneRuler 100 bp DNA ladder (Thermo Scientific; SM0241) was included in gel lanes 1 and 20. Gel lane details per sample are found in Supplementary Table 12 . U/V sex chromosome comparative genomics To examine whether the Sphagnum sex chromosomes share an origin with those of other U/V species, we built gene trees to examine the topology. We used peptides for 57 mosses and liverworts from existing de novo assemblies from ref. 18 and genome annotations for C. purpureus v.1.1 (ref. 18 ), P. patens v.3.3 (ref. 22 ) and M. polymorpha v.6.1 (ref. 21 ), and as outgroups we used Azolla filiculoides v.1.1 (ref. 126 ), Salvinia cucullata v.1.2 (ref. 126 ) and Selaginella moellendorffii v.1.0 (ref. 127 ). 
We used OrthoFinder v.2.5.2 (ref. 88 ) in ultrasensitive mode to identify orthogroups that contained genes on chr. 20 in our Sphagnum genomes and were sex-linked in C. purpureus or M. polymorpha . We aligned each gene using MAFFT (v.7.471) 89 with the parameter maxiterate set to 1,000 and using genafpair. We built gene trees using RAxML (v.8.2.12) 128 with 100 bootstrap replicates and the model PROTGAMMAWAG. We visually assessed each tree to determine whether the topologies supported any Sphagnum chr. 20 genes being found in a monophyletic clade with other V-linked genes. QTL mapping Quantitative trait locus (QTL) mapping was performed in R/qtl (v.1.50) 129 using the Haley–Knott regression method on hidden Markov model-calculated genotype probabilities. One-way and multiple-QTL model scans were conducted on log-transformed phenotypes to correct for right-skewed distributions. In all QTL scans, sex was treated as a covariate, as determined both by markers extracted from the genetic map for chr. 20 and by the ratio of reads mapped to the shared region among chr. 20 and Scaffold9707 (as described in the main text). Any sample where the marker data or mapping ratio were ambiguous was assigned ‘NA’. To determine the significance thresholds for each QTL, 1,000 permutations were performed. To estimate the effects of predicted sex (male and female), genotype at each QTL locus (A or B) and treatment (control and low pH) on log-transformed growth, we fit a univariate mixed linear model with all two- and three-way interactions, with a random effect of individual to account for the same individual being measured in two conditions. Sex-specific linear models were run with interactions, and goodness of fit was compared between them. S. angustifolium genes within significance intervals are listed in Supplementary Table 15 . Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. 
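The permutation procedure used above to set genome-wide significance thresholds can be sketched as follows. This is an illustrative stand-in, not R/qtl: the Haley–Knott LOD score is replaced by a squared marker–phenotype correlation, and all names are made up. The idea is the same: shuffle phenotypes relative to genotypes many times, record the genome-wide maximum statistic for each shuffle, and take the (1 − α) quantile of that null distribution as the threshold.

```python
import random

def max_stat(phenos, geno_matrix):
    """Largest squared marker-phenotype correlation across markers
    (an illustrative stand-in for the maximum LOD of a genome scan)."""
    n = len(phenos)
    pm = sum(phenos) / n
    vp = sum((p - pm) ** 2 for p in phenos)
    best = 0.0
    for marker in geno_matrix:                 # one row of 0/1 calls per marker
        gm = sum(marker) / n
        vg = sum((g - gm) ** 2 for g in marker)
        if vg == 0 or vp == 0:
            continue                           # monomorphic marker or flat phenotype
        cov = sum((g - gm) * (p - pm) for g, p in zip(marker, phenos))
        best = max(best, cov * cov / (vg * vp))
    return best

def permutation_threshold(phenos, geno_matrix, n_perm=1000, alpha=0.05, seed=1):
    """Genome-wide threshold: the (1 - alpha) quantile of the maximum
    statistic over phenotype permutations (genotypes held fixed)."""
    rng = random.Random(seed)
    shuffled = list(phenos)
    null = []
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        null.append(max_stat(shuffled, geno_matrix))
    null.sort()
    return null[int((1 - alpha) * n_perm) - 1]
```

A marker that truly drives the phenotype yields an observed statistic well above the permutation threshold, while shuffling destroys the genotype–phenotype association and leaves only chance-level maxima.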
Data availability Additional work to support the findings of this paper can be found in the Supplementary Figures and Tables . Sequencing libraries (Illumina DNA/RNA and PacBio CLR) are publicly available within the SRA. Individual accession numbers are provided in Supplementary Table 16 , with additional data submitted under BioProject PRJNA799298 . Genome assemblies and annotations (v.1.1) are freely available at Phytozome ( ). These whole-genome shotgun projects have been deposited at DDBJ/ENA/GenBank under the accessions JAJQJK000000000 ( S. angustifolium ) and JAKJHR000000000 ( S. divinum ). The versions described in this paper are versions JAJQJK010000000 and JAKJHR010000000. Raw data used for analysis in this paper are freely available on figshare ( ) 130 . Source data are provided with this paper. Code availability Scripts used for analysis in this paper are freely available on figshare ( ) 130 .
A quest to understand how Sphagnum mosses facilitate the storage of vast amounts of carbon in peatlands led scientists to a surprising discovery: The plants have sex-based differences that appear to impact the carbon-storing process. The insight can help researchers better understand how the mosses tolerate stressful environments, including those of a warming climate. The research team co-led by the Department of Energy's Oak Ridge National Laboratory sequenced the genome of two key species of Sphagnum, the mossy plants that dominate peatlands and store about one-third of the world's soil carbon despite covering just 3%–5% of Earth's land surface. Sphagnum mosses are known as the chief engineer of long-term carbon storage in peat, helping keep the bogs wet, acidifying the environment and slowing down plant decay, which in turn retains carbon in the soil. Sphagnum, living and dead, likely store more carbon than any other genus of plant. These unique, soggy peat bogs are under threat, however, from rising temperatures that could dry them and hamper their ability to absorb and retain carbon. In fact, research at the DOE Spruce and Peatland Responses Under Changing Environments, or SPRUCE, whole-ecosystem manipulation experiment in northern Minnesota has revealed that warming conditions result in peat bogs turning from carbon accumulators into carbon emitters. To better comprehend the genetics at play in peat carbon cycling, scientists at ORNL teamed with researchers from the HudsonAlpha Institute for Biotechnology; the DOE Joint Genome Institute, or JGI, a DOE Office of Science user facility at Lawrence Berkeley National Laboratory; Duke University and others to sequence the complete genome of two Sphagnum species—S. divinum and S. angustifolium—present at the SPRUCE site. ORNL scientists also created a pedigree population of the mosses to link genes with Sphagnum traits. The research revealed tiny chromosomes that determine whether the plant is male or female. 
Scientists also found that these sex-determining chromosomes interact with other chromosomes to regulate plant responses to stress. The result, as described in Nature Plants, is important not just to the mosses' survival, but to their role in accumulating and holding carbon over time. "We know that the climate is changing, and it's changing rapidly at high latitudes," said Bryan Piatkowski, an evolutionary biologist and distinguished staff fellow at ORNL who began working on the project in 2018 at Duke. "Basically, the growth rate of these Sphagnum species is influenced by both plant genotype and the environment in a manner that depends on the sex of the plant." The discovery could lead to scientific solutions to help Sphagnum survive a changing climate. "These genomes are coming from the plants that are largely responsible for storing carbon in these ecosystems," Piatkowski said. "Knowledge of their genetics can provide us with insights to help peatlands continue being the carbon sinks they have been for thousands of years, instead of net sources of greenhouse gases like carbon dioxide and methane as the climate warms." "The presence of the sex chromosome together with interactions with non-sex chromosomes and environmental conditions influence the plant's ability to survive and adapt to harsh conditions," said Dave Weston, a molecular plant biologist who led ORNL's efforts. "Understanding those contributions to Sphagnum survival and reproduction will be super important in understanding how resilient this ecosystem is to changing climatic conditions, which cascades to their ability to sequester carbon for long-term storage." The research is a good example of linking genes to ecosystem function and emphasizing the importance of ecological genomics in advancing biology questions, Weston said. 
Piatkowski said the pedigree analysis on the moss species enables new insights into how Sphagnum relates to symbiotic microbes—how relationships with bacteria, for instance, might help plants survive under warmer scenarios in the future. "The genetic resources developed as part of this project are now allowing our team to investigate the benefits of the plant microbiome under stress at the molecular level. It's an exciting area of research not possible without these genomes." The sequencing work and much of the comparative genomics and quantitative genetics was led by HudsonAlpha and JGI, while Duke focused on plant taxonomy, population genetics and plant collections. ORNL conducted the experimentation, performed analysis of the mosses' evolutionary history, collected plant material, performed nucleotide extractions for genome sequencing and developed the pedigree populations that enabled gene-to-trait linkages.
10.1038/s41477-022-01333-5
Physics
Researchers set time limit for ultrafast perovskite solar cells
Johannes M. Richter et al. Ultrafast carrier thermalization in lead iodide perovskite probed with two-dimensional electronic spectroscopy, Nature Communications (2017). DOI: 10.1038/s41467-017-00546-z Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-00546-z
https://phys.org/news/2017-09-limit-ultrafast-perovskite-solar-cells.html
Abstract In band-like semiconductors, charge carriers form a thermal energy distribution rapidly after optical excitation. In hybrid perovskites, the cooling of such thermal carrier distributions occurs on timescales of about 300 fs via carrier–phonon scattering. However, the initial build-up of the thermal distribution proved difficult to resolve with pump–probe techniques due to the requirement of high resolution, both in time and in pump energy. Here, we use two-dimensional electronic spectroscopy with sub-10 fs resolution to directly observe the carrier interactions that lead to a thermal carrier distribution. We find that thermalization occurs dominantly via carrier–carrier scattering under the investigated fluences and report the dependence of carrier scattering rates on excess energy and carrier density. We extract characteristic carrier thermalization times from below 10 to 85 fs. These values allow for mobilities of 500 cm² V⁻¹ s⁻¹ at carrier densities lower than 2 × 10¹⁹ cm⁻³ and limit the time for carrier extraction in hot carrier solar cells. Introduction Hybrid perovskite semiconductors have attracted strong scientific interest for optoelectronic applications. Simple and low-cost solution-based fabrication techniques can be used to obtain poly-crystalline thin films 1 , 2 , 3 , as well as single crystals 4 and colloidal nanostructures 5 . Reports on sharp absorption onsets, long carrier lifetimes and high photoluminescence quantum efficiencies 4 , 6 , 7 , 8 , 9 , 10 , 11 highlight the semiconducting quality of these materials. Despite the current drive for higher optoelectronic device efficiencies and stabilities, little information exists about the fundamental non-equilibrium interactions of photo-excited charge carriers in these semiconductors, which define, for example, the fundamental limits of charge transport. Photo-excitation of a semiconductor with free band continuum states leads to an electron population in the conduction band. 
Initially, the excitations are in a superposition of ground and excited states. Over the dephasing time these coherences are lost and a pure population is left. However, after light absorption this carrier population is not in thermodynamic equilibrium immediately, as the energy of the excited carriers matches that of the absorbed light. The spectrum of the excitation pulse and the selection rules of the semiconductor determine the initial energetic distribution of the carrier population, which can lead to the observation of a “spectral hole” in the absorption spectrum for narrowband excitation 12 , 13 . Ultrafast scattering processes, such as carrier-carrier or carrier-optical-phonon scattering, lead to a broadening of the energetic distribution of the carrier population 14 , 15 , 16 , which can eventually be described by an equilibrium carrier temperature higher than the lattice temperature. This process of exchange of energy among charge carriers is called thermalization. Subsequently, a cooling of the carriers occurs through carrier-phonon and carrier-impurity scattering processes, bringing the carriers and lattice to thermodynamic equilibrium 14 . The carrier thermalization and cooling processes after light absorption depend on the properties of the excited charge carriers and the band structure of the semiconductor. In optoelectronic applications, the carrier scattering rates determine the fundamental limits of carrier transport and electronic coherence. For the prototypical semiconductor GaAs, studies of thermalization dynamics reported timescales ranging from 100 fs to 4 ps 12 , 15 , 17 , and provided insights into dephasing times 12 , band structure 15 and carrier-carrier scattering processes 18 , which affect subsequent carrier cooling and recombination processes. 
In hybrid perovskites, ultrafast transient absorption spectroscopy has been used to study the carrier cooling process, which was found to occur on timescales of hundreds of femtoseconds 19 , 20 , 21 , 22 , with a strong contribution from a hot phonon effect at high fluences 23 . The thermalization process, on the other hand, has not yet been experimentally resolved and is expected to occur on much faster timescales 14 , 15 , 16 . To resolve ultrafast thermalization dynamics, the time evolution of the energy distribution of an initial population excited at a well-defined energy needs to be tracked 24 . The necessary femtosecond time resolution requires the use of ultrashort, broadband light pulses, which compromises the desired high resolution in excitation energy. Two-dimensional electronic spectroscopy (2DES) is an extension of traditional transient absorption techniques that achieves both conditions simultaneously 25 by using a Fourier transform (FT) approach. In 2DES, two pump pulses are used instead of one, with the time delay between them (labeled t₁) being scanned with interferometric precision for a fixed delay between the second pump and the probe (the waiting time, labeled t₂). Taking the FT of the nonlinear optical signal over t₁ provides the excitation frequency axis, with a resolution that is only limited by the t₁ scanning range. Here, we report 2DES experiments on lead-iodide hybrid perovskites using thin films of the prototypical material CH₃NH₃PbI₃. We extract thermalization time constants in the range of below 10 to 85 fs, depending on carrier density and excess energy, and find that the main thermalization process is carrier–carrier scattering. Results Pump–probe experiment with sub-10 fs pulses We prepare thin films of CH₃NH₃PbI₃ on 170 μm thick glass substrates. 
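The FT step described above can be illustrated with a toy calculation: a signal component oscillating during t₁ at a given optical frequency appears, after a discrete Fourier transform over t₁, as a peak at that excitation frequency, with a resolution set by the t₁ scan range rather than the pulse bandwidth. All numbers below (sampling step, scan length, 375 THz oscillation) are illustrative, not the experimental parameters.

```python
import cmath
import math

def excitation_axis(signal_t1, dt):
    """Discrete Fourier transform of the nonlinear signal over the
    pump-pump delay t1; returns (frequencies, amplitudes) up to the
    Nyquist limit. The frequency resolution is 1/(n*dt), i.e. set by
    the t1 scanning range, not the pulse duration."""
    n = len(signal_t1)
    freqs = [k / (n * dt) for k in range(n // 2)]
    amps = [abs(sum(signal_t1[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) / n
            for k in range(n // 2)]
    return freqs, amps

# Toy coherence oscillating at an assumed 375 THz (~1.55 eV, near the
# perovskite band edge) during an 80 fs t1 scan sampled every 0.2 fs.
dt = 0.2e-15
f0 = 375e12
signal = [math.cos(2 * math.pi * f0 * j * dt) for j in range(400)]
freqs, amps = excitation_axis(signal, dt)
peak = freqs[max(range(len(amps)), key=amps.__getitem__)]  # recovers ~375 THz
```

Doubling the t₁ scan range halves the spacing of `freqs`, which is why scanning t₁ interferometrically yields a sharp pump-energy axis even with sub-10 fs (and hence spectrally broad) pulses.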
Figure 1a shows the absorption spectrum of the samples with a bandgap around 760 nm (1.63 eV), which contains contributions from an excitonic transition near the bandgap and free carrier absorption towards shorter wavelengths 26 . Fig. 1 Pump–probe spectroscopy of lead iodide perovskite. a Absorption spectrum of lead iodide perovskite and broadband laser spectrum used for the degenerate pump–probe and 2D electronic spectroscopy (2DES) experiments. The pump spectrum overlaps with the free band continuum as well as the excitonic transition of perovskite. b Pump–probe spectroscopy of perovskite: ΔT/T spectrum as a function of probe wavelength and pump–probe delay (excitation density: 2 × 10¹⁸ cm⁻³). The broadband nature of the short pump pulses makes the observation of a non-thermalized distribution and thermalization difficult in pump–probe experiments. Inset: Pump–probe dynamics at 745 nm probe wavelength. The signal rises over two distinct timescales. We perform pump–probe experiments using sub-10 fs laser pulses, with a spectrum spanning from 550 nm (2.25 eV) to 750 nm (1.65 eV), as shown in Fig. 1a . All measurements were performed in the linear excitation regime, as can be seen in Supplementary Fig. 2a , and in conditions under which the samples showed a high photo-stability (see Supplementary Fig. 2b ). Figure 1b shows the differential transmission (ΔT/T) spectrum as a function of pump–probe delay and probe wavelength. For positive time delays the signal is dominated by a strong photo-bleaching (PB, ΔT/T > 0) between 600 and 750 nm, which we attribute to a phase space filling of the electronic states and thus reduced absorption near the band edge due to excitation of carriers in the band-to-band continuum. The broad negative signal between 550 and 650 nm overlapping the PB has been shown to originate from a transient change in reflectivity 18 , 19 . At negative time delays, i.e. 
when the probe pulse arrives before the pump, we observe spectral oscillations (see Supplementary Fig. 3 for a detailed map) which display an increasing period when approaching time zero 14 , 27 , 28 , defined as the point of temporal overlap of pump and probe pulse. These oscillations, known as the pump-perturbed free-induction decay, originate from a transient grating signal between pump and probe, which is emitted along the probe’s propagation direction and is only present at negative time delays 13 , 29 , 30 . It will therefore not affect the analysis of the signal at positive time delays. The inset in Fig. 1b shows the ΔT/T dynamics near the band edge at 745 nm probe wavelength. We see a rise in the PB signal with two distinct components: a fast rise within the first 130 fs and a slower rise longer than the 500 fs maximum time delay. While the second component is consistent with the reported carrier cooling times 19 , the first component is likely to be due to carrier thermalization. This signal contains contributions from carriers excited at all energies within the broad pump pulse spectrum, which thermalize to a distribution peaked close to the bandgap. For this reason, pump–probe spectroscopy with broadband pulses is unable to resolve an excitation energy-dependent thermalization process. At the same time, broadband pump pulses are required in order to provide the time resolution necessary to measure the observed ultrafast thermalization process. Carrier relaxation probed by 2DES In order to achieve a high resolution in pump energy while maintaining ultrafast time resolution, we perform 2DES experiments with the same broadband sub-10 fs pulses employed for pump–probe. An illustration of the 2DES setup layout can be found in Supplementary Fig. 1 and a detailed description in Supplementary Note 2 . 
We perform 2DES measurements in a range of the waiting time t₂ (which corresponds to the pump–probe delay) from −100 to 500 fs and for excitation densities of 2 × 10¹⁸ cm⁻³ and 2 × 10¹⁹ cm⁻³. The full sets of 2DES maps are available as video files (Supplementary Movies 1 and 2). The lower fluence measurement is performed at an excitation density just below the onset of the hot phonon effect (re-absorption of excited phonons by charge carriers) 19, which allows us to observe carrier cooling. The higher fluence reaches an excitation density where the hot phonon effect slows down carrier cooling, preventing its observation within the 500 fs measurement window. In order to check the quality of the computed 2DES maps, we plot the projection of the 2DES data along the pump wavelength axis in Supplementary Fig. 4 and compare it with the pump–probe data. We find very good agreement between the two measurements, indicating a high reliability of the 2DES data. At negative time delays, we observe spectral oscillations, which again can be identified by their increasing period when approaching time zero. These are likely to be due to the pump-perturbed free induction decay. However, the current scanning of t₁ in our 2DES setup is such that the pulse sequence is changed for negative t₂ time delays, which complicates the interpretation of the data in this regime and places it beyond the scope of this report. At a time delay t₂ = 0 fs, we observe a PB signal dominantly along the diagonal of the 2DES map, as seen in Fig. 2a for a carrier density of 2 × 10¹⁸ cm⁻³. This signal originates from a non-thermalized carrier distribution, which has not yet undergone scattering events, so that we observe a PB at the same wavelengths at which we excite with the pump pulse.

Fig. 2 Carrier relaxation probed with 2D electronic spectroscopy. a–c 2DES maps for time delays t₂ of a 0 fs, b 80 fs and c 500 fs for an excitation density of 2 × 10¹⁸ cm⁻³.
The diagonal of the 2DES maps is indicated with orange lines. d Schematic illustration of carrier relaxation processes. Initially, a non-thermal carrier energy distribution is excited. After undergoing carrier-carrier scattering, carriers form a thermalized distribution with a temperature higher than the lattice. Through carrier-phonon scattering, the carriers subsequently cool down until they reach an equilibrium with the lattice temperature.

Within the first 100 fs after excitation, we observe a spectral broadening of the PB signal for each pump wavelength, as seen for t₂ = 80 fs in Fig. 2b, indicative of carrier thermalization, i.e., the exchange of energy amongst charge carriers. After completion of the thermalization process, the carrier population can be described by a carrier temperature T_C but still remains out of thermodynamic equilibrium with the lattice (typically T_C > T_L, where T_L denotes the lattice temperature). The carrier energy distribution, and thus the ΔT/T signal at high probe energies (energies larger than 1.7 eV, i.e. wavelengths shorter than 730 nm), is expected to follow a Boltzmann function, according to 19

$$\frac{\Delta T}{T}(E) \sim \exp\left(-\frac{E - E_\mathrm{f}}{k_\mathrm{B} T_\mathrm{C}}\right) \quad (1)$$

where k_B denotes the Boltzmann constant and E_f the Fermi energy. We fit Eq. (1) to the ΔT/T spectrum at 625 nm pump wavelength and a delay of t₂ = 52 fs (Supplementary Fig. 5), from which we derive a carrier temperature of 1600 K. At later time delays, carriers undergo carrier-phonon scattering, which will bring the excited carrier population into equilibrium with the lattice and cool the carrier temperature T_C down to ambient levels (about 300 K). The cooling time has been reported to be around 200–400 fs below the onset of the hot phonon effect 19,20. At t₂ = 500 fs time delay, we extract a carrier temperature of 890 K (Supplementary Fig.
5) showing the progressive carrier cooling, in good agreement with published carrier cooling times 19,20,21. This confirms that the lower fluence measurement was performed below the onset of the hot phonon effect. At a time delay t₂ = 500 fs, we observe a carrier population which has now significantly cooled down compared to the carrier distribution at 80 fs, so that the 2DES map is now a vertical stripe corresponding to the band edge in Fig. 2c. The different stages of carrier relaxation are illustrated in Fig. 2d. The clear separation in the time scales of the different relaxation processes can be seen in Fig. 3a by monitoring the dynamics of the peak in the 2DES map corresponding to 655 nm pump and 695 nm probe wavelength: at negative time delays, we observe spectral oscillations characteristic of the pump-perturbed free induction decay. At positive time delays up to 100 fs, the PB signal rises due to thermalization, while it subsequently decays when carriers cool to the band edge under carrier-phonon scattering. This is consistent with the two regimes observed in the rise of the bandgap signal in the inset of Fig. 1b.

Fig. 3 Carrier thermalization in perovskites. a 2D electronic spectroscopy (2DES) kinetics at 655 nm pump and 695 nm probe wavelength, which is on the right side of the 2DES map diagonal and thus between the pump wavelength and the bandgap (excitation density: 2 × 10¹⁸ cm⁻³). We observe three different regimes: a coherent regime at negative times during which we observe spectral oscillations, a thermalization regime during which we observe a rise in signal much slower than the temporal width of the instrument response function and delayed relative to the rise of the diagonal, and a carrier cooling regime during which the signal slowly decays. b ΔT/T spectra for 625 nm pump wavelength extracted from 2DES maps for different time delays t₂ at an excitation density of 2 × 10¹⁸ cm⁻³. Initially, we observe a peak around the pump energy.
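The carrier-temperature extraction via Eq. (1) amounts to a linear fit of log(ΔT/T) versus probe energy in the high-energy tail. A sketch on synthetic data (the energy grid, Fermi-level parameter and noise-free signal are illustrative, not the measured spectrum):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

# Synthetic high-energy tail (E > 1.7 eV) of a Delta T/T spectrum for an
# assumed carrier temperature of 1600 K, mimicking Eq. (1).
T_true = 1600.0                 # K, illustrative
E_f = 1.63                      # eV, placeholder Fermi-level parameter
E = np.linspace(1.70, 1.85, 50)  # probe photon energies in eV
signal = np.exp(-(E - E_f) / (K_B * T_true))

# Fit: log(signal) is linear in E with slope -1/(k_B * T_C).
slope, _ = np.polyfit(E, np.log(signal), 1)
T_fit = -1.0 / (K_B * slope)
print(f"fitted carrier temperature: {T_fit:.0f} K")  # -> 1600 K
```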
Carriers quickly thermalize and form a Boltzmann distribution. c,d Kinetics of carrier thermalization for c 662 nm and d 720 nm pump wavelength at an excitation density of 2 × 10¹⁹ cm⁻³. While the diagonal signal decays, the off-diagonal signals rise, indicating that carriers scatter from the initial sharp energy distribution into a broad statistical energy distribution. The lines represent a mono-exponential fit to the experimental data.

Investigating carrier thermalization

Figure 3b shows the time evolution of the ΔT/T spectra at 625 nm pump wavelength, extracted from the 2DES measurements. Around 0 fs, we observe a peak in the spectrum near the pump wavelength of 625 nm. On either side of this peak, we observe a negative transmission change. For delay times close to the temporal overlap of pump and probe, this effect has been observed for GaAs before and was attributed to many-body edge singularities due to the non-equilibrium distribution function under photoexcitation 14. This effect might also reduce the total photo-bleach intensity at time zero compared to later time delays. A further contributing factor is a negative signal due to a change in reflectivity of the perovskite film 19, which can also be seen at 500 fs time delay, as plotted in Fig. 2c in the probe wavelength range of 550–650 nm. At positive time delays, the peak in Fig. 3b at 625 nm probe wavelength decays, while the signal on either side of the peak rises. This shows that carriers now occupy a broader energetic range of states. We interpret this as carriers undergoing scattering processes, which will eventually lead to a thermal distribution. The peak of the spectrum is therefore now close to the band edge at 760 nm. Carrier thermalization can be visualized from the dynamics in Fig. 3c, where we plot signal traces at different probe wavelengths for 662 nm pump wavelength at a carrier density of 2 × 10¹⁹ cm⁻³.
For the diagonal position at 662 nm probe, we observe a rapid decay at early times after excitation. At the same time, the signal on both sides of the diagonal rises, as seen in the kinetics of the 600 and 745 nm probe. This demonstrates that carriers initially excited at 662 nm scatter into other energetic states and that the carrier distribution function broadens. We plot similar dynamics in Fig. 3d for a pump wavelength of 720 nm. Again, we observe a decay of the diagonal signal and a rise at longer and shorter wavelengths. However, the timescale is now longer than for the 662 nm pump. In the following, we use the decay time of the signal along the diagonal of the 2DES map as a measure for the thermalization time. From mono-exponential fits to this decay, we derive a thermalization time constant of 15 fs for 662 nm pump and 45 fs for 720 nm pump. This difference in thermalization time constants can be explained by the higher kinetic energy of carriers pumped at 662 nm, which makes the time between carrier scattering events shorter than for carriers pumped at 720 nm. It will take a few time constants for the carriers to reach a fully thermal distribution. It is, however, difficult to quantify the time point of completed carrier thermalization due to the asymptotic nature of the process.

Thermalization time is strongly pump energy dependent

To gain more insight into the underlying scattering process that leads to thermalization of photo-excited carriers, we study the fluence and pump wavelength dependence of the diagonal peak decay time. Figure 4a shows the dynamics of different points on the diagonal of the 2DES maps. We observe that the thermalization time is strongly dependent on pump wavelength and measure a thermalization time constant of 85 fs for excitation near the band edge (737 nm), which decreases to below 10 fs when exciting at 581 nm.
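The diagonal-decay analysis described above reduces to a mono-exponential fit. A sketch with synthetic data (the time axis, amplitude and noise-free trace are illustrative; the 15 fs time constant is the value quoted in the text for the 662 nm pump):

```python
import numpy as np

def fit_decay_time(t_fs, trace):
    """Mono-exponential fit A*exp(-t/tau): linear regression on log(trace)."""
    slope, _ = np.polyfit(t_fs, np.log(trace), 1)
    return -1.0 / slope  # tau in fs

# Synthetic diagonal-peak decay with tau = 15 fs.
t = np.linspace(0.0, 60.0, 31)   # pump-probe delay in fs
trace = 1.0 * np.exp(-t / 15.0)  # normalized photo-bleach amplitude

tau = fit_decay_time(t, trace)
print(f"thermalization time constant: {tau:.1f} fs")  # -> 15.0 fs
```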
We determine carrier scattering rates by fitting the dynamics of the diagonal peaks with an exponential decay ∝ exp(−k_scat t). Figure 4b shows the excess energy dependence of the carrier scattering rate for the two excitation densities. In both measurements, we find an increasing scattering rate with increasing excess energy above the bandgap. When comparing the two fluences, we observe that the scattering rates are lower for the lower excitation density. The rate at which carriers scatter during thermalization is thus carrier density and excess energy dependent, indicating that the dominant thermalization process is carrier-carrier scattering.

Fig. 4 Pump wavelength dependence of carrier thermalization. a Pump wavelength dependence of the initial decay of the diagonal signal at an excitation density of 2 × 10¹⁹ cm⁻³. The shorter the pump wavelength, the faster the diagonal decays. The solid lines represent a mono-exponential fit to the experimental data. b Carrier scattering rate vs excess energy over the bandgap at 1.63 eV. The scattering rates increase with excess energy. We observe that the scattering rate shows a fluence dependence, which suggests that the main thermalization process is carrier-carrier scattering. The error bars represent the error of the mono-exponential decay time fits. c Schematic illustration of carrier thermalization under continuous wave illumination in a hot carrier extracting device. Due to the fast timescales of cooling compared to carrier recombination, there will always be a cold carrier population in a perovskite device. This population will quickly thermalize with any newly excited charge carriers. The thermalization time is therefore the limiting factor for hot carrier extracting devices.

Discussion

2DES is an excellent tool for studying ultrafast thermalization processes with high temporal and energetic resolution.
We measured thermalization time constants from below 10 fs to 85 fs for lead iodide perovskite, depending on the excess energies of the carriers. These time scales are fast compared to GaAs, where carrier thermalization times have been measured in the range of 100 fs to 4 ps at room temperature 12,15,17. Interestingly, Hunsche et al. 18 report no significant dependence of thermalization times on carrier density and excess energy for GaAs. The carrier thermalization times we observe for perovskites, however, show a strong dependence on both excess energy and carrier density. For an excess energy of 60 meV at an excitation density of 2 × 10¹⁸ cm⁻³, we measure a thermalization time of 70 fs (Fig. 4b). This is three times faster than the 200 fs reported for GaAs under similar excitation conditions 18. The faster carrier-carrier scattering in hybrid perovskites likely originates from weaker Coulomb screening compared to GaAs. The carrier-carrier scattering rate k_e−e is expected to depend on the optical (high-frequency) dielectric constant ε according to k_e−e ∝ 1/ε² (ref. 31). With a dielectric constant of 6.5–8 (refs 32,33) for perovskite and about 11 for GaAs 34, we expect carrier-carrier scattering in perovskites to be faster by a factor of (11/7.25)² ≈ 2.3 due to the weaker Coulomb screening. This rough estimate already gives reasonable agreement with our extracted scattering rates. However, we expect that detailed theoretical calculations, which are beyond the scope of the current report, will give more accurate values. These fast carrier scattering processes in perovskites eventually also destroy the electronic coherence of the carriers, so that the thermalization times are an upper boundary for the coherence times. Processes that can lead to carrier thermalization include carrier-carrier scattering, carrier-phonon scattering and carrier-impurity scattering.
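The screening estimate above is simple arithmetic; a quick check using the mid-point of the quoted perovskite dielectric range:

```python
# Carrier-carrier scattering rate scales as 1/eps^2 (weaker screening leads to
# faster scattering). Compare perovskite (eps ~ 6.5-8, midpoint 7.25) with
# GaAs (eps ~ 11), as in the text.
eps_perovskite = (6.5 + 8.0) / 2  # 7.25, midpoint of the quoted range
eps_gaas = 11.0

speedup = (eps_gaas / eps_perovskite) ** 2
print(round(speedup, 1))  # -> 2.3: perovskite scattering faster by this factor
```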
The increase of the scattering rates with increasing fluence suggests that the energy redistribution time for each carrier depends on the density of surrounding carriers. Furthermore, we observe a continuous increase of the scattering rates with excess energy, and thus with the kinetic energy of the carriers. This suggests that the dominant scattering process for thermalization under the investigated carrier densities is carrier-carrier scattering. This can include scattering of electrons with electrons and holes with holes, as well as scattering between the two species. There might also be a contribution from carrier-phonon scattering, which would cause higher scattering rates in the hot phonon effect regime. Carrier thermalization will ultimately limit the time for carrier extraction in hot carrier extracting solar cells (see Fig. 4c). Extraction of hot carriers has been reported for perovskite nanocrystals 35 and was recently suggested for polycrystalline lead iodide films 36. Under the pulsed excitation regime used in these reports, hot carrier extraction is only limited by the carrier cooling time. However, under continuous illumination, such as standard sunlight illumination, there will be a large background population of cold carriers in the polycrystalline perovskite layer (around 10¹⁴–10¹⁵ cm⁻³, assuming an absorbed photon flux of around 10¹⁰ cm⁻³ ps⁻¹) due to imperfect carrier extraction and the long carrier lifetimes of hundreds of nanoseconds 4,6 compared to the cooling time of less than 1 ps 19,20,21. This cold population will thermalize with any newly excited charge carriers without a significant change in the temperature of the total carrier population. The excess energy will therefore be rapidly lost after carrier thermalization. Even the longer reported cooling times of around 100 ps for a sub-population of the carriers 37 are still far shorter than the lifetime of the charge carriers, making hot carrier extraction difficult.
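The quoted background density follows from a steady-state balance n ≈ G·τ between the generation rate and the carrier lifetime. A sketch of that estimate (the specific 100 ns lifetime is an assumption within the "hundreds of nanoseconds" range given in the text):

```python
# Steady-state cold-carrier density: n = G * tau (generation balanced by decay).
G = 1e10       # absorbed photon generation rate, cm^-3 ps^-1 (from the text)
tau_ns = 100   # assumed carrier lifetime in ns (text: hundreds of nanoseconds)

tau_ps = tau_ns * 1e3   # convert ns to ps to match the units of G
n = G * tau_ps          # resulting density in cm^-3
print(f"{n:.0e} cm^-3")  # -> 1e+15 cm^-3, within the quoted 1e14-1e15 range
```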
Strong carrier-carrier scattering can limit the carrier mobility μ under high excitation densities. Using the expression

$$\mu = \frac{e}{2 m_\mathrm{eff}} \cdot \tau_\mathrm{scat} \quad (2)$$

we can estimate an upper boundary for the charge carrier mobility. Here, m_eff denotes the effective mass of the carriers (m_eff ≈ 0.15 m_e for perovskites 19,38), e the elementary charge and τ_scat the average scattering time. Using the thermalization time constant near the band edge, in the range of 85 fs, we estimate an upper boundary for the mobility of 500 cm² V⁻¹ s⁻¹ at a carrier density of 2 × 10¹⁹ cm⁻³. Even with the fastest measured thermalization time constant of 8 fs, the mobility would only be limited to 50 cm² V⁻¹ s⁻¹. These values are within the range of reported carrier mobilities at low excitation densities 38. Other carrier momentum scattering processes, such as acoustic phonon scattering, might limit the mobilities to lower values, as we did not probe carrier momentum relaxation in our experiment. We note that these mobility estimates represent an average mobility for electrons and holes, since the extracted thermalization time constant measures both species. We identified the main scattering process to be carrier-carrier scattering, which slows down at lower fluences. Eventually, at low fluences, the thermalization times will be limited by carrier-impurity scattering and carrier-phonon scattering. The latter has been shown to occur on timescales of 200–400 fs 19,20, in agreement with our measurements. In conclusion, we report on carrier thermalization in lead iodide perovskite measured by 2DES. We find thermalization time constants from below 10 fs to 85 fs, with carrier-carrier scattering being the dominant process. Furthermore, we discussed that these timescales are the limiting factor for hot carrier extracting devices.
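The mobility bound from Eq. (2) can be checked numerically (SI constants; the factor 10⁴ converts m² V⁻¹ s⁻¹ to cm² V⁻¹ s⁻¹):

```python
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

def mobility_upper_bound(tau_scat_fs, m_eff_ratio=0.15):
    """Upper bound mu = e * tau / (2 * m_eff), returned in cm^2 V^-1 s^-1."""
    tau = tau_scat_fs * 1e-15          # fs -> s
    m_eff = m_eff_ratio * M_ELECTRON   # m_eff ~ 0.15 m_e for perovskites
    return E_CHARGE * tau / (2 * m_eff) * 1e4  # m^2/Vs -> cm^2/Vs

print(round(mobility_upper_bound(85)))  # ~500 (band-edge thermalization time)
print(round(mobility_upper_bound(8)))   # ~50  (fastest measured time constant)
```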
The reported timescales give an insight into the fundamental carrier–carrier interactions and provide a deeper understanding of the photophysics of these emerging photovoltaic materials.

Methods

Film preparation

For the iodide perovskite films, 3:1 molar stoichiometric ratios of CH₃NH₃I and Pb(CH₃COO)₂ (Sigma Aldrich, 99.999% pure) were made in N,N-dimethylformamide in 20 wt% solution. This solution was spun inside a nitrogen-filled glove box on quartz substrates at 2000 r.p.m. for 60 s, followed by 3 min of thermal annealing at 100 °C in air to form thin films. The samples were encapsulated with a second glass slide and epoxy adhesive (Loctite Double Bubble) under inert conditions to avoid sample degradation and beam damage.

Pump–probe experiment

We perform pump–probe experiments with sub-10 fs laser pulses. The pump–probe setup starts with an amplified Ti:sapphire laser system (Libra, Coherent), which delivers 4-mJ, 100-fs pulses around 800 nm at 1-kHz repetition rate. A portion of the laser with 300 μJ energy is used to pump a non-collinear optical parametric amplifier (NOPA), whose output is subsequently split into pump and probe pulses. The NOPA delivers a pulse with a spectrum spanning from 550 nm (2.25 eV) to 750 nm (1.65 eV), as shown in Fig. 1a, compressed to sub-10-fs duration by multiple reflections on custom-designed double-chirped mirrors (DCMs). Pulse duration is measured by second harmonic generation frequency-resolved optical gating 39. The pump energy is 3 nJ which, focused to a spot size of ≈100 μm, yields a fluence of ≈10 μJ cm⁻².

Two-dimensional electronic spectroscopy

We perform 2DES in the partially collinear pump–probe geometry, according to the scheme shown in Supplementary Fig. 2. 2DES can be seen as an extension of conventional pump–probe spectroscopy, where two identical collinear pump pulses are used and their delay t₁ (coherence time) is scanned, for a fixed value of the probe pulse delay t₂ (population time).
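The ≈10 μJ cm⁻² fluence quoted in the pump–probe methods follows from fluence = pulse energy / spot area if the ≈100 μm "spot size" is read as the beam radius (an assumption; reading it as a diameter would give roughly four times that value):

```python
import math

pulse_energy_J = 3e-9    # 3 nJ pump pulses (from the text)
spot_radius_cm = 100e-4  # 100 um, assumed here to be the beam radius

area_cm2 = math.pi * spot_radius_cm**2          # circular spot area
fluence_uJ_cm2 = pulse_energy_J / area_cm2 * 1e6  # J/cm^2 -> uJ/cm^2
print(f"{fluence_uJ_cm2:.1f} uJ/cm^2")  # -> 9.5 uJ/cm^2, i.e. ~10 uJ/cm^2
```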
The probe pulse is dispersed in a spectrometer, providing resolution in the detection frequency. The Fourier transform (FT) with respect to the pump pulse delay provides the resolution of the signals with respect to the excitation frequency 40,41,42. The 2DES setup uses the same NOPA as the pump–probe setup, which is divided by a beam splitter (90% transmission, 10% reflection) into pump and probe lines. The identical and phase-locked pair of femtosecond pump pulses is generated by the Translating-Wedge-Based Identical-Pulses-eNcoding System (TWINS) technology 43,44. TWINS uses birefringence to impose user-controlled temporal delays, with attosecond precision, between two orthogonal components of broadband laser pulses. Rapid scanning of the inter-pulse delay allows robust and reliable generation of 2DES spectra in a user-friendly pump–probe geometry. In order to determine zero delay between the pump pulses and properly phase the 2DES spectra, part of the pump beam is split off and sent to a photodiode to monitor the interferogram of the pump pulse pair. The additional dispersion introduced by the TWINS on the pump pulse pair is compensated by a suitable number of bounces on a pair of DCMs, and spectral phase correction is verified using a Spatially Encoded Arrangement for Temporal Analysis by Dispersing a Pair Of Light E-fields (SEA-TADPOLE) setup 45. Pump and probe pulses are non-collinearly focused on the sample and the transient transmission change ΔT/T is measured by a spectrometer 46.

Data availability

The experimental data that support the findings of this study are available in the University of Cambridge Repository ( ).
Researchers have quantified the astonishingly high speeds at which future solar cells would have to operate in order to stretch what are presently seen as natural limits on their energy conversion efficiency. The study, which investigated photovoltaic devices based on a type of material called perovskites, suggests that these could achieve unprecedented levels of super-efficiency. But to do so, they will need to turn sunlight into electrons and then extract these as electrical charge within just quadrillionths of a second – a few "femtoseconds", to give them their scientific name. Moving electrons at this ultrafast rate would enable the creation of "hot carrier" cells. These are solar cells which can generate electricity more efficiently by making use of the added kinetic energy that electrons have for a brief moment just after they are created, while they are moving at high speed. The amount of electrical energy that can be extracted from a hot carrier cell, relative to the amount of light absorbed, could potentially match or even break an energy efficiency rate of 30%. In rough terms, this is the maximum energy efficiency that solar cells can conceivably achieve – although standard silicon cells typically have efficiencies closer to 20% in practice. Despite the minuscule fractions of time involved, the authors of the new paper say that it is possible that perovskites could ultimately push this efficiency barrier. The study, published in the journal Nature Communications, was carried out by academics in Italy and the UK. The British team involved researchers in the Cavendish Laboratory's Optoelectronics research group of Professor Sir Richard Friend, a Fellow of St John's College, Cambridge. The Italian team are based at the Politecnico di Milano in the group of Professor Giulio Cerullo.
Johannes Richter, a PhD student in the Optoelectronics group and the paper's lead author, said: "The timescale that we calculated is now the time limit that we have to operate within if we want to create super-efficient, hot carrier solar devices. We would need to get electrons out before this tiny amount of time elapses." "We are talking about doing this extremely quickly, but it's not impossible that it could happen. Perovskite cells are very thin and this gives us hope, because the distance that the electrons have to cover is therefore very short." Perovskites are a class of materials which could before long replace silicon as the material of choice for many photovoltaic devices. Although perovskite solar cells have only been developed within the past few years, they are already almost as energy-efficient as silicon. Partly because they are considerably thinner, they are much cheaper to make. While silicon cells are about a millimetre thick, perovskite equivalents have a thickness of approximately one micrometre, about 100 times thinner than a human hair. They are also very flexible, meaning that in addition to being used to power buildings and machines, perovskite cells could eventually be incorporated into things like tents, or even clothing. In the new study, the researchers wanted to know for how long the electrons produced by these cells retain their highest possible levels of energy. When sunlight hits the cell, light particles (or photons), are converted into electrons. These can be drawn out through an electrode to harvest electrical charge. For a brief moment after they are created, the electrons are moving very quickly. However, they then start to collide, and lose energy. Electrons which retain their speed, prior to collision, are known as "hot" and their added kinetic energy means that they have the potential to produce more charge. "Imagine if you had a pool table and each ball was moving at the same speed," Richter explained. 
"After a certain amount of time, they are going to hit each other, which causes them to slow down and change direction. We wanted to know how long we have to extract the electrons before this happens." The Cambridge team took advantage of a method developed by their colleagues in Milan called two-dimensional spectroscopy. This involves pumping light from two lasers onto samples of the lead iodide perovskite cell in order to simulate sunlight, and then using a third "probe" laser to measure how much light is being absorbed. Once the electrons have collided and slowed down, and are thus starting to take up space in the cell, the amount of light being absorbed changes. The time it took for this to happen in the study effectively allowed the researchers to establish how much time is available to extract electrons while they are still "hot". The study found that electron collision events started to happen between 10 and 100 femtoseconds after light was initially absorbed by the cell. To maximise energy efficiency, the electrons would thus need to reach the electrode in as little as 10 quadrillionths of a second. The researchers are nonetheless optimistic that this might be possible. As well as taking advantage of the intrinsic thinness of perovskite, they believe that nanostructures could be created within the cells to reduce further the distance that the electrons need to travel. "That approach is just an idea for now, but it is the sort of thing that we would require in order to overcome the very small timescales that we have measured," Richter added. The paper, "Ultrafast carrier thermalization in lead iodide perovskite probed with two-dimensional electronic spectroscopy," is published in Nature Communications.
10.1038/s41467-017-00546-z
Medicine
Exploring how social touch affects communication between female animals
Social touch promotes interfemale communication via activation of parvocellular oxytocin neurons. Nature Neuroscience (2020). DOI: 10.1038/s41593-020-0674-y Social contacts. Nature Reviews Neuroscience (2020). DOI: 10.1038/s41583-020-0365-4 Journal information: Nature Neuroscience , Nature Reviews Neuroscience
http://dx.doi.org/10.1038/s41593-020-0674-y
https://medicalxpress.com/news/2020-08-exploring-social-affects-female-animals.html
Abstract

Oxytocin (OT) is a great facilitator of social life but, although its effects on socially relevant brain regions have been extensively studied, OT neuron activity during actual social interactions remains unexplored. Most OT neurons are magnocellular neurons, which simultaneously project to the pituitary and forebrain regions involved in social behaviors. In the present study, we show that a much smaller population of OT neurons, parvocellular neurons that do not project to the pituitary but synapse onto magnocellular neurons, is preferentially activated by somatosensory stimuli. This activation is transmitted to the larger population of magnocellular neurons, which consequently show coordinated increases in their activity during social interactions between virgin female rats. Selectively activating these parvocellular neurons promotes social motivation, whereas inhibiting them reduces social interactions. Thus, parvocellular OT neurons receive particular inputs to control social behavior by coordinating the responses of the much larger population of magnocellular OT neurons.

Main

The hypothalamic neuropeptide OT promotes various types of social behavior 1,2,3. OT is mainly synthesized in neurons of the paraventricular nuclei (PVN) and supraoptic nuclei (SON) of the hypothalamus. The vast majority of these neurons project to the posterior pituitary, where OT is secreted into the blood for essential physiological effects, such as suckling-induced milk letdown and regulation of uterine contractions during birth 4. In parallel, these neurons project axonal collaterals to forebrain regions 5 that express OT receptors (OTRs), including the central nucleus of the amygdala, nucleus accumbens, lateral septum, hippocampus and medial prefrontal cortex 6,7.
Studies employing microdialysis to measure OT concentrations within socially relevant brain regions revealed that OT is released in the bed nucleus of the stria terminalis, lateral septum and central nucleus of the amygdala during social investigation of a conspecific 2,8,9. However, to date, no direct measurement of OT neuron activity during actual social interaction of freely moving conspecifics has been performed, although it was recently reported that social approach triggers calcium release in PVN OT neurons in immobilized, head-fixed male mice 10. Several studies suggest that female–female interactions are predominantly mediated via somatosensory inputs 11,12, whereas other interactions such as male–male, male–female or parental contact may rely on other sensory modalities. However, whether these sensory stimulations can activate OT neurons is unknown because, to date, there has been no direct recording of activity from identified OT neurons during actual social behavior. In an attempt to address these points, in the present study, we performed ex vivo and in vivo manipulation of OT neuron activity primarily in the PVN—the main source of OT in the brain 5—to decipher their involvement in the modulation of social interaction in freely moving female rats.

Results

PVN OT neurons are activated on social interaction

To identify OT neurons electrophysiologically, we injected a recombinant adeno-associated virus (rAAV-OTp-ChR2-mCherry) bilaterally into the PVN to induce expression of the light-sensitive ion channel Channelrhodopsin-2 (ChR2) under the control of the OT promoter 5,13. This resulted in 90.4% of ChR2-expressing neurons being OT positive, showing the high specificity of the infection in the PVN (Extended Data Fig. 1a).
We then recorded individual neurons in the PVN using implanted tetrodes combined with an optic fiber to identify the OT neurons by their electrophysiological response to blue-laser pulses, similar to methods described previously 14. In total, we recorded 90 neurons in 10 adult female rats at the diestrus phase of the ovarian cycle, while monitoring the behavior of the rats and their ultrasonic vocalizations during both open field (OF) exploration and free social interactions (FSIs) (Fig. 1a,b). Of these neurons, 15 (in 5 animals) were stringently identified as single OT neurons (Extended Data Fig. 1e). In the OF arena, the patterns of spiking activity of these neurons (Fig. 1d and Extended Data Fig. 2d) were indistinguishable from those of OT neurons observed under basal conditions in anesthetized rats, because these neurons displayed typical OT neuron characteristics 15. Specifically, they all displayed a low rate of tonic firing (~1 Hz) with a low index of dispersion of spikes (<1), and a distribution of interspike intervals consistent with random spike generation subject to a prolonged relative refractory period. In contrast, during episodes of FSI with an unfamiliar conspecific, the same neurons fired at a higher rate (mean increase 1.5 ± 0.4 spikes s⁻¹, P = 0.001, n = 15; Fig. 1c,d) and more irregularly; the second-by-second firing rates showed a high index of dispersion, reflecting the prominent occurrence of clusters of spikes (Fig. 1d and Extended Data Fig. 1n).

Fig. 1: In vivo recording of individual OT neurons in the PVN. a, Setup for recordings of behavior, ultrasonic vocalizations and neural activity. b, Video-tracking and electrophysiological recording from a rat alone in the OF arena (top) and during FSI (bottom): animal movement path (blue line), location of prominent OT cell activity (colored dots), heatmap of time spent by the rat in different locations.
c , Example firing rate of four identified OT neurons recorded simultaneously during FSI. Red bars indicate periods of social interaction. d , Average firing rate of 15 OT neurons from 5 rats: OF baseline 1.1 ± 0.4 Hz, not socially interacting (not-SI) 1.6 ± 0.3 Hz and SI 2.6 ± 0.2 Hz (OF versus not-SI, P = 0.07; OF versus SI, P = 0.001; not-SI versus SI, P = 0.03; one-way ANOVA). Average index of dispersion on 1-s time bins of 15 OT neurons: OF 0.9 ± 0.2, not-SI 1.4 ± 0.3, SI 3.4 ± 0.4 (OF versus not-SI, P = 0.16; OF versus SI, P = 0.0004; not-SI versus SI, P = 0.001; one-way ANOVA). Average pairwise Pearson’s correlation of spiking activity (1-s time bins) of 17 OT neuron pairs recorded in OF and SI ( P = 0.005, unpaired, two-sided Student’s t -test). e , Frames of recorded videos (top) of experimental rats that were placed either alone (OF), or with a mesh between rats (CSI) or for FSI with a stimulus rat; representative spike raster plots of an OT cell in each condition (bottom). f , Average firing rate of 15 OT neurons while rats underwent OF, CSI and FSI tests (OF versus CSI, P = 0.14; OF versus FSI, P = 0.004; CSI versus FSI, P = 0.006; n = 15 cells; one-way ANOVA). Average index of dispersion on 1-s time bins (OF versus CSI, P = 0.21; OF versus FSI, P = 0.001; CSI versus FSI, P = 0.003; n = 15 cells; one-way ANOVA). Average pairwise Pearson’s correlation of spiking activity (1-s time bins) of 17 OT neuron pairs (OF versus CSI, P = 0.39; OF versus FSI, P = 0.002; CSI versus FSI, P = 0.003; one-way ANOVA). g , Normalized firing rates of OT neurons during each behavior; ‘crawling on top’ and ‘being crawled’ elicited the strongest responses (* P = 0.036, ** P = 0.024; n = 8 cells, one-way ANOVA, followed by Tukey’s post hoc test). h , Representative spike raster plots, averaged response and PSTHs of OT cell activity during ‘crawling on top’ (increased response, P = 0.036, n = 6 cells, Wilcoxon’s test) and ‘being crawled’ (increased response, P = 0.024, n = 6 cells; Wilcoxon’s test) behaviors. Data represented as mean ± s.e.m.
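The index of dispersion used throughout these analyses is the variance-to-mean ratio (Fano factor) of spike counts in 1-s bins; values near 1 are consistent with Poisson-like tonic firing, whereas values well above 1 indicate clustered spiking. A minimal sketch in Python, using synthetic spike trains rather than the recorded data:

```python
from statistics import mean, pvariance

def index_of_dispersion(spike_times, duration_s, bin_s=1.0):
    """Variance-to-mean ratio (Fano factor) of spike counts in fixed bins."""
    n_bins = int(duration_s / bin_s)
    counts = [0] * n_bins
    for t in spike_times:
        b = int(t / bin_s)
        if b < n_bins:
            counts[b] += 1
    m = mean(counts)
    return pvariance(counts) / m if m > 0 else float("nan")

# Synthetic trains: regular ~1 Hz tonic firing versus clustered spiking.
tonic = [i + 0.05 for i in range(60)]                              # one spike per second
bursty = [10 * b + 0.02 * k for b in range(6) for k in range(10)]  # 10-spike clusters

print(index_of_dispersion(tonic, 60))   # -> 0.0 (regular firing)
print(index_of_dispersion(bursty, 60))  # -> 9.0 (clustered firing)
```

Regular firing gives identical counts in every bin (variance 0), while the clustered train concentrates 10 spikes into a few bins, pushing the ratio far above 1.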
As revealed by cross-correlation analysis, OT neurons also displayed increased synchronicity during FSI (mean pairwise correlation: OF, 0.10 ± 0.04; FSI, 0.40 ± 0.08, P = 0.001; Extended Data Fig. 1k–l ). In anesthetized rats, adjacent OT neurons showed no such cross-correlated activity. We also recorded local field potentials in the PVN and found a substantial increase of oscillatory power in the theta (5–10 Hz) frequency band during FSI (Extended Data Fig. 1f–h ). The spike activity of OT neurons tended to be phase-locked with theta oscillations during FSI, but not in the OF arena (Extended Data Fig. 1i,j ). In contrast to OT neurons, non-OT PVN neurons did not show an increase in spiking activity when comparing exploratory behavior and social interaction (Extended Data Fig. 2e–g ). Thus, during FSI with actual physical contact, OT neurons in the PVN were more active and exhibited frequent clusters of spikes, and this activity was correlated among the OT neurons. Social physical contact increases PVN OT neuron activity To examine which component of social interaction activates these neurons, we first recorded their neuronal activity during a chambered social interaction (CSI) 16 . In this setup, experimental and stimulus rats were separated by a transparent wall with small holes (7.5 mm), allowing rats to see, sniff and hear, but not touch, each other (Fig. 1e ). OT neurons showed little change in spiking activity between CSI and baseline recordings in an OF (CSI: 1.4 ± 0.4 spikes s −1 ; OF: 1.0 ± 0.2 spikes s −1 , P = 0.14; Fig. 1f ). When the wall was removed to allow FSI, the same OT neurons displayed a significant increase in activity (FSI: 3.0 ± 0.4 spikes s −1 , P < 0.001; Fig. 1f ), accompanied by an increase in index of dispersion (FSI 3.2 ± 0.4, CSI 1.3 ± 0.3, P = 0.006 versus FSI; OF 0.9 ± 0.2, P = 0.004 versus FSI).
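The pairwise synchrony measure reported in these analyses, Pearson's correlation of 1-s binned spike counts, can be sketched as follows (the binned counts are illustrative, not recorded data):

```python
from math import sqrt

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# 1-s binned spike counts for two neurons (illustrative): shared clusters
# of spikes, as during FSI, raise the pairwise correlation.
cell_a = [1, 0, 4, 5, 1, 0, 3, 4, 0, 1]
cell_b = [0, 1, 3, 5, 0, 1, 4, 3, 1, 0]
print(round(pearson(cell_a, cell_b), 2))  # -> 0.86
```

Independent tonic trains would hover near 0, as in the OF condition, whereas co-occurring spike clusters drive the coefficient toward 1.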
To estimate the amount of OT axonal release due to the increase in firing rate, together with the altered firing pattern, we employed an activity (spike)-dependent model of OT secretion 17 (Extended Data Fig. 2h–j ) that quantitatively captures the features of stimulus–secretion coupling at the nerve terminals. To dissect which sensory modalities activate OT neurons during FSI, we categorized rat social behaviors into ‘sniffing’, ‘head-to-head’ and ‘crawled on top’ or ‘being crawled’ events and constructed peristimulus time histograms (PSTHs) of spiking activity before, during and after the onset of each sequence (Fig. 1g,h ). ‘Crawled on top’ and ‘being crawled’ induced the greatest increases in firing rates ( P = 0.036 and 0.024, respectively; Supplementary Video 1 ), whereas ‘sniffing’, ‘chasing’ and ‘head-to-head’ events induced lesser, non-significant changes (Fig. 1h and Extended Data Fig. 2a–c ). In addition, ultrasonic vocalizations during FSI revealed the appearance of bands between 40 and 90 kHz known to be related to social communication in rats 18 (Extended Data Fig. 3a,b ), but we found no time-locked (in ranges up to ±5 s) correlation between OT neuron activity and ultrasonic vocalizations (Extended Data Fig. 3c–e ). Although we could not discriminate individual ultrasonic vocalizations between the two conspecifics, we hypothesized that OT neurons were activated mainly by physical contact and investigated this further by mimicking gentle, non-nociceptive mechanical stimuli. Gentle non-nociceptive mechanical stimuli trigger OT neuron activation To test whether somatosensory stimulation itself is sufficient to increase OT cell activity, we performed controlled tactile stimulations using compressed air delivery (airpuffs) in isoflurane-anesthetized rats as described previously 19 (Fig. 2a ).
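The PSTHs used to relate spiking to behavioral events align each spike to the event onset, average across events and convert counts to rates. A minimal sketch with a synthetic spike train (the event times and window are illustrative):

```python
def psth(spike_times, event_times, window=(-2.0, 4.0), bin_s=0.5):
    """Peristimulus time histogram: mean firing rate (Hz) around event onsets."""
    lo, hi = window
    n_bins = int(round((hi - lo) / bin_s))
    counts = [0] * n_bins
    for ev in event_times:
        for t in spike_times:
            rel = t - ev - lo  # time from the start of the window
            if 0 <= rel < hi - lo:
                counts[int(rel / bin_s)] += 1
    # Average over events and convert counts to rates (spikes per second).
    return [c / (len(event_times) * bin_s) for c in counts]

# Synthetic train: one spike 1.5 s before each event onset and a brief
# cluster of spikes after it (as seen for 'crawling on top' episodes).
events = [10.0, 30.0, 50.0]
spikes = [t for ev in events
          for t in (ev - 1.5, ev + 0.2, ev + 0.7, ev + 1.2, ev + 1.7)]
rates = psth(spikes, events)
print(rates[1], rates[4], rates[8])  # -> 2.0 2.0 0.0
```

Bins after the event onset carry the elevated rate, while late bins fall back to zero, which is the signature read off the PSTHs in Fig. 1h.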
Stimulation of the skin on the dorsal body region by airpuffs (at three sites) reproducibly activated 19 of 23 (83%) recorded PVN OT neurons (mean increase 1.3 ± 0.5 spikes s −1 , mean P = 0.021; Fig. 2a,b and Extended Data Fig. 4a,b ). Fig. 2: Gentle non-nociceptive mechanical stimuli trigger OT neuron activation. a , Head-fixed rats injected with rAAV-pOT-ChR2-mCherry were stimulated with airpuffs at anterior, central and posterior portions of the dorsal body region, while OT neurons were recorded with an opto-electrode. Top: PSTH example of OT neuron responses to airpuffs. Bottom: normalized PSTHs of 10 (of 23) recorded OT neuron responses to airpuffs in three dorsal body regions ((i) anterior; (ii) central; (iii) posterior); red indicates high spiking activity. ITR, inverted terminal repeat; NS, not significant. b , Top: statistics of average firing rate of OT neuron responses to airpuff stimulations (peak versus baseline, * P = 0.017, ** P = 0.025, *** P = 0.021; n = 23 cells from 8 rats; one-way ANOVA followed by Bonferroni’s post hoc comparison) indicate a significant increase above basal rate (dashed line). Bottom: latency of OT neuron responses to airpuffs. All data shown as average ± s.e.m. c , Fluorogold-injected rats received continuous airpuffs for 10 min and were killed and perfusion-fixed 90 min later. PVN slices were triple stained with antibodies against OT (blue), Fluorogold (red) and c-fos (green). The confocal image shows a Fluorogold-negative parvocellular OT neuron expressing c-fos (1 of 99 such double-labeled neurons observed in 4 rats). Scale bars, 100 and 10 µm (inset). d , Rats injected bilaterally with CAV2-Cre into the SON and rAAV-OTp-DIO-mCherry into the PVN were exposed to airpuffs for 10 min and killed 90 min later. The confocal image shows c-fos expression in a parvOT neuron (mCherry positive, labeled via retrograde CAV2-Cre; 1 of 60 such triple-labeled neurons observed in 4 rats). Scale bars, 100 and 10 µm (inset).
e , f , Viral vectors for recording Ca 2+ signals in GCaMP6s-expressing OT neurons during chemogenetic activation ( e ) or silencing ( f ) of parvocellular OT neurons. g , h , Examples of fiber photometry-based Ca 2+ signals of PVN OT neuron population during airpuff stimulation (orange bars). Top: response to airpuffs 30–60 min after saline injection (control); bottom: response to airpuffs 30–60 min after CNO-induced activation ( g ) or silencing ( h ) of parvOT neurons. i , Average traces of Ca 2+ responses to airpuffs 30–60 min after injection of either CNO to activate (Gq) parvOT neurons or saline (control). Each graphic is the average of 33 airpuff responses (11 airpuffs per animal, n = 3; AUC 0–30 s after airpuffs, relative to control; * P = 0.03, paired, two-sided Student’s t -test). j , Average traces of Ca 2+ responses to airpuffs 30–60 min after injection of either CNO to silence (Gi) parvOT neurons or saline (control). Each graphic is the average of 33 airpuff responses (11 airpuffs per animal, n = 3; AUC 0–30 s after airpuffs, relative to control; ** P = 0.007, paired, two-sided Student’s t -test). All data show average ± s.e.m. Airpuffs applied to the abdominal skin produced few or no changes in OT neuron activity (mean change 0.5 ± 0.3 spikes s −1 , P = 0.33), and there were no detected effects after stimulation of the anogenital area or the whisker pad (Extended Data Fig. 4c ). To test for potential involvement of the olfactory system in PVN OT neuron activation during social interaction, we exposed female rats to either a neutral odor (clean bedding) or a socially relevant odor (urinated-on female bedding). We found that exposure to either odorant did not elicit significant changes in firing rate or spike distribution ( P = 0.34 and 0.48, respectively; Extended Data Fig. 4d–f ) in any of the recorded OT neurons.
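Airpuff responses in these photometry experiments are summarized as the area under the Ca 2+ trace over a fixed window (here 0–30 s after the stimulus). A minimal trapezoidal sketch with a synthetic ΔF/F trace:

```python
def auc(trace, t0, t1, dt):
    """Trapezoidal area under a regularly sampled trace between t0 and t1 (s)."""
    i0, i1 = int(round(t0 / dt)), int(round(t1 / dt))
    seg = trace[i0:i1 + 1]
    return sum((a + b) / 2 * dt for a, b in zip(seg, seg[1:]))

# Synthetic ΔF/F trace sampled at 10 Hz: flat baseline, then a triangular
# response starting at the airpuff (t = 5 s). Not recorded data.
dt = 0.1
trace = [0.0] * 50 + [0.01 * min(i, 60 - i) for i in range(60)] + [0.0] * 40
baseline = auc(trace, 0.0, 5.0, dt)   # pre-airpuff window
response = auc(trace, 5.0, 11.0, dt)  # 0-6 s after the airpuff
print(baseline, round(response, 3))   # -> 0.0 0.9
```

Comparing the post-stimulus AUC with the same window after a saline injection gives the "relative to control" percentages quoted above.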
Hence, we concluded that somatosensory inputs are the dominant signals that activate PVN OT neurons during social interactions. ParvOT neurons respond to gentle non-nociceptive mechanical stimuli Although the overwhelming majority of OT neurons in the PVN (97%) are magnocellular OT (magnOT) neurons, there is also a population of parvocellular OT (parvOT) neurons (~3%) that do not project to the pituitary 20 , but that are crucial for the transmission of nociceptive signals to the magnOT cells 13 . To study whether parvOT neurons are also activated by non-nociceptive stimuli, we applied airpuffs to conscious rats trained and adapted for short-term immobilization. For this purpose, we first used rats that had been injected systemically with the retrograde tracer Fluorogold to label all neurons in the brain that project outside the blood–brain barrier, including in particular magnOT, but not parvOT, neurons. To identify neurons strongly activated by airpuffs, we used c-fos expression as a marker of neuronal activation. Previous studies have found that c-fos expression is induced in an unidentified OT neuron type after social interaction in voles 21 , mice 22 and rats 23 . Immunocytochemistry revealed the presence of c-fos in 30% of parvOT neurons in the PVN of stimulated rats (average 12.4 ± 3 neurons per PVN per hemisphere, n = 4; Fig. 2c and Supplementary Table 1a ), but not in magnOT neurons or in any OT neurons in nonstimulated control rats, indicating that airpuffs applied to the dorsal body region predominantly activate parvOT neurons. In a second step, we labeled parvOT neurons retrogradely by injecting the canine adenovirus serotype 2 (CAV2-Cre) 24 into the SON, and concomitantly injected a Cre-responder rAAV expressing mCherry under the control of the OT promoter into the PVNs.
In line with our previous results, airpuffs induced c-fos expression exclusively in retrogradely labeled mCherry-positive OT neurons (average 47.6%, 7.5 ± 3 neurons per PVN per hemisphere, n = 4; Fig. 2d and Supplementary Table 1b ). To explore the role of parvOT neurons in social interaction and their response to gentle non-nociceptive mechanical stimuli (airpuffs), we chose to manipulate their activity via virally expressed designer receptors exclusively activated by designer drugs (DREADDs). To this end, we used a similar Cre-dependent viral-based strategy employing OTp-DIO-hM4D(Gi)-mCherry and OTp-DIO-hM3D(Gq)-mCherry rAAVs (Fig. 2e,f ). As a first step, we verified the efficiency of DREADDs in modulating parvOT neuron activity ex vivo, showing that hM3D(Gq)-CNO-induced parvOT activation significantly increased the spontaneous action potential (AP) frequency (baseline 0.85 ± 0.39 Hz versus clozapine N -oxide (CNO) 1.31 ± 0.51 Hz, n = 9; P = 0.0039; Extended Data Fig. 5a–c ) and the number of evoked APs (baseline 16.18 ± 3.89 APs versus CNO 22.55 ± 5.66 APs, n = 11; P = 0.0314; Extended Data Fig. 5d–f ). Consistent with this, hM4D(Gi)-CNO-induced inhibition (10 µM, 6 min) significantly decreased both the spontaneous AP frequency (baseline 1.38 ± 0.38 Hz versus CNO 0.36 ± 0.18 Hz, n = 7; P = 0.0469; Extended Data Fig. 5g–i ) and the number of evoked APs (baseline 13 ± 2.02 APs versus CNO 7.75 ± 2.03 APs, n = 11; P = 0.0007; Extended Data Fig. 5j–l ). Building on these ex vivo results, we next performed in vivo recordings in anesthetized animals to better understand the airpuff-induced activation of parvOT neurons. For this purpose, the activity of the PVN OT neuron population was imaged using the GCaMP6s reporter and fiber photometry 25 (Fig. 2e–h ). Then, rats were injected with the DREADD ligand CNO (3 mg kg −1 intraperitoneally) and OT neuron Ca 2+ transients were analyzed.
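Paired baseline-versus-CNO comparisons with small n, such as those above, can be evaluated with an exact sign-flip permutation test; when all nine paired differences go in the same direction the exact two-sided p is 2/2⁹ = 2/512 ≈ 0.0039, the same order as the reported value. The frequencies below are illustrative, not the recorded data:

```python
from itertools import product

def paired_permutation_p(before, after):
    """Exact two-sided sign-flip permutation test on the summed paired difference."""
    diffs = [b - a for a, b in zip(before, after)]
    observed = abs(sum(diffs))
    extreme = 0
    for signs in product((1, -1), repeat=len(diffs)):
        # Under the null hypothesis each paired difference is equally likely
        # to have either sign, so all 2**n sign assignments are enumerated.
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed - 1e-12:
            extreme += 1
    return extreme / 2 ** len(diffs)

# Illustrative spontaneous AP frequencies (Hz) before and during CNO:
before = [0.5, 0.9, 0.7, 1.1, 0.6, 0.8, 1.0, 0.9, 0.7]
during = [0.9, 1.4, 1.0, 1.6, 1.1, 1.2, 1.5, 1.3, 1.2]
print(paired_permutation_p(before, during))  # -> 0.00390625 (2/512)
```

Exhaustive enumeration is feasible here because 2⁹ = 512 sign assignments; for larger n one would sample sign flips instead.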
Chemogenetic activation of the parvOT neurons enhanced the Ca 2+ response to airpuffs (45 ± 9% increase in area under the curve (AUC); P = 0.03; Fig. 2i ). Conversely, chemogenetic inhibition of the parvOT neurons reduced the response to airpuffs (65 ± 5% decrease in AUC; P = 0.009 compared with control; Fig. 2j ). Thus, we concluded that gentle non-nociceptive mechanical stimulation of the dorsal region activates parvOT neurons, which we hypothesized might drive the activity of the larger population of magnOT neurons. Intra-PVN connectivity of parvOT and magnOT neurons To validate this hypothesis, we first looked for direct synaptic contact of parvOT neurons onto magnOT somata and/or dendrites via injection of OTp-DIO-GFP rAAV into the PVN and Cav2-Cre into the SON to specifically label parvOT neurons (Extended Data Fig. 6a,b ) in analogy to a previous study 13 . For three-dimensional (3D) reconstruction of the appositions between axons of parvOT neurons and the somatodendritic domains of magnOT neurons, we employed the IMARIS technique 26 , 27 . This approach allows precise identification of the location of synaptic contact by quantifying overlap with SYN-immunoreactive puncta. By performing IMARIS-assisted Sholl analysis, we found synaptic-like contacts of parvOT neurons with magnOT somata and dendrites (Fig. 3a and Extended Data Fig. 6 ; 6 dendritic contacts, 124 somatic contacts, n = 354) as well as an average chance of innervation of 34.9% (Fig. 3b ), indicating that approximately a third of PVN magnOT neurons receive parvOT input. Based on these anatomical observations, we performed patch-clamp recording for functional validation of the parvOT–magnOT connection via rAAV-OTp-DIO-ChR2-mCherry (to specifically label parvOT neurons) and rAAV-OTp-Venus (to label all OT neurons) injected into the PVN and Cav2-Cre injected into the SON (Fig. 3g ).
First, we confirmed the magnOT nature of recorded neurons through the presence of a hyperpolarizing transient outward rectification, as well as a weak low-threshold depolarization (Fig. 3h ), by comparison to the electrophysiological properties of identified parvOT neurons (Fig. 3c–f ). We observed that stimulation of parvOT neurons evoked responses in 45% of recorded magnOT neurons (9 of 20; Fig. 3i ) with a significant increase in postsynaptic current (PSC) frequencies (baseline 0.158 ± 0.055 Hz versus ChR2 0.346 ± 0.15 Hz, n = 9; P < 0.01; Fig. 3i ). Next, we aimed to visualize Ca 2+ variations in magnOT neurons on DREADD-mediated activation of parvOT neurons via rAAV-OTp-DIO-hM3D(Gq)-mCherry and rAAV-OTp-GCaMP6s injected into the PVN and Cav2-Cre into the SON (Fig. 4a–d ). After application of CNO (10 µM, 1 min), we observed that 40 ± 8% of recorded magnOT neurons responded to parvOT hM3D(Gq) stimulation, again confirming the described anatomical connectivity (Figs. 3b,i and 4d ). In responsive neurons, the number of Ca 2+ transients was significantly increased, a result mirrored by the increase of AUCs (Fig. 4d ). However, the width of these Ca 2+ transients did not show any significant change, indicating that parvOT-induced magnOT activity does not trigger long-lasting Ca 2+ transients, but rather bursts of sharp Ca 2+ peaks, as observed in the example traces (Fig. 4b ). This feature was further confirmed by plotting the time course of Ca 2+ event probability, showing that the probability of observing magnOT Ca 2+ transients is increased over the 4 min after the ex vivo CNO treatment (Fig. 4c ). These data indicate that parvOT neurons synapse on magnOT neurons within the PVN to drive their activity, as similarly reported for SON magnOT neurons in vivo 13 . Fig. 3: Intra-PVN connectivity of parvOT and magnOT neurons. a , Images show the 3D surface reconstruction of OT, GFP and SYN. Circles with dashed lines indicate the overlap of OT, GFP and SYN.
Scale bar, 10 μm. b , Confocal image shows a single magnOT neuron (purple) innervated by a parvOT fiber (green). Scale bar, 10 μm. Dot–plot graph shows that the chance of innervation by parvOT neurons depends on the anatomical location of magnOT neurons within the PVN. Bar graph shows the average chance for magnOT PVN neurons to be innervated by parvOT axons ( n = 214 cells from 3 rats). c , Schema of the viral injection into the SON and PVN plus the electrophysiological recording in the PVN (with pipette) for the recording of parvOT neurons (expressing mCherry + GFP) and magnOT neurons (expressing mCherry). d , Comparison of average and individual points of voltage amplitude between parvOT neurons ( n = 17 cells from 4 rats) and magnOT neurons ( n = 7 cells from 4 rats) for different electrophysiological parameters (AP; parvOT 70.12 ± 2.87 mV versus magnOT 71.65 ± 7.414 mV; P = 0.82, unpaired, two-sided Student’s t -test; transient outward rectification (TOR): magnOT = 4.39 ± 0.79 mV; low threshold depolarization (LTD): parvOT 14.88 ± 0.81 mV versus magnOT 5.93 ± 1.98 mV; ** P = 0.0019, two-sided Mann–Whitney U -test). e , Example responses of three parvOT neurons to a hyperpolarizing current at −100 pA followed by four current injections starting from 0 to 60 pA. f , Example responses of three magnOT neurons to a hyperpolarizing current at −100 pA followed by four current injections starting from 0 pA to 60 pA. g , Left: schematic representation of viral vectors injected in the PVN (OTp-DIO-ChR2-mCherry and OTp-Venus) and the SON (CAV2-Cre) to transduce the expression of ChR2-mCherry in parvOT neurons and of Venus in PVN OT neurons. Right: image showing viral expression in the PVN in one of four rats. Scale bar, 100 μm. h , Average and individual points of voltage amplitude of magnOT neurons ( n = 8 cells from 4 rats) for different electrophysiological parameters: AP, TOR and LTD. 
i , Percentage (45%) of responding magnOT neurons ( n = 9 cells) among all magnOT neurons recorded ( n = 20 cells from 4 rats). MagnOT PSC frequency reversibly increases after parvOT ChR2 photostimulation ( n = 9 cells). Example responses of three magnOT neurons in voltage clamp configuration at −70 mV before and after the ChR2 optogenetic stimulation of parvOT neurons. Baseline versus BL: ** P < 0.001; baseline versus wash: ** P < 0.001, Friedman’s test followed by Dunn’s post hoc test. BL, blue light. All data are represented as mean ± s.e.m. Fig. 4: Magnocellular neurons and their release of OT into blood are controlled by parvOT neurons. a , To allow expression of hM3D(Gq) in parvOT PVN-to-SON-projecting neurons, the SON was infected with a CAV2-Cre vector and the PVN with an rAAV allowing Cre-dependent expression of hM3D(Gq) under the control of the OT promoter. PVN OT neurons were also made to express the calcium indicator GCaMP6s to monitor their calcium transients. b , Example traces of the effect of CNO (10 µM, 6 min) on PVN oxytocinergic neuron calcium activity. c , CNO application increases the number of calcium transients by 5-fold ± 1-fold (solid line: average, shaded area: s.e.m., P = 0.0019, Wilcoxon’s test) and the AUC by 15- ± 9-fold ( P = 0.0043, Wilcoxon’s test) in 40 ± 8% of recorded magnOT neurons ( n = 20 slices from 7 rats, 70 cells). d , Calcium event probability, fraction of responses, number of peaks, AUC and peak duration of calcium events in PVN OT neurons. After CNO application, the probability of observing a calcium peak is increased over ~4 min, but the duration of those peaks remains unchanged (ratio = 2 ± 0.7, P = 0.46, paired, two-sided Student’s t -test). Bar plots show mean ± s.e.m. e – h , Schema of viral vectors injected and implanted optic fiber for fiber photometry recording ( e ) of PVN OT neurons with concomitant DREADD-Gq activation of parvOT neurons.
Example traces ( f ) of recorded GCaMP6s signal from PVN OT neurons before and after CNO-induced activation of parvOT neurons. Normalized AUC of GCaMP6s signal ( g , solid line: average, shaded area: s.e.m., 1-min bin size) of PVN OT neurons showing increase of cellular activity after parvOT activation mediated by CNO intraperitoneal injection (indicated by arrow). The 30-min averaged AUC ( h ) showing a gradual increase of cellular activity (baseline AUC versus 0–30 min, P = 0.0606, versus 30–60 min, * P = 0.0403) that lasts at least 120 min (baseline AUC versus 60–90 min, * P = 0.028; versus 90–120 min, * P = 0.0325, n = 6 rats, two-way ANOVA and Tukey’s corrected post hoc comparison). i – l , Schema of viral vectors injected and implanted optic fiber for fiber photometry recording ( i ) of PVN OT neurons with concomitant DREADD-Gi inhibition of parvOT neurons. Example traces ( j ) of recorded GCaMP6s signal from PVN OT neurons before and after CNO-induced inhibition of parvOT neurons. Normalized AUC of GCaMP6s signal ( k , solid line: average, shaded area: s.e.m., 1-min bin size) of PVN OT neurons showing decrease of cellular activity after parvOT inhibition mediated by intraperitoneal CNO injection (indicated by arrow). The 30-min averaged AUC ( l ) showing a gradual decrease of cellular activity (baseline AUC versus 0–30 min, P = 0.058, versus 30–60 min, *** P = 0.00013) that lasts at least 120 min (baseline AUC versus 60–90 min, 90–120 min, *** P = 0.00019, n = 3 rats, two-way ANOVA and Tukey’s corrected post hoc comparison). m – o , Schema of viral vectors injected and implanted optic fiber for fiber photometry recording ( m ) of PVN OT neurons in control animals (DREADD free) expressing GFP in parvOT neurons. Normalized AUC of GCaMP6s signal ( n , solid line: average, shaded area: s.e.m., 1-min bin size) of PVN OT neurons showing no significant changes in Ca 2+ signal on CNO injection.
No significant changes are detected in 30-min averaged AUC ( o ) up to 120 min ( P = 0.109, n = 2 rats, two-way ANOVA and Tukey’s corrected post hoc comparison). p , Panels of immunostained section of the PVN showing post hoc verification of implanted optic fiber above the PVN and co-localization of immunoreactive GCaMP6s (green, top left), DIO-hM3D(Gq)-mCherry (red, bottom left) and OT (blue, right) in one of six rats. Arrows indicate mCherry-positive parvOT neurons. Scale bars, 100 μm and 10 μm (inset). q , Schema of viral vectors injected for DREADD-Gq activation of parvOT neurons and blood sampling from the jugular vein. r , Chemogenetic activation of parvOT neurons evokes peripheral OT release. Plasma OT (pg ml −1 ) taken under basal conditions and 45 and 90 min after intraperitoneal CNO (3 mg kg −1 ; depicted by arrow; n = 8 rats parvOT Gq group, n = 6 rats control group). At 45 min, ++ P = 0.00093 versus basal (−45 min), ** P = 0.0036 versus control (OTp-mCherry) and, at 90 min, ++ P = 0.002 versus basal, ** P = 0.0017 versus control. Two-way repeated-measures ANOVA with Bonferroni post hoc correction. Data are presented as mean ± s.e.m. Magnocellular neurons and their release of OT into blood are controlled by parvOT neurons Using similar viral strategies, we expressed DREADDs—hM3D(Gq) or hM4D(Gi)—specifically in parvOT neurons and injected rAAV-OTp-GCaMP6s into the PVNs to express the Ca 2+ indicator GCaMP6s in all PVN OT neurons (1,193 of 1,371 OT neurons expressed GCaMP6s, 87 ± 4%, n = 4; Fig. 4p ). This allowed us to monitor the global activity of PVN OT neurons via fiber photometry in isoflurane-anesthetized rats on activation/inhibition of parvOT neurons. Activation of parvOT cells induced an increase in the Ca 2+ fluorescence signal of the PVN OT neuron population beginning approximately 30 min after CNO injection (3 mg kg −1 intraperitoneally) and lasting for >2 h (Fig. 4e–h ).
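Photometry signals like these are typically normalized as ΔF/F0 against a pre-injection baseline and then averaged in 1-min bins, as in the normalized traces of Fig. 4g,k. A minimal sketch with a synthetic trace:

```python
def dff(raw, baseline_idx):
    """ΔF/F0 relative to the mean fluorescence over a baseline index window."""
    i0, i1 = baseline_idx
    f0 = sum(raw[i0:i1]) / (i1 - i0)
    return [(f - f0) / f0 for f in raw]

def bin_means(trace, bin_len):
    """Average a trace in consecutive non-overlapping bins of bin_len samples."""
    return [sum(trace[i:i + bin_len]) / bin_len
            for i in range(0, len(trace) - bin_len + 1, bin_len)]

# Synthetic 1 Hz-sampled signal: 60 s baseline, then a sustained rise
# (as after CNO-mediated activation of parvOT neurons). Not recorded data.
raw = [100.0] * 60 + [110.0] * 120
minute_means = bin_means(dff(raw, (0, 60)), 60)  # 1-min bins
print([round(m, 3) for m in minute_means])       # -> [0.0, 0.1, 0.1]
```

Dividing by the baseline fluorescence makes traces comparable across animals and sessions despite differences in absolute signal intensity.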
Conversely, inhibition of parvOT neurons decreased Ca 2+ fluorescent signals of the general population 30 min after CNO injection and the effect lasted for more than 2 h (Fig. 4i–l ). Administration of CNO did not have any effect on the Ca 2+ signal in control animals lacking the DREADD receptors (Fig. 4m–o ). Considering that the contribution of parvOT neurons to the OT population Ca 2+ signal is negligible (Extended Data Fig. 7a–e ), these results suggest that changes in parvOT neuron activity directly influence the firing pattern of large populations of PVN magnOT neurons. Similar kinetics of Ca 2+ signal fluctuations after CNO activation of parvOT PVN neurons, together with airpuff application, were detected when recording magnOT neurons in the SON, a nucleus that contains no parvOT neurons (Extended Data Fig. 7f–o ). To investigate whether parvOT-induced magnOT activity is followed by actual OT release, we analyzed neurohypophyseal OT release after chemogenetic activation of parvOT neurons. We performed blood sampling from the jugular vein before and after CNO injection (3 mg kg −1 ; Fig. 4q ) and found a significant increase in plasma OT 45 min ( P = 0.00093 versus basal; P = 0.0036 versus OTp-mCherry control) and 90 min ( P = 0.002 versus basal; P = 0.0017 versus OTp-mCherry control; Fig. 4r ) after intraperitoneal CNO injections. Taken collectively, these results indicate that parvOT neurons tightly control magnOT neuron activity in vivo to regulate peripheral OT release. Differential neural inputs to parvOT and PVN magnOT neurons These findings suggest that parvOT neurons act as ‘first responders to somatosensory input’, conveying information to the rest of the PVN OT neuronal population (that is, magnOT neurons). Hence, we asked whether parvOT neurons receive more synaptic inputs than magnOT ones in the PVN.
To assess potential differences in synaptic inputs to parvOT and magnOT neurons, we used IMARIS to quantify the total amount of SYN fluorescence at somata and dendrites. To perform an unbiased analysis, we created spheres that precisely engulfed magnOT and parvOT somata and accounted for individual variances in cell roundness and surface area ( Methods ). Analyzing a total of 104 neurons (56 parvOT, 48 magnOT), we found statistically significant differences at both somatic (Fig. 5a ) and dendritic locations (Fig. 5b ; 5 and 20 µm from the soma), suggesting that parvOT neurons might receive more overall synaptic input. Fig. 5: ParvOT neurons receive more inputs than magnOT neurons. a , Three-dimensional reconstruction of parvOT and magnOT neurons and the quantification of SYN fluorescence. Asterisks (white) indicate the placement of the spheres (yellow) used to quantify the total amount of SYN fluorescence (red). Top: the placement of a sphere around a magnOT neuron soma; bottom: the placement of a sphere onto a parvOT neuron dendrite. Scale bars, 5 μm. b , Quantification of SYN fluorescence in close proximity to parvOT ( n = 56 cells from 3 rats) and magnOT ( n = 48 cells from 3 rats) neurons at somatic (top) and dendritic locations (bottom) considering differences in cellular roundness and surface area (adjusted) (unpaired, two-sided Student’s t -test, *** P < 0.0001). c , e , Virus injection strategy to retrotrace inputs to parvOT ( c ) and magnOT ( e ) neurons, respectively. d , f , Schema representing the proportion of inputs (number of inputs from one brain area/total number of inputs) from each brain area to parvOT ( d ) and magnOT ( f ) neurons, respectively. Brain areas projecting only to parvOT or magnOT neurons are circled in green or purple, respectively. See Supplementary Table 2 for a full list of abbreviations for structures in d and f .
g , Quantification of the total number of inputs to parvOT and magnOT neurons (two-sided Student’s t -test, * P = 0.0223, n = 5 rats per group). h , Proportion of inputs to parvOT and magnOT neurons located in or outside the hypothalamus. Numbers indicate average number of neurons. i , Bar graphs showing the proportion of inputs coming from brain areas that show preferential innervation of parvOT or magnOT neurons. Two-sided Student’s t -test; asterisks indicate significant difference: * P = 0.0315, ** P = 0.0153, *** P = 0.0264, **** P = 0.0299, ***** P = 0.0453 and ****** P = 0.0011; n = 5 rats per group. Data represented as mean ± s.e.m. Next, to uncover the origin of synaptic inputs to parvOT and magnOT neurons, we employed the retrograde trans-synaptic, EnvA-pseudotyped, G-deletion-mutant rabies virus (Rb-GFP 28 ). To specifically distinguish inputs to parvOT and magnOT neurons, we used a double-conditional approach, which allows retrotracing of inputs to OT neurons that project to an area of choice (SON for parvOT and posterior pituitary for magnOT) ( Methods ; Fig. 5c,e ). In both groups of rats, we found green fluorescent protein (GFP)-expressing neurons in numerous brain regions, including the septum, medial preoptic area and amygdala (Fig. 5d,f and Extended Data Fig. 8h ), demonstrating that parvOT and magnOT neurons receive a large number of common inputs (Supplementary Table 2 ). However, we detected the presence of GFP neurons in the paraventricular nucleus of thalamus, insula and habenula only after infection of parvOT neurons (Extended Data Fig. 8i ), whereas GFP neurons in the substantia nigra were found only after primary infection of magnOT neurons (Extended Data Fig. 8j ). In line with the IMARIS analysis, the total number of neurons projecting to parvOT and magnOT neurons was 1,963.6 ± 710 and 694.8 ± 121 neurons, respectively ( P = 0.02; Fig. 5g ).
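The per-region proportions of inputs shown in Fig. 5d,f are each region's retrogradely labeled cell count divided by the total count. A minimal sketch with invented counts (the regions and numbers are illustrative only):

```python
def input_proportions(counts):
    """Fraction of all retrogradely labeled neurons contributed by each region."""
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

# Invented GFP+ cell counts per source region for one animal (not real data).
parv_inputs = {"septum": 300, "MPOA": 250, "amygdala": 200,
               "PVT": 150, "habenula": 100}
props = input_proportions(parv_inputs)
print(props["septum"])  # -> 0.3
```

Normalizing to proportions rather than raw counts allows input maps to be compared between the parvOT and magnOT tracing groups despite their different total numbers of labeled neurons.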
Although we did not find between-group differences in the proportion of inputs coming from hypothalamic and extrahypothalamic areas (Fig. 5h ), the periaqueductal gray and subfornical organ showed preferential innervation of parvOT and magnOT neurons, respectively (Fig. 5i ). This indicates that parvOT neurons receive at least partially distinct, and more numerous, neuronal inputs than magnOT neurons. ParvOT neurons modulate social behavior To test whether this small population of parvOT neurons can modulate social behavior through its effects on the activity of the much more abundant magnOT neurons, we used the previously described chemogenetic approach to silence or activate them during behavioral tests (Fig. 6a ). Three weeks after viral injection, rats were injected intraperitoneally with either CNO (3 mg kg −1 ) or saline 60 min before social interaction tests (Fig. 6b ). Selective inhibition of the parvOT neurons resulted in less social interaction: in the FSI test, the time spent with a conspecific was reduced by 37 ± 6 s (over 5-min sessions, P < 0.001; Supplementary Video 2 ). By contrast, in the CSI test, where no physical contact is allowed, the time spent by the experimental rat approaching the stimulus rat was unchanged (Fig. 6c–e ; n = 15 rats). Conversely, CNO-induced activation of parvOT neurons led to more social interaction: in the FSI test, the time spent with a conspecific increased by 10 ± 6 s ( P = 0.04). In the CSI test, no significant difference in approaching time was measured between saline- and CNO-injected rats (Fig. 6f–h ; n = 9 rats). Fig. 6: Modulation of parvocellular OT neurons alters social behavior. a , Viral vectors used to express genes of interest (hM4D(Gi)-mCherry or hM3D(Gq)-mCherry) in parvOT neurons. b , CNO or saline was injected intraperitoneally (i.p.) 60 min before the behavioral tests.
c , Silencing parvOT neurons (Parvo-Gi group): percentage of time spent by an experimental rat injected with saline or CNO socially interacting with a conspecific in CSI ( P = 0.41) and FSI ( n = 15 rats, *** P = 0.0001, paired, two-sided Student’s t -test), calculated over the 5-min session. d , Temporal dynamics of time spent in social interaction in 1-min bins (second minute, ** P = 0.01; third minute, * P = 0.03; fifth minute, * P = 0.04; n = 15 rats, two-way ANOVA time × treatment). e , Parvo-Gi group: time spent in different social behaviors in rats injected with saline or CNO: crawling on top (** P = 0.008), sniffing (* P = 0.012), chasing ( P = 0.13), head to head ( P = 0.31) ( n = 15 rats, one-way ANOVA, Tukey’s corrected post hoc comparison). f , Activation of parvOT neurons (Parvo-Gq group): average time spent in social interaction with conspecific stimulus in CSI ( P = 0.32) and FSI ( n = 9 rats, * P = 0.04, paired, two-sided Student’s t -test) after CNO or saline injection. g , Temporal dynamics of time spent in social interaction in 1-min bins for rats injected with CNO or saline (fourth minute, P = 0.03; fifth minute, n = 9 rats, P = 0.05, two-way ANOVA time × treatment). h , Parvo-Gq group: time spent in different social behaviors in rats injected with saline or CNO: mounting (** P = 0.006), sniffing ( P = 0.44), chasing ( P = 0.27), head to head ( P = 0.11) ( n = 9 rats, one-way ANOVA, Tukey’s corrected post hoc comparison). i , OTR antagonist intracerebroventricular infusion decreases social interaction even in the presence of pharmacological activation (hM3D-Gq) of parvOT neurons. Percentage of social interaction time is shown in different conditions: saline (control), CNO, OTR antagonist (OTR-a) or CNO + OTR antagonist administration. Time spent in social interaction over 5-min sessions.
Saline intraperitoneally and intracerebroventricularly: 90 ± 19 s; CNO intraperitoneally and saline intracerebroventricularly: 105 ± 15 s, * P = 0.04, n = 6 rats; saline intraperitoneally and OTR antagonist intracerebroventricularly: 54 ± 17 s, ** P = 0.007; CNO intraperitoneally and OTR antagonist intracerebroventricularly: 56 ± 16 s, ** P = 0.009, n = 6 rats (one-way ANOVA and Tukey’s corrected post hoc comparison). j , Control group in which parvOT neurons express GFP. Saline intraperitoneally and intracerebroventricularly: 88 ± 18 s; CNO intraperitoneally and saline intracerebroventricularly: 89 ± 14 s, n = 5 rats; saline intraperitoneally and OTR antagonist intracerebroventricularly: 53 ± 14 s, ** P = 0.008; CNO intraperitoneally and OTR antagonist intracerebroventricularly: 57 ± 15 s, ** P = 0.001, n = 5 rats (one-way ANOVA and Tukey’s corrected post hoc comparison). All data are represented as mean ± s.e.m.

Inhibition and activation of parvOT neurons also had opposite effects on crawling behavior (Fig. 6e,h ). Moreover, after inhibiting parvocellular OT neurons, rats often actively avoided the stimulus rat, a behavior never observed in the control group (Extended Data Fig. 9c ). Control rats injected with the control virus rAAV-OTp-DIO-GFP and receiving saline or CNO showed no behavioral differences (Extended Data Fig. 9a ). To show that the alterations of social behavior induced by DREADD-based manipulation of parvOT neuron activity were indeed mediated by central OT release, the parvOT activation (Gq) experiment was repeated while applying an OTR antagonist 29 by intracerebroventricular infusion (0.75 μg per 5 μl) 30 .
Compared with saline-infused control animals, OTR antagonist-infused animals showed a strong reduction in social interaction (37 ± 18% reduction, P = 0.007, n = 12 rats), regardless of CNO administration, whereas, without the OTR antagonist, CNO application increased social interaction (16 ± 3% increase, P = 0.04, n = 12; Fig. 6i and Extended Data Fig. 9h ). We did not observe a CNO- or OTR antagonist-induced effect on locomotor activity (Extended Data Fig. 9b,d–e ). This result confirms that the downstream effect of CNO-induced activation of parvOT neurons on social behavior is indeed mediated by OT and its receptors. In a second group of rats ( n = 10) expressing GFP in parvOT neurons, administration of an OTR antagonist had a comparable effect in reducing social behavior; as expected, CNO itself did not have any effect on the social interaction of these animals (Fig. 6j and Extended Data Fig. 9i ).

Discussion

In the present study, we provide evidence that somatosensory stimulation in female rats activates parvOT neurons, which subsequently drive the activation of the much larger population of magnOT neurons. Using ex vivo and in vivo approaches, we demonstrated that parvOT neurons synapse on magnOT neurons to elicit a central effect of OT that promotes interfemale communication.

Social touch evokes OT neuronal activity

The use of single-unit in vivo recording precludes discrimination between parvOT and magnOT neurons. However, considering the limited number of parvOT neurons (~30 parvOT cells 13 versus ~1,200 magnOT cells 31 in the PVN of each hemisphere), it is highly likely that we exclusively recorded from magnOT cells. In support, we found that nonaggressive social interactions of female rats and, in particular, physical contacts elicited a coordinated, clustered spiking activity of PVN OT neurons—a pattern that strongly facilitates activity-dependent secretion of OT from nerve terminals of magnOT cells in the pituitary 17 (Extended Data Fig. 2h–j ).
This activity is almost synchronous across recorded OT neurons and is highly correlated with the theta rhythmicity of PVN local field potentials. These coordinated changes in OT neuronal electrical activity occurred only during FSI, which allows physical contact between conspecifics, but not during CSI, where physical touch between animals was prevented by a barrier. Moreover, detailed analysis of PVN OT neuron activity during social behaviors revealed that the highest increase in neuronal firing occurred immediately after (0–10 s) crawling on top or being crawled on (Fig. 1 ), that is, social contacts involving activation of cutaneous sensory nerves. To test whether non-noxious repetitive somatosensory stimulation directly influences PVN OT neuron activity, even in the absence of other stimuli, we applied airpuff stimulation to the skin of the dorsal area of lightly anesthetized rats while measuring action potentials of PVN neurons. Notably, airpuffs induced a significant increase in the spiking activity of most (83%) recorded putative magnOT neurons, but had little or no effect on the activity of non-OT PVN neurons, reinforcing the idea that somatosensory inputs selectively activate magnOT neurons. This finding is in line with previous studies 32 that reported increased OT plasma levels in rats after 10 min of massage-like stroking. Furthermore, the stimulation of low-threshold mechanoreceptors, particularly of touch-sensitive C-tactile afferents, is known to trigger OT release and has been associated with increased social motivation in rodents and humans 33 .

ParvOT neurons control PVN magnOT neuron activity

To shed light on the causal link between somatosensory stimulation and social behavior, we focused our research on a specific subtype of ‘parvocellular’ OT neurons. These neurons communicate with various autonomic centers in the brain stem and spinal cord 20 , and are involved in analgesia during acute pain 13 .
When we applied low-intensity, non-noxious cutaneous stimulation (airpuffs) to awake rats, we observed a sustained increase of c-fos expression in parvOT neurons of the PVN (Fig. 2 ). Of note, we found that airpuffs induced c-fos expression in parvOT, but not in magnOT, neurons. However, the absence of c-fos expression in magnOT cells does not necessarily indicate the absence of increased activity 34 , 35 , 36 . Indeed, only dramatic physiological challenges such as hemorrhage, salt loading or fear evoke c-fos expression in magnOT neurons 34 , 37 . Importantly, during lactation magnOT neurons release large amounts of OT into the peripheral circulation, although an increase in c-fos expression has never been found. Analogously, our findings demonstrate increased OT plasma concentrations after chemogenetic activation of parvOT neurons (Fig. 4 ) via the demonstrated parvOT → magnOT connectivity, although without a detectable c-fos immunosignal in the magnOT neurons releasing the neuropeptide into the blood. Furthermore, chemogenetic activation or inhibition of parvOT neurons via DREADDs resulted, respectively, in an increase or decrease in OT neuron activity in response to airpuff stimulation (Fig. 2 ). This suggests that parvOT neurons can be activated by both nociceptive 13 and non-nociceptive stimuli (in the present study) and subsequently promote analgesia as well as social behavior. Such pleiotropic effects of OT originating from the same parvOT neurons require further investigation. To provide additional evidence that parvOT neurons modulate magnOT neuron activity within the PVN, we employed a combination of immunohistochemistry and 3D anatomical reconstruction. We found 1.5- to 4-fold more synaptic-like contacts on parvOT somata and dendrites compared with the respective compartments of magnOT neurons (Fig. 5 ). This finding is supported by retrograde tracing data, which demonstrate substantially more inputs to parvOT neurons than to magnOT neurons (Fig. 5 ).
Do parvOT neurons control social behavior?

To investigate how parvOT neurons modulate social behavior, we performed chemogenetic manipulation of parvOT neurons by viral means. We found that targeted activation or inhibition of parvOT neurons increased or decreased, respectively, the total time of social interaction with a conspecific. Furthermore, the intracerebroventricular application of an OTR antagonist prevented CNO-induced social interaction after chemogenetic activation of parvOT neurons (Fig. 6 ). This suggests that the excitation of parvOT neurons is transmitted to magnOT cells, which, in turn, project axonal collaterals to numerous forebrain regions 5 , 7 . Given that parvOT neurons exclusively project to the brain stem and spinal cord 38 , our results support the hypothesis that the OTR antagonist blocks the action of OT released from magnOT axons in socially relevant brain regions, resulting in the attenuation of social communication between female conspecifics.

A stable OT-mediated social interaction throughout female life?

Although we exclusively used virgin females in the present study, it will be important to investigate how pregnancy and lactation change the OT-dependent response to somatosensory stimulation. Given the drastic activation of the OT system and the close physical contact with the offspring peripartum 39 , 40 , 41 , it is plausible that the rewarding value of tactile stimulation changes as well. Moreover, due to the interaction of OT and prolactin during the milk letdown reflex 42 , 43 , the nipples might become more sensitive to the suckling of pups, which could translate into a more rewarding experience for mothers. Further studies are needed to assess the intricate interrelationship of social touch, social behavior and social motivation, which requires concomitant actions of OT, serotonin and dopamine within the nucleus accumbens and ventral tegmental area 22 , 44 , in females as well as males.
Accordingly, we found that parvOT, but not magnOT, neurons are innervated by neurons of the insular cortex, a critical region for processing social touch 45 , which could thus be involved in the recruitment of the oxytocinergic system during social tactile stimulation. Taken together, our data extend the current knowledge of the interrelationship between intracerebral OT release, social touch and its behavioral correlates. Our results suggest that parvOT neurons translate mechanosensory information from the periphery into social behavior (Extended Data Fig. 10 ), but the precise ascending pathways from cutaneous nerves—via the parvOT → magnOT circuit—to forebrain regions controlling social behaviors await further investigation. Although intranasal OT application has improved clinical outcomes in schizophrenia, post-traumatic stress disorder and autism spectrum disorder, there is still an ongoing debate about the validity of these findings 46 , suggesting that evoking endogenous OT release might be a more reliable way to exploit the benefits of this neuropeptide. Thus, a combination of gentle touch, social interaction and/or intranasal OT application might be a powerful tool to treat human mental disorders in which the OT system is compromised 47 , 48 .

Methods

Animals

Female Wistar rats aged 4–8 weeks were purchased from Janvier and housed under standard laboratory conditions (12-h light:dark cycle, lights on at 07:00, 22–24 °C, 50 ± 5% humidity, free access to food and water). All experiments were conducted under license G-102/17 (authorized by the German Animal Ethics Committee of Baden-Württemberg, Regierungspräsidium Karlsruhe) in accordance with German law, and under license 3668-2016011815445431 from the French Ministry, in accordance with EU regulations. In total, 194 rats were used, of which 15 were excluded due to mistargeting or insufficient expression of viral vectors (Supplementary Table 3 ).
Viruses

The rAAVs (serotype 1/2) used in the present study (carrying the conserved region of the OT promoter and genes of interest in direct or ‘floxed’ orientation) were cloned and produced as reported previously 5 , 13 , 30 , 49 . HEK293T cells (Addgene, catalog no. 240073) were used for virus production. The rAAVs produced included: rAAV-OTp-mCherry/Venus, rAAV-OTp-ChR2-mCherry, rAAV-OTp-DIO-ChR2-mCherry, rAAV-OTp-DIO-hM3D(Gq)-mCherry, rAAV-OTp-DIO-hM4D(Gi)-mCherry, rAAV-OTp-DIO-GFP, rAAV-OTp-DIO-ChR2-EYFP, rAAV-OTp-GCaMP6s, rAAV-OTp-TCB (TVA fused mCherry) and rAAV-Ef1A-DIO-oG. The CAV2-CMV-Cre was purchased from the Institute of Molecular Genetics 24 . The rAAVretro-Ef1A-Cre was purchased from the Salk Institute Viral Vector Core. Modified rabies virus was produced at the Gene Center Rabies laboratory, Ludwig Maximilian University.

Stereotactic injections of viral vectors

For stereotactic injections of viruses, rats were anesthetized with a mixture of ketamine (65 mg per kg body weight) and xylazine (14 mg per kg body weight). The rAAV genomic titers were determined using the QuickTiter AAV Quantitation Kit (Cell Biolabs, Inc.) and reverse transcription PCR using the ABI 7700 cycler (Applied Biosystems). The rAAV titers were between 10 9 and 10 10 genomic copies µl −1 . We injected 300 nl per PVN. CAV2-Cre was purchased from the Institute of Molecular Genetics (diluted to 10 9 genomic copies µl −1 , 300 nl per SON). Viruses were injected via a glass pipette into the target regions at 150 nl min −1 using a syringe pump, as previously described 50 . Coordinates were chosen in accordance with a rat brain atlas 51 for the PVN (anteroposterior (A/P): −1.8 mm; mediolateral (M/L): ±0.3 mm; dorsoventral (D/V): −8 mm), SON (A/P: −1.8 mm, M/L: ±1.2 mm, D/V: −9.25 mm) and posterior pituitary (A/P: −5.6 mm, M/L: ±0.1 mm, D/V: −10.5 mm).
Verification of the injection and implantation sites and of the expression of genes of interest was performed post hoc in all rats in 50-μm sections containing the PVN and SON ( Histology ).

Ex vivo experiments

Slice preparation

Four to eight weeks after injection of the viruses into the PVN and SON of 5-week-old virgin female rats, animals were anesthetized with ketamine (Imalgene, 90 mg kg −1 ) and xylazine (Rompun, 10 mg kg −1 ) administered intraperitoneally. Then, intracardiac perfusion was performed with an ice-cold, N -methyl- d -glucamine (NMDG)-based artificial cerebrospinal fluid (aCSF), which contained (in mM): NMDG (93), KCl (2.5), NaH 2 PO 4 (1.25), NaHCO 3 (30), MgSO 4 (10), CaCl 2 (0.5), 4-(2-hydroxyethyl)-1-piperazine-ethanesulfonic acid (Hepes) (20), d -glucose (25), l -ascorbic acid (5), thiourea (2), sodium pyruvate (3), N -acetyl- l -cysteine (10) and kynurenic acid (2). The pH was adjusted to 7.4 using either NaOH or HCl after bubbling with 95% O 2 /5% CO 2 gas. Rats were then decapitated, the brains removed and 350-µm-thick coronal slices containing the hypothalamus were cut using a Leica VT1000s vibratome. Slices were warmed for 10 min in 35 °C NMDG aCSF and then placed for a minimum of 1 h in a holding chamber at room temperature containing normal aCSF. Normal aCSF, also used during all ex vivo experiments, was composed of (in mM): NaCl (124), KCl (2.5), NaH 2 PO 4 (1.25), NaHCO 3 (26), MgSO 4 (2), CaCl 2 (2) and d -glucose (15), adjusted to a pH of 7.4 with HCl or NaOH and continuously bubbled with 95% O 2 /5% CO 2 gas. All aCSF was checked for osmolality and kept between 305 and 310 mosmol. For electrophysiology and calcium-imaging experiments, slices were transferred from the holding chamber to an immersion recording chamber and superfused at a rate of 2 ml min −1 .
CNO-containing solution (10 µM) was bath applied through a 6-min-long pumping episode, corresponding to several times the volume of the recording chamber (a maximum of two applications per slice). All ex vivo experiments were conducted at room temperature.

Patch-clamp recording

Whole-cell patch-clamp recordings were visually guided by infrared oblique light videomicroscopy (DM-LFS; Leica), using 4- to 9-MΩ borosilicate pipettes filled with a KMeSO 4 -based intrapipette solution composed of (in mM): KMeSO 4 (135), NaCl (8), Hepes (10), ATPNa 2 (2) and GTPNa (0.3). The pH was adjusted to 7.3 with KOH, and the osmolality was checked to be 300 mosmol l −1 and adjusted with sucrose if needed. Data were acquired with an Axopatch 200B amplifier (Axon Instruments) and digitized with a Digidata 1440A (Molecular Devices). Series capacitances and resistances were compensated electronically. Data were sampled at 20 kHz and low-pass filtered at 5 kHz using the pClamp10 software (Axon Instruments). Further analysis was performed using Clampfit 10.7 (Molecular Devices) and Mini Analysis v.6 software (Synaptosoft) in a semi-automated fashion (automatic detection of events with chosen parameters followed by visual validation).

Evoked activity

To test the effects of CNO on neuronal excitability ex vivo, we used a current-step method. For this purpose, we made PVN → SON projecting neurons express the DREADD receptors by injecting the rats’ SON with a CAV2-Cre virus and the PVN with an OT-specific Cre-inducible DREADD construct (rAAV-OTp-DIO-hM4D(Gi)-mCherry or rAAV-OTp-DIO-hM3D(Gq)-mCherry). Then, 6–8 weeks after infection, coronal slices were prepared and fluorescent neurons (indicative of viral expression) were selected for whole-cell patch-clamp recordings. After establishing the whole-cell configuration, neurons were recorded in current clamp mode with 0 pA injected.
To test the effects of DREADD activation—hM4D(Gi) or hM3D(Gq)—neurons were subjected to the following current steps: for hM4D(Gi), neurons received an injection of a −100-pA current to hyperpolarize the membrane (reaching −100 mV) before each step; the steps started at −80 pA and increased by 20 pA, reaching +120 pA. For hM3D(Gq), the steps started at −20 pA and increased by 10 pA, reaching +80 pA. To quantify the effects of DREADD activation, the number of action potentials (APs) triggered by these steps was evaluated.

Spontaneous activity

To evaluate the effect of DREADD activation on neuronal activity, neurons were also recorded for 2 min before and after CNO exposure in voltage or current clamp mode. In these cases, the frequency of the PSCs or APs was quantified.

Identification of parvOT and magnOT neurons

The identity of PVN OT neurons was verified through a current-step protocol 52 ; this method has been used in several other studies to discriminate between parvocellular and magnocellular neurons 13 , 53 , 54 , 55 , 56 . Neurons received an injection of a −100-pA current to hyperpolarize the membrane (reaching −100 mV) before each step; the steps started at 0 pA and increased by 20 pA, reaching +60 pA. To discriminate between parvOT and magnOT neurons, we measured the hyperpolarizing notch and the T-outward rectification.

ChR2 stimulation of SON parvOT neurons

To decipher the connection between SON parvOT neurons and PVN magnOT neurons, we used an optogenetic strategy. First, we identified PVN OT neurons by injecting the rats’ PVN with an rAAV containing the coding sequence of the fluorescent marker Venus under the control of the OT promoter (OTp-Venus).
Then, we aimed to specifically activate the SON → PVN projection by using a combination of two rAAVs: the first was injected into the SON and induces the expression of Cre recombinase in SON-targeting neurons, and the second was injected into the PVN to allow the expression of ChR2 in OT neurons after Cre-dependent recombination (OTp-DIO-ChR2-mCherry). Then, 6–8 weeks after infection, coronal slices containing the PVN were prepared and Venus + /mCherry − neurons were selected for whole-cell patch-clamp recordings. This combination of fluorescent markers allowed us to select PVN OT neurons that do not directly target the SON. Neurons were recorded for 2 min in voltage clamp to establish the baseline frequency of PSCs; we then performed optogenetic stimulation of ChR2-expressing oxytocinergic neurons by applying light pulses (10 ms at 30 Hz for 20 s) using an X-Cite 110LED light source (Excelitas Technologies) through a GFP filter, controlled with a Clampex-driven TTL. Neurons were also recorded during the ChR2 stimulation to verify that they did not express ChR2 themselves. Finally, we continued recording for 10 min after the stimulation to observe the effect of SON parvOT neuron stimulation on the PSC frequency of the recorded neurons. PSCs were detected using Mini Analysis v.6 software (Synaptosoft).

Calcium imaging

To test whether the chemogenetic activation of PVN → SON projecting oxytocinergic neurons can modify intra-PVN microcircuit activity, we used an ex vivo calcium-imaging approach. To this end, the rats’ SON was infected with CAV2-Cre and the PVN with a virus allowing the expression of hM3D(Gq) under the control of the OT promoter after Cre-dependent recombination (OTp-DIO-hM3D(Gq)-mCherry). We also made PVN OT neurons express the calcium indicator GCaMP6s using a third viral vector (rAAV-OTp-GCaMP6s).
Then, 6–8 weeks after infection, coronal slices containing the PVN were prepared and neurons that were positive for GCaMP, but negative for mCherry, were recorded. Fluorescence microscopy was performed with a Zeiss Axio Examiner microscope with a ×40 water immersion objective (numerical aperture of 1.0), mounted with an X-Light Confocal unit—CRESTOPT spinning disk. Images were acquired at 5 Hz with an optiMOS sCMOS camera (Qimaging). Neurons within a confocal plane were illuminated for 100 ms at wavelength λ = 475 nm using a Spectra 7 LUMENCOR. The different hardware elements were synchronized through the MetaFluor software (Molecular Devices, LLC). Neuronal calcium levels were measured in hand-drawn regions of interest (ROIs). In all recordings, the Fiji rolling ball algorithm was used to increase the signal:noise ratio. Recordings in which movements/drifts were visible were discarded. Offline data analysis was performed using a customized Python-based script. First, a linear regression and a median filter were applied to each trace. Peaks were then detected using the ‘find_peaks’ function of the SciPy library. More precisely, a fluorescence variation was identified as a calcium peak if its prominence exceeded twice the s.d. and if the maximum peak value surpassed three fluorescence units. ROIs with zero calcium variation were excluded from the analysis. The remaining ROIs were considered living neurons, and the number of peaks was quantified before and after drug application. The AUC was estimated as the sum of the local area of each peak, to avoid a biased AUC estimation due to baseline drift. All these data were normalized to the duration of the recording, and neurons were labeled as ‘responsive’ when their AUC or number of peaks increased by at least 20% after drug application. Because the post-stimulation period is longer than the baseline period, the probability of observing a spontaneous calcium peak is higher post-stimulation.
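The detection steps described above (linear detrend, median filter, then SciPy’s find_peaks with a prominence criterion of twice the s.d. and an absolute-height criterion of three fluorescence units) can be sketched as follows. This is an illustrative reconstruction, not the authors’ actual script; the function name and the synthetic trace are ours:

```python
import numpy as np
from scipy.signal import find_peaks, medfilt

def detect_calcium_peaks(trace):
    """Detrend a fluorescence trace (linear regression), median-filter it,
    and keep peaks whose prominence exceeds twice the s.d. of the filtered
    trace and whose value exceeds three fluorescence units."""
    t = np.arange(len(trace))
    slope, intercept = np.polyfit(t, trace, 1)   # linear regression
    detrended = trace - (slope * t + intercept)  # remove slow baseline drift
    filtered = medfilt(detrended, kernel_size=5) # median filter
    peaks, props = find_peaks(
        filtered,
        prominence=2.0 * np.std(filtered),  # prominence > 2 x s.d.
        height=3.0,                         # peak value > 3 fluorescence units
    )
    return peaks, props

# Synthetic 5-Hz trace: noisy baseline plus two clear transients
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.2, 1000)
trace[200:210] += 5.0
trace[600:610] += 6.0
peaks, _ = detect_calcium_peaks(trace)
```

On such a trace, both transients pass the prominence and height thresholds while baseline noise does not.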
To avoid this bias, neurons with only one calcium peak during the whole recording were removed from the responsive neurons. The response probability was calculated as the number of responsive neurons with at least one calcium event per time bin (30 s) divided by the number of responsive neurons in each recording. Finally, all data were normalized per slice and this result was used as the statistical unit. All data were compared using paired statistical analysis (before versus after drug application) and the results are expressed as a ratio (baseline:drug effect), with a ratio of 1 meaning neither an increase nor a decrease in the measured parameter.

In vivo opto-electrode recordings

Implantation of opto-electrodes

Silicon probes (A1x32-Poly3-10mm, NeuroNexus) containing a 32-channel single shank combined with an optic fiber (diameter: 100 µm, Thorlabs) (opto-electrodes) were used for acute (anesthetized and head-fixed) recordings. For freely moving recordings, 32-channel chronic opto-electrodes were handmade, consisting of eight tetrodes and one specially designed microdrive. The microdrives and tetrodes were manually assembled as described previously 57 . The tetrodes were made with 0.0005-inch tungsten wires (Stablohm 675, California Fine Wire Company). Eight tetrodes and an optic fiber (200 µm, Thorlabs) were loaded into the microdrive via a guiding tube and arranged in parallel order. Assembled opto-electrodes were gold plated, and the impedance of each channel was measured to be between 250 and 350 kΩ. For implantation, rats were anesthetized with 2% isoflurane and placed in a stereotaxic frame. The bregma position and horizontal level were aligned during the implantation. Opto-electrode tips were implanted into the target location and the microdrive was fixed on the skull with six microscrews (Knupfer) and dental cement (Paladur, Heraeus Kulzer).
Optogenetic identification of OT neurons

Electrophysiological signals were acquired with an Open-Ephys acquisition board and sampled at 30 kHz. To identify ChR2 + OT neurons in the PVN, pulses of blue light (λ = 473 nm, DreamLasers) were delivered through the optic fiber while the extracellular electrical activity of the neurons was recorded. The pulse train was controlled by a pulse generator (Master9, A.M.P.I.); pulses had a duration of 10 ms and were applied at stimulation frequencies of 1, 5 and 10 Hz. In each session, the laser output at the optic fiber terminal was measured as 20 mW mm −2 . Neurons with a clear time-locked response to light pulses (spikes within 2–8 ms of pulse onset) were classified as OT neurons (Extended Data Fig. 1e ).

Analysis of spike waveforms

Spike sorting was done manually in Plexon Offline Sorter v.4.0 (Plexon, Inc.) in tetrode mode. The raw data were filtered at 250 Hz with a Butterworth high-pass filter, and waveform detection thresholds were placed at −0.5 to +0.8% of the analog-to-digital converter range (approximately −0.32 to −0.51 mV), depending on the signal:noise ratio. Magnocellular neurons have spikes with a width at half-amplitude of about 0.5 ms, an absolute refractory period of about 2.5 ms and a long relative refractory period, reflecting a prominent hyperpolarizing afterpotential 17 . Therefore, the sample length for waveform detection was set to 1.4 ms (with a 400-µs pre-threshold period; at the 30-kHz sampling rate, a single waveform consists of 42 data points, so the tetrode waveform of each unit comprises 168 data points), and the dead time was set to 1.2 ms. Next, the detected waveforms were aligned at the valley point, where the neurons were depolarized at their maximum, and principal components and slice features of the waveforms were plotted and projected into 3D space for visual separation of clusters into presumptive single units.
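As a quick check of the waveform-window arithmetic above (1.4 ms at 30 kHz, a 400-µs pre-threshold period, four tetrode channels), a small helper can be written; the function name is ours, for illustration only:

```python
# Waveform window arithmetic at a 30-kHz sampling rate
FS_HZ = 30_000

def ms_to_points(ms, fs_hz=FS_HZ):
    """Convert a window length in milliseconds to sample points."""
    return round(ms * fs_hz / 1000)

sample_points = ms_to_points(1.4)   # 1.4-ms window -> 42 points per channel
pre_threshold = ms_to_points(0.4)   # 400-us pre-threshold period -> 12 points
tetrode_points = 4 * sample_points  # 4 channels -> 168 points per tetrode unit
```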
The timestamp feature was used to exclude mechanical noise recorded at the same time across the four channels of a tetrode. Across different recording sessions (for example, OF and social interaction), we analyzed whether the features of the spike waveforms remained consistent with the 3D plot results. After clustering, units with a minimum interevent interval exceeding 2,500 µs were accepted as single hypothalamic neurons. Units displaying minimum interevent intervals between 1,200 and 2,500 µs were recognized as arising from multiple neurons and excluded from the statistics of the study.

Statistical analyses of spike patterning

From segments of stationary activity recorded in OF conditions, interspike interval distributions were constructed to verify that these were consistent with the distributions characteristic of OT neurons under basal conditions recorded in anesthetized rats 58 . To quantify the regularity of spike firing, we calculated the index of dispersion (IoD) of the firing rate in 1-s bins as the ratio of the variance to the mean. For events that arise from a random process that is invariant in time, the IoD is equal to 1, independent of the mean rate and the binwidth. If events arise more regularly than chance, the IoD is <1; if they are more variable than expected by chance—as when spikes occur in clusters or bursts—the IoD is >1. In OT neurons, spikes cannot arise purely randomly because of the refractory period, and the IoD decreases slightly with increasing firing rate because, at higher rates, the relative refractory period is larger as a proportion of the mean interspike interval. The IoD also decreases with increasing binwidth because OT neurons display a prolonged activity-dependent afterhyperpolarization that acts to stabilize mean firing rates over a timescale of seconds.
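A minimal sketch of the IoD computation described above, in Python (the helper name and the synthetic spike train are ours, not from the study):

```python
import numpy as np

def index_of_dispersion(spike_times, binwidth_s=1.0):
    """IoD of the firing rate: variance/mean of spike counts in fixed bins.
    ~1 for a time-invariant random (Poisson) process, <1 for regular firing,
    >1 for clustered or bursting activity."""
    spike_times = np.asarray(spike_times)
    edges = np.arange(0.0, spike_times.max() + binwidth_s, binwidth_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    return np.var(counts) / np.mean(counts)

# A perfectly regular 5-Hz train has identical 1-s bin counts, so IoD = 0
regular = np.arange(0.1, 100.0, 0.2)
iod = index_of_dispersion(regular)
```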
Collectively, the known intrinsic membrane properties of rat OT neurons, as tested through computational models, imply that, if spikes arise from a purely random and time-invariant process, the IoD of the firing rate in 1-s bins will be in the range 0.3–1.0 for neurons firing at up to 6 spikes s −1 , depending on firing rate and individual variability in membrane properties 17 , 58 .

LFPs in the PVN

Local field potentials (LFPs) were sampled at 1 kHz with a low-pass filter. Subsequent analysis was done using customized MATLAB (MathWorks) scripts. We estimated the power spectral density of the LFP signal using a multi-taper approach based on Thomson’s method (‘pmtm’ function). Spectrograms were computed for each recording using the standard ‘spectrogram’ function. The power of theta oscillations was calculated as the average of the power spectral densities in the 5–10-Hz range. Phase-lock analysis was performed to investigate the relationship between theta oscillations in the PVN and the timing of spikes of OT neurons. The phase of the oscillatory activity was extracted with the Hilbert transform (‘hilbert’ function) and converted into angle degrees. Then, we used Rayleigh tests for circular uniformity, which indicate whether there is a significant correlation between the timing of spikes and a specific phase of the theta cycle (Extended Data Fig. 1h–j ).

In vivo fiber photometry

Optic-guided implantation of optic fibers

We injected a modified adeno-associated virus (rAAV-OTp-GCaMP6s) bilaterally into the PVN or SON to transduce expression of the Ca 2+ indicator GCaMP6s in OT neurons, and verified that this was expressed cell specifically (87 ± 4% of OT neurons, n = 1,371 neurons, n = 4 rats, Fig. 4p ).
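The theta-phase-locking analysis described in the LFP section above was done in MATLAB; a hedged Python equivalent, using a Butterworth band-pass, the Hilbert transform and a first-order approximation of the Rayleigh p-value, might look like this (function names and the synthetic example are illustrative, not the authors’ code):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase_locking(lfp, spike_times_s, fs=1000.0, band=(5.0, 10.0)):
    """Band-pass the LFP to the theta range, take the instantaneous phase
    via the Hilbert transform, read out the phase at each spike time and
    run a Rayleigh test for circular uniformity."""
    b, a = butter(3, np.asarray(band) / (fs / 2.0), btype="bandpass")
    theta = filtfilt(b, a, lfp)                         # zero-phase filtering
    phase = np.angle(hilbert(theta))                    # radians in [-pi, pi]
    idx = (np.asarray(spike_times_s) * fs).astype(int)  # spike sample indices
    spike_phases = phase[idx]
    n = len(spike_phases)
    r = np.abs(np.mean(np.exp(1j * spike_phases)))      # mean resultant length
    p = np.exp(-n * r**2)                               # first-order Rayleigh p
    return r, p

# Spikes locked to the peaks of an 8-Hz oscillation give r close to 1
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 8.0 * t)
spikes = 1.0 / 32.0 + np.arange(8, 72) / 8.0   # one spike per cycle, ~1-9 s
r, p = theta_phase_locking(lfp, spikes)
```

For uniformly scattered spike phases, r approaches 0 and the Rayleigh p stays large, so the test rejects uniformity only for genuinely phase-locked units.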
Optic fibers (M127L01, diameter 400 µm, numerical aperture 0.50, length 10 mm, Thorlabs) were implanted ~100 µm above the dorsal border of the PVN (A/P: −1.8 mm; M/L: 0.35 mm; D/V: −7.85 mm) or SON (A/P: −1.25 mm; M/L: 1.90 mm; D/V: −9.0 mm) under 1.5% isoflurane anesthesia. Four 1-mm screws (Knupfer) and a metal implant guide (OGL, Thorlabs) were attached to the skull with OptiBond FL (Kerr) and fixed using dental cement (Paladur, Heraeus Kulzer). During implantation, the implantable cannula was held in an adaptor (ADAL3, Thorlabs) attached to a stereotactic holder, while the other end of the cannula was connected through a pre-bleached patch cord (Thorlabs, FP400URT) to the photodetector and light-emitting diode (LED) of the fiber photometry system (FOM, NPI Electronic). The digitized photometry signal was monitored and recorded via a digital input/output board (Open-Ephys) to the DAQ system (Open Ephys), with a 0.1- to 20-Hz bandpass filter and a 20-s timescale set to visualize the Ca 2+ signal online, while the cannula tip was gradually lowered into the PVN at 1 mm min −1 . When the optic fiber tip was close to the PVN region where GCaMP6s was expressed, a slight increase in the signal baseline and minor spontaneous fluctuations could be visually detected. During implantation, rats were under 1.5% isoflurane anesthesia and the body temperature was kept stable at 37 °C by a heating plate (Temperature controller 69001, RWD Life Science). The LED power of the fiber photometry system was set at a constant value between 5 and 10 mW mm −2 . Fiber photometry recordings were conducted after 1 week of recovery from the implantation. Fiber photometry raw data were sampled at 30 kHz in the Open-Ephys GUI and analyzed with customized MATLAB scripts.
Fiber photometry data analysis The digitized optical signal acquired from the fiber photometry system was first downsampled to 3,000 Hz and then low-pass filtered (MATLAB ‘butter’ function) at 10 Hz to exclude higher-frequency noise. Second, to correct baseline drift due to photobleaching of fluorophores, we fitted the signal with a polynomial curve (MATLAB ‘polyfit’ function) and subtracted it from the signal. Next, we smoothed the signal with a Savitzky–Golay filter (MATLAB ‘smooth’ function, option ‘sgolay’). For each experiment, the signal F was converted to Δ F / F 0 by: $$\Delta F/F\left( t \right) = \frac{{F\left( t \right) - F_0}}{{F_0}}$$ where t is time and F 0 was calculated as the average value of F over a 600-s recording at the start of the experiment. The data were subdivided into 1-min bins and the mean Δ F / F 0 was calculated for each bin. We detected calcium transients similar to those reported in our previous study 59 . Finally, we calculated the AUC of the Ca 2+ signal (MATLAB ‘trapz’ function) to estimate the cumulative fluorescence for each bin and normalized the AUC to values from 0 to 1. Values of normalized AUC were displayed in 1-min bins and averaged in 30-min bins. The ratios of AUCs between experimental and control conditions were used for quantitative analysis and are referred to as the ‘relative AUC increase’. Application of airpuffs and OT neuron response Airpuffs from a pressurized air can (Toolcraft, 20793T, 400 ml) were applied through a stiff micropipette tip with a 2-mm opening positioned 10–15 mm above the skin in an area of ~2 cm 2 . A plastic cover with 2-cm holes was placed above the rat’s body to restrict the area of stimulation. The controlled air pressure was 1.139 g cm −3 . During in vivo electrophysiology recordings, at each stimulation point, five airpuffs (duration 0.2 s, interval between puffs 1 s) were delivered in sequence with intervals of 1 min between sequences (Fig. 2a and Extended Data Fig. 4 ).
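The photometry preprocessing described above (polynomial drift correction, Savitzky–Golay smoothing, ΔF/F0 conversion with F0 from the first 600 s, trapezoidal AUC per bin normalized to 0–1) can be sketched in Python. Function names, window lengths and the polynomial order are illustrative assumptions, not the original MATLAB pipeline:

```python
import numpy as np
from scipy.signal import savgol_filter

def to_dff(f, fs, baseline_s=600.0, poly_order=3):
    """Drift-correct, smooth and convert a raw photometry trace to dF/F0,
    with F0 taken as the mean of the first `baseline_s` seconds."""
    t = np.arange(len(f)) / fs
    # Fit and subtract the slow baseline drift caused by photobleaching
    drift = np.polyval(np.polyfit(t, f, poly_order), t)
    f = f - drift + drift.mean()          # remove drift, keep overall level
    # Savitzky-Golay smoothing (window length here is an assumption)
    f = savgol_filter(f, window_length=101, polyorder=3)
    f0 = f[: int(baseline_s * fs)].mean()
    return (f - f0) / f0

def auc_per_bin(dff, fs, bin_s=60.0):
    """Trapezoidal AUC of the Ca2+ signal in fixed-length bins, normalized 0-1."""
    n = int(bin_s * fs)
    aucs = []
    for i in range(0, len(dff) - n + 1, n):
        seg = dff[i : i + n]
        aucs.append((seg[:-1] + seg[1:]).sum() / 2.0 / fs)  # trapezoidal rule
    aucs = np.asarray(aucs)
    span = aucs.max() - aucs.min()
    return (aucs - aucs.min()) / span if span > 0 else np.zeros_like(aucs)
```

A simulated transient riding on a bleaching drift lands, after correction and normalization, in the expected 1-min bin with AUC 1, while quiescent bins sit near 0.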
During fiber photometry recordings, one airpuff (duration 1 s) was applied every 1 min (Fig. 2e–j ). OT neuron response to airpuff stimulations We applied airpuffs to the skin of three regions of the rat’s dorsal body area (anterior, central and posterior parts), two regions of the rat’s ventral area (abdomen and anogenital area) and the whiskers on both sides. We considered a recorded neuron responsive to airpuff stimulations if its average firing rate after stimulus onset (from 0 s to 2 s) increased by at least twice the s.d. of the baseline activity (2 s before stimulus onset). Onset of the response was calculated as the time at which the firing rate of a responsive neuron increased by 1× the s.d. of the baseline activity. We recorded the activity of n = 23 OT neurons in response to airpuffs applied to the rat’s dorsal body area, which showed variable response latencies of up to 30 s (Extended Data Fig. 4a,b ); 10 of those neurons exhibited a response within 1 s after stimulus onset and are shown in Fig. 2a,b . Blood sampling and plasma OT measurements To monitor neurohypophyseal OT release after chemogenetic activation of hypothalamic parvOT neurons, we performed blood sampling from the jugular vein in urethane-anesthetized rats. After surgery, rats were placed on a heating pad for the rest of the experiment to maintain constant body temperature. The jugular vein catheter was connected to a 1-ml syringe containing sterile heparinized saline (30 U ml −1 ). Blood (500 μl) was drawn 45 min before, and 45 min and 90 min after, intraperitoneal CNO (Fig. 4q,r ), and each sample was replaced by 500 μl sterile saline. After each sample, the catheter was filled with heparinized saline to avoid blood clotting. Blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes (Bayer) on ice and centrifuged (5,000 g for 10 min at 4 °C), and 200-μl plasma samples were stored at −80 °C before extraction and OT quantification by radioimmunoassay.
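The airpuff-responsiveness criterion defined above (mean rate in the 0–2 s post-stimulus window must exceed the mean 2-s baseline rate by at least twice the baseline s.d.) reduces to a few lines; a hedged Python sketch with illustrative names:

```python
import numpy as np

def is_airpuff_responsive(spike_times, stim_onsets, pre=2.0, post=2.0, n_sd=2.0):
    """Apply the criterion described above: a neuron is responsive if its mean
    firing rate in the `post`-s window after stimulus onset exceeds the mean
    baseline rate (`pre` s before onset) by at least n_sd baseline s.d."""
    spikes = np.asarray(spike_times)
    # Per-trial firing rates in the baseline and response windows
    base = np.array([np.sum((spikes >= t0 - pre) & (spikes < t0)) / pre
                     for t0 in stim_onsets])
    resp = np.array([np.sum((spikes >= t0) & (spikes < t0 + post)) / post
                     for t0 in stim_onsets])
    return bool(resp.mean() >= base.mean() + n_sd * base.std())
```

A neuron that adds a burst of spikes after each onset passes the criterion; one whose post-stimulus rate stays within baseline variability does not.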
The OT content in extracted plasma was analyzed by a highly sensitive radioimmunoassay with a detection limit of 0.1 pg and cross-reactivity <0.7% (RIAgnosis) 60 , 61 . Behavior Starting 14 d before behavioral tests, vaginal smears were collected to monitor the ovarian cycle. Rats in the metestrus, proestrus and estrus phases were excluded from the experiments and reintroduced once they reached diestrus. Behavioral tests were conducted in an arena (made of material nonabsorbent to odors) with dimensions 60 × 60 × 60 cm 3 under dim light conditions (<20 lux; lux-meter SO 200K, Sauter). On the day before the test, the experimental rat was exposed to the arena for 15 min for habituation. The arena was cleaned with 70% ethanol after each session to eliminate residual odors. Experimental and stimulus rats were housed in separate cages and had not previously encountered each other before the social interaction tests. The same rat was exposed to social interaction tests twice on separate days, each time with a different social stimulus rat, so that the experimental paradigm always represented interaction with a novel, unfamiliar conspecific. OF test The experimental rat was placed in a corner of the arena and was allowed to freely explore the environment. These tests served as a ‘baseline’ for social interaction tests. FSI test The experimental and the stimulus rats were placed in opposite corners of the arena at the same time and were allowed to freely interact with each other and/or explore the environment. CSI test For this test, two transparent Plexiglas meshes (dimensions 20 × 30 × 1 cm 3 ) provided with three openings/holes (dimensions 15 × 0.75 cm 2 ) were placed in two opposite corners of the arena. The mesh separated a small triangular area (14 × 14 × 20 cm 3 , corresponding to ~3% of the total area of the arena) from the rest of the arena (central compartment).
The experimental rat was placed in the central compartment, whereas the stimulus rat was placed in one of the two small compartments. The two rats were able to see, hear and smell each other through the openings, but were not able to touch each other. Chemogenetic inhibition or activation of parvocellular OT neurons by DREADD To selectively activate or inhibit parvocellular OT neurons, rats were injected with rAAV-OTp-DIO-hM3D(Gq)-mCherry (Parvo-Gq group), rAAV-OTp-DIO-hM4D(Gi)-mCherry (Parvo-Gi group) or rAAV-OTp-DIO-GFP (Parvo-GFP control group) into the PVN and CAV2-Cre into the SON, as previously described 13 . All groups (Parvo-Gq, Parvo-Gi and Parvo-GFP) were subjected to the same protocol. On day 1, experimental rats were exposed to the OF arena for 15 min for habituation. On day 2, the experimental rat was injected intraperitoneally with either CNO or saline solution 60 min before starting the tests, and was then subjected to one CSI and one FSI session for 5 min each. Intracerebroventricular administration of OTR antagonist Guide cannulas were implanted above the lateral ventricle for intracerebroventricular infusion of the OTR antagonist desGly-NH2,d(CH2)5(Tyr(Me)2,Thr4)OVT 29 . The OTR antagonist (0.75 μg per 5 μl; refs. 30 , 62 ) was infused 15 min before the behavioral tests. Four groups of rats were studied, which received intraperitoneal injection and intracerebroventricular infusion of saline/saline, CNO/saline, saline/OTR antagonist or CNO/OTR antagonist, respectively. Video and audio analyses of behavior The videos were recorded using a GigE color HD camera (Basler AG). The tracks of the experimental and stimulus rats were extracted from videos using two software packages: Ethovision XT v.11.5 (Noldus) and the MATLAB toolbox idTracker (MathWorks). The results of the two software packages were compared and crossvalidated.
The distance moved by each rat, the velocity, the time spent in different areas of the arena, the distance between rats and the time spent in close proximity were calculated automatically. Social interactions were also analyzed manually to classify social behaviors into different categories: ‘sniffing’, ‘chasing’, ‘crawling on top’, ‘being crawled’ and ‘head-to-head’ approaching; the time spent by the experimental rat on each behavioral category was used for all analyses. Manual scoring of social behavior was done by a researcher (different from the person who performed the experiment) who was blind to treatment conditions. Ultrasonic vocalizations were recorded with an ultrasound microphone (Avisoft Bioacoustics) and analyzed using Avisoft-SASLab Pro v.5.2 software. After calculation of a sound spectrogram, the vocalization time, duration and frequency were extracted. Each ‘call’ was classified as a non-social (peak frequency ~22 kHz) or an appetitive/social (peak frequency ~50 kHz) call. Social vocalizations were further classified into trills (<10 ms), single-component calls (>10 ms, not modulated) and complex vocalizations (>10 ms, frequency modulated or combined) 63 . Freely moving single-unit recordings: experimental groups Open field and FSI groups : experimental rats implanted with opto-electrodes for single-unit recordings in the PVN were subjected to one OF session and one FSI session for 10 min each. Between the two sessions the rat was placed in its home cage (single housed) for 15 min. Open field, CSI and FSI groups : experimental rats implanted with opto-electrodes for single-unit recordings in the PVN were subjected to one OF, one CSI and one FSI session for 10 min each, without pauses in between. Stimulus rats were placed in one of the small chambers separated by a Plexiglas mesh at the start of the CSI session; the wall was then lifted up (Fig.
6b ) at the start of the FSI session, allowing the stimulus rat to join the experimental rat in the central compartment. Histology Anesthetized rats were transcardially perfused with phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA). Brains were dissected out and post-fixed overnight in 4% PFA at 4 °C with gentle agitation. Then, 50-µm vibratome coronal sections containing the PVN and the SON were cut and collected. Immunohistochemistry was performed on free-floating sections using the following antibodies: anti-OT (PS38, 1:2,000; mouse; kindly provided by H. Gainer), anti-OT (T-5021, 1:50,000, Peninsula, guinea-pig), anti-SYN (Abcam, anti-rabbit, ab32127, 1:1,000), anti-Ds-Red (Clontech, catalog no. 632397, 1:1,000; rabbit), anti-GFP (Abcam, ab13970, 1:1,000, chicken), anti-c-fos (Cell Signaling, catalog no. 9F6, 1:500, rabbit), anti-Fluorogold (Protos Biotech, catalog no. NM-101, 1:1,000, guinea-pig) and anti-Cre (Novagen, catalog no. 69050, 1:2,000, mouse). Further information on validation of primary antibodies can be found in the Nature Research Reporting Summary . The signals were visualized with the following secondary antibodies: Cy3 conjugate (711-165-152) or Cy5 conjugate (Jackson Immuno-Research Laboratories, 115-175-146) or Alexa 488 (Invitrogen, A11039), Alexa 594 (Invitrogen, A11012), Alexa-594 (Jackson Immuno-Research Laboratories, 715-585-151) and Alexa-647 (Jackson Immuno-Research Laboratories, 713-645-147). All secondary antibodies were diluted 1:500. Fluorogold treatment and visualization To discriminate between magnOT and parvOT neurons, rats received a single injection of Fluorogold (Santa Cruz Biotechnology, 15 mg per kg body weight intraperitoneally) 7 d before perfusion.
Brain sections were stained with a primary antibody for Fluorogold (guinea-pig anti-FG, dilution 1:1,000, Protos Biotech Corp) and the Fluorogold immunosignal was visualized with secondary antibodies conjugated with Cy3 (Jackson Immuno-Research Laboratories, goat anti-rabbit, dilution 1:500). The co-localization of Fluorogold, OT and c-fos signals was manually quantified in the PVN ( n = 4 rats, 6 sections per brain). Images of immunostained tissue sections All images were acquired on a Leica TCS SP5 confocal laser-scanning microscope (DKFZ Light Microscopy Facility). Digitized images were analyzed using Fiji (National Institute of Mental Health) and Adobe Photoshop CS5 (Adobe). Confocal microscopy and 3D IMARIS analysis For the 3D reconstruction of OT neurons, we took z -stack images (50 µm depth, 1-µm steps, ×40 magnification) of the PVN and SON using a Zeiss LSM 780 confocal microscope (1,024 × 1,024 pixels, 16-bit depth, pixel size 0.63 μm, zoom 0.7). Raw czi files were used for further analysis with IMARIS software (v.9.31, Oxford Instruments) 26 , 27 , 64 . First, IMARIS was used to reconstruct the cellular surface using the following customized settings: surface detail 0.700 µm (smooth); thresholding background subtraction (local contrast); diameter of largest sphere that fits into the object: 2.00; color: base; diffusion transparency: 65%. After surface reconstruction, we used the filter function to remove unspecific background signals (filter: volume maximum 400 µm 3 ). After deletion of all background signals, the ‘mask all’ function was used to create the final surface reconstruction. Next, the surface reconstruction was used as the template for the filament reconstruction using the following customized settings: detect new starting points: largest diameter 7.00 µm, seed points 0.300 µm; remove seed points around starting points: diameter of sphere regions: 15 µm.
Seed points were corrected for (either placed in or removed from the center of the somata) manually if the IMARIS algorithm placed them incorrectly. All surface and filament parameters were exported into separate Excel files and used for data analysis. For all quantifications, we used 6–8 ×40 z -stacks per animal (2 z -stacks per brain hemisphere). We used a computer suited for IMARIS analysis (Intel Core i7 8700 @ 3.2 GHz, 64 GB RAM, x-64-bit, Windows 10 Enterprise). All images used for analysis were taken with the same confocal settings (pinhole, laser intensity, digital gain and digital offset). Sholl analysis was performed using IMARIS in the filament reconstruction mode and individual datasets were exported into separate Excel files for further analysis. To assess the number of SYN + /GFP + axons, we used a simplified version of the Sholl analysis, where we included only the first two to eight spheres (starting in the soma center) until either we could detect SYN + /GFP intersections or they were >2 µm apart from the border of the respective soma. The total amount of immunofluorescence (SYN) was calculated using the extract intensity/number of spots function. First, we created spheres that precisely engulfed the respective somata (parvOT and magnOT neurons) so that both ends of the cell soma (maximum diameter) touched the border of the respective sphere. To account for individual variability in roundness and surface area, we calculated the surface area for each individual OT cell using the surface reconstruction mode. Given that cells with a larger surface area occupy more 3D space within the artificially constructed sphere, which could confound precise quantification of SYN fluorescence, we adjusted each calculated value (SYN + voxels per sphere) based on the surface area. 
Assuming an inverse, almost-linear relationship between cell volume and the total amount of SYN fluorescence within a sphere, we calculated the degree of occupancy (that is, the percentage) for each soma within the respective sphere. Finally, we calculated the final SYN + voxels using the following equation: (Number of SYN + voxels) × (Degree of occupancy). For quantification along the dendrites, we used spheres with a 10-µm radius along the dendrite for both parvOT and magnOT neurons. Projection-specific, trans-synaptic retrograde tracing Input tracing experiments were performed in female Wistar rats (aged 10–12 weeks). We used Rb-GFP 28 to monosynaptically and retrogradely trace neurons projecting to parvOT and magnOT neurons. Rb-GFP selectively enters neurons expressing the avian sarcoma and leukosis virus receptor (TVA), and can spread presynaptically only from neurons expressing the rabies virus glycoprotein (we used the optimized glycoprotein oG, as described previously 65 ). We injected a 300-nl mixture of 1:1 rAAV-OTp-TCB:rAAV-Ef1A-DIO-oG into the right PVN of female rats. Then, to specifically trace inputs to parvOT neurons, we injected rats ( n = 5) with CAV2-CMV-Cre into the right SON (Extended Data Fig. 8a ). In another group of rats ( n = 5), we employed a similar strategy to express oG only in magnOT neurons: we injected an AAV retrogradely expressing Cre (rAAVretro-Ef1A-Cre) into the posterior pituitary (Extended Data Fig. 8c , based on previous work 66 ). This strategy makes Rb-GFP selectively enter all OT neurons, but spread retrogradely only from neurons expressing oG (that is, parvOT or magnOT neurons). After 2 weeks, we injected 300 nl EnvA ΔG-EGFP into the right PVN, and, 7 d later, animals were perfused with 4% PFA. The number of projecting neurons was quantified from brain sections as follows: every third 50-μm section was imaged and neurons were counted; counts were then multiplied by three to estimate the total number of inputs.
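The sampling arithmetic above (counting every third 50-μm section, multiplying by three, then expressing each region as a share of the total) can be sketched as a small helper; the region names in the example are placeholders, not areas from the study:

```python
def estimate_inputs(counts_by_region, sampling_step=3):
    """Scale per-region neuron counts from every-nth-section sampling to
    estimated totals, and compute each region's percentage of all inputs."""
    totals = {region: n * sampling_step for region, n in counts_by_region.items()}
    grand_total = sum(totals.values())
    percentages = {region: 100.0 * n / grand_total for region, n in totals.items()}
    return totals, percentages
```

For example, 10 labeled neurons counted in one region and 30 in another (sampling every third section) yield estimated totals of 30 and 90, contributing 25% and 75% of the inputs.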
GFP + neurons in the injected hemisphere were counted and assigned to brain areas based on the classifications of the Paxinos Mouse Brain Atlas 37 , using anatomical landmarks in the sections visualized by tissue autofluorescence. Very few contralateral inputs were observed, and we therefore excluded them from the analysis. Although we had good infection at the injection sites for both parvOT and magnOT groups (Extended Data Fig. 8g ), starter neurons could not be reliably counted, because rabies virus toxicity prevented us from correctly visualizing mCherry in the PVN. Thus, the analysis presented here does not take into account inputs to OT neurons from within the PVN. The percentage of inputs from each region was obtained by dividing the number of inputs from one region by the total number of inputs. Input regions that were detected in only a subset of animals were discarded from the analysis. We used unpaired, two-sided Student’s t -tests to compare the total number of inputs to parvOT and magnOT neurons and χ 2 tests to compare proportions of inputs between regions. We confirmed that TVA was selectively expressed in OT neurons by injecting control rats ( n = 2) with rAAV-OTp-TCB in the PVN and staining for OT. This revealed that most OT neurons expressed mCherry and that no non-OT neurons expressed mCherry (Extended Data Fig. 8e ). Furthermore, we verified that Rb-GFP selectively entered OT neurons by injecting control rats ( n = 2) with rAAV-OTp-TCB, and Rb-GFP 2 weeks later. This resulted in specific expression of GFP in PVN OT neurons (Extended Data Fig. 8f ). In each rat, we confirmed the SON injection site by staining for Cre for the parvOT neuron tracing (Extended Data Fig. 8a,b ) and, for the magnOT neuron tracing, by injecting a virus Cre-dependently expressing mCherry into the SON, which led to expression of mCherry in SON magnocellular neurons (Extended Data Fig. 8c,d ). OT secretion model The OT secretion model 17 simulates stimulus-secretion coupling in OT neurons.
The model is a continuous approximation of the stochastic release process from all neuronal compartments. It is based on extensive studies of activity-dependent hormone secretion from magnocellular neurosecretory neurons 58 and matches experimental data closely. In the model, when spikes invade the secretory terminals, exocytosis occurs in response to fast-rising Ca 2+ concentrations ( e ). At higher frequencies, the spikes broaden, producing a larger increase in e . The rate of secretion is modeled as the product of e raised to the power φ (which accounts for the cooperativity of the Ca 2+ activation), the pool of releasable OT p and a secretion-scaling factor α , and is calculated as: $$s = e^\varphi \times \alpha \times p$$ where φ = 2 and α = 0.003 pg s −1 . The nonlinear dependence of the secretion rate gives a high secretion probability at short spike intervals. To infer OT secretion arising from the spike trains observed in the present study, the recorded event timings were used to drive the secretion model described fully elsewhere 17 . The published model is scaled to quantitatively match secretion from the pituitary nerve terminals of a single OT neuron. The scaling factor α cannot be used for absolute quantitative estimates of release within the brain, but the relative efficacy of two firing patterns can be compared using the model, because α is eliminated in the ratio. Statistics Statistical analyses were performed using SigmaPlot v.11 (Systat) and GraphPad Prism v.7.05 (GraphPad Software). The two-sided Wilcoxon signed-rank W -test was used to compare the variation of spike frequencies measured for the same neuron in different conditions. The two-sided Mann–Whitney U -test was used to compare low-threshold depolarization in different cells. Two-sided Student’s t -tests were used to compare average values in two conditions when the data satisfied assumptions of normality.
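Returning to the secretion equation above: it is a one-liner, shown here as a hedged Python sketch with the values of φ and α taken from the text (the full published model adds spike broadening and pool dynamics, which this sketch omits):

```python
def secretion_rate(e, p, phi=2.0, alpha=0.003):
    """OT secretion rate s = e**phi * alpha * p (pg/s), with phi = 2 and
    alpha = 0.003 pg/s as given in the text."""
    return (e ** phi) * alpha * p
```

Because φ = 2, doubling the terminal Ca 2+ signal e quadruples the secretion rate at a fixed releasable pool p, which is why high-frequency spiking, whose broadened spikes boost e, is disproportionately effective at driving release.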
Pearson’s correlation coefficient was used to measure the linear correlation between firing rate and animals’ distance. One-way analysis of variance (ANOVA), followed by a multiple comparison post hoc test, was used to compare averages in three or more conditions. Two-way ANOVA, followed by a multiple comparison post hoc test, was used to analyze electrophysiological or behavioral data with repeated measures and CNO/saline/OTR antagonist treatment (time × treatment). No statistical methods were used to predetermine sample size, but our sample sizes are similar to those reported in previous publications 5 , 13 , 30 . Differences were considered significant for P < 0.05. Asterisks were used to indicate the significance level: *0.01 ≤ P < 0.05, **0.001 ≤ P < 0.01, *** P < 0.001. Statistical analyses of neuronal spike trains and local field potentials, such as PSTHs, auto- and cross-correlation, spike burst analysis, power spectrum density and phase locking were performed using NeuroExplorer 3 (Nex Technologies) and customized MATLAB scripts. Randomization and blinding Randomization was used to assign brain samples and animals to experimental groups whenever possible, with the constraint that, in social behavior experiments, interacting rats had to be unfamiliar conspecifics, as described under Methods . Most of the measurements were made using a machine, and are not subject to operator bias, with the exception of manual scoring of social behaviors from videos; in this case, all scorings were done by a researcher (different from the one who performed the experiment) who was blind to treatment conditions. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data and code availability Python code (used for ex vivo calcium-imaging data analysis in Fig. 4a–d ) and MATLAB code (used for in vivo fiber photometry data analysis in Fig. 4e–o and Extended Data Fig. 
7a–n ) can be found in Supplementary Software . All data that support the findings of the present study, as well as MATLAB codes for the analysis of extracellular recording data, are available from the corresponding author upon reasonable request.
The sense of touch can significantly affect how animals and humans perceive the world around them, enriching their experiences and allowing them to gather more information about their surroundings and other living organisms. Although touch is a crucial aspect of perceptual experience, in philosophical, psychological and neuroscientific research it has often been overshadowed by vision. Researchers at the Central Institute of Mental Health, University of Heidelberg, CNRS, University of Strasbourg, and other institutes in Europe, Israel and the US have recently carried out a fascinating study investigating how social touch can affect communication between female animals. Their paper, published in Nature Neuroscience, shows that in female rats, social touch promotes communication through the activation of parvocellular oxytocin neurons. "From the softest caress to the harshest blow, touch lies at the heart of our sensory experience of the world and shapes the way we perceive it, especially during intimate interactions with other humans," Valery Grinevich, one of the researchers who carried out the study, told Medical Xpress. "During evolution, vertebrates developed a plethora of sophisticated sensory systems, which represented a clear evolutionary advantage and entailed that higher mammals were able to discriminate between pain, moderate and gentle forms of touch." Social touch is crucial for intimacy and can also contribute to the formation of trust-based relationships between most mammals. Different types of touch, ranging from gentle touch, to grooming and caressing behaviors can be commonly observed among several classes of mammals, including rodents, felines, canines and primates. The chemicals in the brain produced by social touch can play a part in establishing and maintaining social hierarchies within groups of animals. 
In their study, Grinevich and his colleagues specifically focused on a chemical that is particularly crucial for establishing these relationships: the neuropeptide oxytocin. "Oxytocin not only facilitates birth and lactation, but actively fine-tunes the brain to modulate emotions, sexual intercourse, pair bonding and parental behavior," Alexandre Charlet, another researcher involved in the study, told Medical Xpress. "However, how, exactly, oxytocin promotes these prosocial behaviors and what triggers the actual release of the neuropeptide remained a mystery. This is the key question addressed in our paper." Grinevich, Charlet and their colleagues used a variety of modern neuroscience techniques to monitor and manipulate the activity of oxytocin neurons in freely moving female rats. Remarkably, they were the first research group to ever collect electrophysiological recordings of individual oxytocin cells and their populations. The researchers also periodically activated and inhibited a specific subset of oxytocin neurons, known as parvocellular neurons, using pharmacogenetic techniques. This allowed them to specifically examine the unique contributions of this small population of oxytocin neurons to inter-female communication. "We also used viral vector-based tracer methods to visualize oxytocin neurons and establish their connectivity," Grinevich explained. "Finally, we monitored social communication between females and males in which the activity of oxytocin neurons was enhanced or silenced." The results gathered by Grinevich, Charlet and their colleagues suggest that parvocellular oxytocin neurons play a key role in touch-based communication between female rats. More specifically, these neurons are responsible for translating sensory signals picked up by an animal's body into various types of social interactions. "Our results provide fundamentally new insights into how the neuropeptide orchestrates social behavior," Charlet said. 
"Furthermore, they support a vision of oxytocin as a powerful agent to treat mental disorders: A combination of sensory body stimulation (for example, via huddling or massage) and intranasal oxytocin administration might synergistically mitigate socio-emotional alterations in humans afflicted with mental diseases, such as autism spectrum disorder and posttraumatic stress disorder." While the researchers specifically conducted their study on virgin female rats, the findings they collected could also be applicable to other mammals, including humans. In the future, their work could thus pave the way toward the development of new forms of therapy for mental health disorders that integrate soothing forms of social touch. "In our recent study, we focused on the social behavior of unfamiliar female rats," Grinevich said. "In the future, we plan to study whether the same oxytocin-ergic mechanisms underlie other types of social behavior, such as pair bonding and parental behavior. On the translational side, we are interested in investigating whether the activity of the oxytocin system (particularly the parvocellular-to-magnocellular oxytocin neuron pathway) is affected in animal models of mental diseases mimicking human autism spectrum disorder and posttraumatic stress disorder."
10.1038/s41593-020-0674-y
Biology
Honeycrisp genome will help scientists breed better apples
Awais Khan et al, A phased, chromosome-scale genome of 'Honeycrisp' apple (Malus domestica), Gigabyte (2022). DOI: 10.46471/gigabyte.69
https://dx.doi.org/10.46471/gigabyte.69
https://phys.org/news/2022-10-honeycrisp-genome-scientists-apples.html
.download-min { margin: 0px .4em; color: #b3aeae; cursor: pointer; background: #d8efda; } .model-4 { width: 100%; margin-right: -20px; } .btn-right { float: right; } .dyslexic { border: 1px solid #ddd; border-radius: 3px; width: 100px; } .dyslexic-container { padding-top: 4px; } .elife { padding-left: 10px; padding-top: 7px; } .site-navbar-wrap.scrolled { padding-bottom: 5px; } .top-overlay-btn { color: #8e978e !important; background: #fff; border: 0px solid #3D913a; margin-right: 10px; text-decoration: none; } .top-overlay-btn:hover { color: #3D913a !important; cursor: pointer; } .top-overlay-btn-active { color: #28a745 !important; } .ref-list .article-title { margin-bottom: 0; color: #22749b; text-align: left; font-weight: 500; font-size: .8rem; line-height: 1.3em; padding-top: 10px; } .table-wrap { width: 100%; overflow-x: scroll; } .table-wrap .th { display: table-cell; font-size: .75em; border-collapse: collapse; border: 1px solid #ccc; font-weight: bold; padding: .50em; overflow-wrap: break-word; } .table-wrap .label { border-left: 3px solid #3D913a; background: #ecf5e3; font-weight: bold; display: block; margin-top: 5%; text-align: left; padding: 1% 0 0 1%; color: #3D913a; } .table-wrap .caption .p { padding-top: 0; text-align: left; border-left: 3px solid #3D913a; background: #ecf5e3; padding: 0 1% 1% 1%; } .table-wrap .label:after { content: "."; padding-right: .25em; } .graphic { padding: 5%; max-width: 100% !important; } .spinner-wandering-cubes:after { font-size: 14px; content: 'Loading..'; margin-top: 75px; margin-left: -30px; position: absolute; } .twitterUrl blockquote:not(.Blockquote) { padding: 0 1.77778rem; color: #6E797A; font-size: 1rem; line-height: 1.4; max-width: calc(3.55556rem + 680px); margin: 2.22222rem auto; border-left: .44444rem solid #0CA750; margin-left: 5%; } .twitterUrl blockquote a { font-weight: 700; } .youtubeUrl { padding: 5%; } iframe { margin-top: 5% !important; margin-bottom: 5% !important; } .fixed-outline-share { position: 
sticky; top: 10%; max-height: 90vh; overflow-y: scroll; } .article-outline-heading { font-size: 16px; margin-top: 1.5%; color: #3D913a; font-family: roboto-bold; } .article-outline-heading .heading-text { padding-left: 5px; } .head_active { color: white !important; font-weight: bold; background: #888888 !important; border-radius: 5px 0px 0px 5px; } .head_active::after { content: ""; border-bottom: 21px solid transparent; border-top: 20px solid transparent; border-left: 26px solid #888888; position: absolute; right: 0; top: 50%; margin-top: -21px; } .article-outlines { list-style-type: none; padding-left: 11px; margin-top: 20px; color: #272727; font-family: roboto-regular; } .article-outlines li { position: relative; } .article-outlines li a { font-size: 14px; text-decoration: none; cursor: pointer !important; font-size: 14px; border-bottom: 1px solid #ddd; cursor: pointer !important; background: #ffffff; padding: 9px 6px; display: block; width: 89%; cursor: pointer !important; } .article-outlines li a :first-letter { color: #3D913a !important; font-weight: bold; } * { -webkit-tap-highlight-color: rgba(0, 0, 0, 0); } #ferromenu-controller { margin-top: 100px !important; display: block; width: 60px; height: 60px; text-align: center; border-radius: 50%; background: #45484d; color: #f2f2f2; font-family: 'Lato', sans-serif; font-size: 24px; line-height: 60px; vertical-align: middle; text-decoration: none; } #ferromenu-controller .label { transition: all 0.2s linear; } #ferromenu-controller.desktop:hover, #ferromenu-controller.mobile:active { background: #6e737b; } #nav li { color: #f2f2f2; text-align: center; } #nav li a { color: #f2f2f2; text-decoration: none; width: 40px; height: 40px; border-radius: 50%; background: #45484d; line-height: 40px; display: block; } #nav li.desktop a:hover, #nav li.mobile a:active { opacity: 0.7; } .article-detail .article-type { color: #3D913a; font-size: 16px; font-weight: bold; } .title-group .article-title { font-family: Roboto-light; 
margin-top: 1%; color: #333; font-size: 24px; cursor: pointer; } .article-detail .article-type .article-subscription { float: right; margin-top: 0px; width: 18px; height: 25px; } .article-subscription-icon-open { background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA8AAAAXCAYAAADUUxW8AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAABmJLR0QA/wD/AP+gvaeTAAAAB3RJTUUH5AIcAAcCgEG/mAAAAipJREFUOMtl1EtIV1EQBvDfvVmg2Es0SmhTkfakF9GmF9RCIrSiFgVBizZREISbIgtpWe16QdBjWUTRIogWLVwaBi6id0FUlL1ETcT03+LO3442cLn3zpmZb873zTmZsN/tNeXPBdiNjajFd3TgFl5CVdsPkCWJOXaiDUviv2wlPMMZ3MZIVdsPFUlAMy4F2gC68BH1WBUFL2CU0i1J9fpArMV7HMA27Iv3frxBDU6SzU2TN2MZhnA6WusrUPTjbhQfxGJsGWybOdb2SkzCazxMSUk4eRToS7Eiy8aT8g3v0J8mJoUG8DNcU/MsG0M+j+v4jYFEttRK8UCWMZb8KR6ojtYaMR29eI63E6tVTPhuwhGsjcSy9eIp5o1Ljhan4CiOJ0nfgunqkHBTuoWs9A95L06hSjGCF/E4CJoZUh7CwjLon1xWgflojcTuGJCuBOVD+DtwDcuxejQzL0cLGmIA2tE1PJpRHJDtsSYKtkdcA5pzrI8BeR6tmpyXVuIe7uA+1kWBxxE3CRtyzE7k6ouB2Ko4CJNjn03h70sknZ0HozANU4L9VzEwFPP+IlGlrEZ/npCzKNDgQbB7FYcVB0OsN5Y5yGNvX0PL1pBmCDdwMAoMhr814r7gXo7OCCxhBy5jDSoVB6cy/q/Eegk30VmBYZyN0duFPTFN3ehBXWg7K9q9E/HD6R1Wh2OKW2OO/+1zIJ5Dz8Q7rAcngoOWaHUGfuFJ+DsxUk74Cw+YnkB2UApyAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIwLTAyLTI4VDAwOjA3OjAyLTA1OjAwS8N29AAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMC0wMi0yOFQwMDowNzowMi0wNTowMDqezkgAAAAASUVORK5CYII=) no-repeat; } .article-subscription-icon-lock { background: url('GIGAbyte-icon-set.svg') no-repeat scroll -622px -199px; } .article-detail .article-main-details { font-family: Roboto-Light; margin-top: 5px; font-size: 13px; } .article-detail .article-doi-custom { border: 1px solid #3D913a; font-size: 13px; color: #3D913a; padding-right: 5px; border-radius: 10px; width: -webkit-fit-content; width: -moz-fit-content; width: fit-content; } .review-history { font-size: 14px; font-family: Roboto-Bold; cursor: pointer; color: #3D913a; text-align: right; } .review-history:hover { color: #0e8841; } .review-history .article-review_history-icon { color: #3D913a; } 
.review-history .article-published-date { float: right; font-size: 12px; } .stencilla-view-button { font-size: 14px; font-family: Roboto-Bold; cursor: pointer; color: #3D913a; } .stencilla-view-button:hover { color: #0e8841; } .article-versions { margin-top: 3%; background-color: #eee; border-radius: 8px; padding: 5px 10px 20px 15px; } .version-heading { font-size: 16px; font-family: Roboto-medium; } .version-heading span { padding-bottom: 4px; border-bottom: 1px solid #c1c1c1; } .versions { margin-top: 3%; } .versions span { margin-right: 3%; padding: 6px 22px 6px 22px; background-color: #0e8841; color: #fff; border-radius: 20px; font-family: Roboto-medium; cursor: pointer; } .abstract-button { padding: 0px; font-size: 24px; font-family: Roboto-Medium; text-decoration: none !important; color: #333 !important; outline: none !important; } .abstract-button span { margin-left: 10px; } .btn-link:hover { color: #333; text-decoration: none; } .article-abstract-body { margin-left: 32px; font-size: 18px; line-height: 1.8; margin-top: 5%; } .article-maintext { margin-top: 4%; padding-bottom: 15px; border-bottom: 1px solid #aaa; } .add-section { background-color: #eee; border-radius: 4px; padding-bottom: 15px; } .add-img-one { text-align: center; font-size: 140px; color: #9e9e9e; } .add-description { font-size: 18px; text-align: center; color: #9e9e9e; } .article-download-icons { cursor: pointer; } .article-download { margin-top: 5%; text-align: center; } .btn-article-download { padding: 10px 70px 10px 70px; border-radius: 10px; border: none; font-size: 18px; font-family: Roboto-Medium; color: #fff; background-color: #378740; } .article-sitations { background-color: #eee; margin-top: 5%; border-radius: 12px; padding: 15px; } .article-views-stats { padding-right: 15px !important; border-right: 2px solid #fff; } .article-views-counts, .article-downloads-counts { text-align: center; padding: 10px; background-color: #fff; border-radius: 5px; margin-top: 4%; font-family: 
Roboto-Medium; } .article-views-stats span { margin-left: 5px; } .article-views-stats i { color: #247226; } .article-downloads-stats { padding-left: 15px !important; } .article-downloads-stats span { margin-left: 5px; } .article-downloads-stats i { color: #247226; } .article-dimensions { margin-top: 5%; background-color: #fff; border-radius: 5px; padding: 5px !important; } .related-cited-articles-list { margin-top: 5%; padding-left: 3px !important; } .tabs { position: relative; min-height: 200px; clear: both; margin: 25px 0; } .tab { float: left; } .tab label { margin-right: 20px; border: none; margin-left: -1px; position: relative; left: 1px; cursor: pointer; } .tab [type=radio] { display: none; } .content { position: absolute; top: 20px; left: 0; background: white; right: 0; bottom: 0; padding: 15px 0px; border: none; height: 300px; overflow: auto; } [type=radio]:checked ~ label { background: white; border-bottom: 4px solid #378740; z-index: 2; color: #378740; } [type=radio]:checked ~ label ~ .content { z-index: 1; } .article-lists { list-style-type: none; padding: 0px !important; } .article-lists li { padding-top: 10px; padding-bottom: 5px; border-bottom: 1px solid #333; cursor: pointer; } .article-lists li :hover { color: #378740; border-bottom: 1px solid #378740; } .related-article-label, .cited-article-label { max-width: 140px; font-weight: normal !important; font-family: Roboto-Medium; } .width-100 { width: 100%; } .article-meta .title-group { display: none; } .crosref { text-align: left; height: auto; margin-top: .6em; } .article-details-info { font-size: .7em; display: block; font-family: roboto-bold; text-align: center; width: inherit; } #exTab1 .tab-content { color: white; background-color: #eee; padding: 5px 15px; border-top-right-radius: 5px; border-bottom-right-radius: 5px; border-bottom-left-radius: 5px; border: 1px solid #d4d2d2; } #exTab1 .nav-pills > li > a { border-radius: 0; } .extab-li { border-top-right-radius: 6px; border-bottom: none; 
border-top-left-radius: 6px; padding: 0px 10px; font-family: Roboto-regular; } .extab-a { font-size: .8em; text-decoration: none; } .extab-a:hover { color: #fff; } .gs-image-cover { border: 1px solid #eae9e9; width: inherit; } .scrool-to-top { margin-top: 5%; color: #bdd6ba; display: block; font-family: Roboto-regular; cursor: pointer; } .sticky { top: 5%; position: sticky; position: -webkit-sticky; } .blobtoolkitModal, .nanohubModal { height: 70vh; width: 88vw; } .blobtoolkitModal .modal-dialog, .nanohubModal .modal-dialog { max-width: 800px; } .blobtoolkitModal iframe, .nanohubModal iframe { margin-top: 0 !important; margin-bottom: 0 !important; } .view-blobtoolkit { padding: 15px 9px; color: orange; font-size: 14px; cursor: pointer; background: #f3f0f0; margin: 0 0 0 0; } .view-nanohub { background: #4d4c4c; padding: 10px; margin: 10px 2px; border-radius: 7px; } .blobtoolkit-iframe, .nanohub-iframe { height: 500px; width: 100%; } .blobtoolkit-closebtn, .nanohub-closebtn { color: #fff; font-size: 14px; text-shadow: none; opacity: 1; } .view-blobtoolkit-span { bottom: 0; background: #ffffff; font-size: 12px; border: 1px solid #a4ca90; right: 0; font-weight: 200; border-radius: 14px; padding: 4px 8px; color: #4c4c4c; } .view-blobtoolkit-span:hover { color: #4c4c4c; background: #bdd6ba; } .view-blobtoolkit-span i { color: #3D913a; } .blobImg-description { color: #378740; text-align: left; } .view-nanohub-span { background: #ffffff; font-size: 12px; margin: 6px; cursor: pointer; font-weight: 200; border-radius: 14px; padding: 4px 18px; color: #4c4c4c; } .view-nanohub-span:hover { color: #4c4c4c; } .view-nanohub-span:hover i { color: #3D913a; } .back-to-article-text { color: #949494; font-size: 14px; transition: -webkit-transform .2s; transition: transform .2s; transition: transform .2s, -webkit-transform .2s; font-family: roboto-regular; } .back-to-articles-link:hover .back-to-article-icon { color: #3D913a; -webkit-transform: scale(1.5); transform: scale(1.5); } 
.back-to-articles-link:hover .back-to-article-text { color: #3D913a; margin-left: 2% !important; } .back-to-article-icon { transition: -webkit-transform .2s; transition: transform .2s; transition: transform .2s, -webkit-transform .2s; color: #a4d69f; } .author-span:nth-of-type(n):not(:last-child)::after { content: ","; margin-right: .3em; } .author-span { display: inline-block; } .btn-author-name { padding: 0 .5em; } .remove-button-style { border: 0; background: none; padding: 0; margin: 0; } ::ng-deep .customTooltip { background: #00e676 !important; } ::ng-deep italic { font-style: italic !important; } .doi_image { width: 17px; height: 17px; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABEAAAASCAYAAAC9+TVUAAAACXBIWXMAAA3XAAAN1wFCKJt4AAAAB3RJTUUH4wgcCxUMRO0ltAAAAa1JREFUOMvlks1LVGEYxX/nvXfKimAUaaNQC1eiDdF11aY2fpAEBW1aBC1ksMtE/0Awf0GBUCCVm3BTu0pHiGYthC3CkKBV0YeYM8MkZtx736fNGH045bqe1XkOD4dzeA78U6OdyPzUSEHOnUbqAZpmLDZyq/MUl5K/inTfPXMw20puY5xFGPAW6AM+A6sOu7AeLzz7VcR9R9PHc9lW8hjoMccwYN5nwzKqMm4IzXpUzU+NFNqKdKaHisCQxD3nuQS8DgIXGbZi4iLyL4FlBW66bZzOm2PPgf2CDYN+4IOgDmBwDHgl2DQYwLmoPjm3/LOTctkBAwo4V4srEfAOs1ItrkStfR3PRC2uRAYv8L7we5xy2QOJz7SnZTA1R/jDXc5Jacv6XsHXNnFGq6BTu6iFEYZH6sVHb7aJYBt0jPfVhcYlTkpaAfJe7ihmH53oTsOw4Hw2ZGipMTl3Z8fvNC4vPARmzbiPtwYwKG+DggkznoZpOgM6nLos/nNjDXXdGr1q6BqQBzaBA4AHHiQZpY0rlbVd1b73+vl9XzqaJzJTL9AMfbb4qfTkPf/HfAMv1p/97KeTSAAAAABJRU5ErkJggg==) !important; background-repeat: no-repeat !important; display: inline-block; } .abstract-container { font-family: roboto-regular; padding: 20px; margin-top: 40px; box-shadow: inset -1px 0px 6px 7px #f5f5f5; } .kwd-group { display: none; } .codeocean-wrap { position: absolute; height: 680px; background: #6b6b6ba6; width: 96%; color: white; font-size: 2em; text-align: center; padding-top: 20%; } .tag { background: #249faf; border-radius: 3px 0 0 3px; color: white; display: inline-block; height: 26px; line-height: 26px; padding: 0 20px 0 23px; position: relative; margin: 0 10px 10px 0; 
text-decoration: none; -webkit-transition: color 0.2s; } .tag::before { background: #fff; border-radius: 10px; box-shadow: inset 0 1px rgba(0, 0, 0, 0.25); content: ''; height: 6px; left: 10px; position: absolute; width: 6px; top: 10px; } .tag::after { background: #fff; border-bottom: 13px solid transparent; border-left: 10px solid #249faf; border-top: 13px solid transparent; content: ''; position: absolute; right: 0; top: 0; } .tag:hover { background-color: crimson; color: white; } .tag:hover::after { border-left-color: crimson; } .keywords-container { padding-top: 20px; } .article-cover { padding: 15px 0px; text-align: left; width: 83%; } .remove-margin { margin-left: 0px; margin-right: 0px; } .widthset { width: 100px; height: 100px; } .pill-icon { color: white; display: inline-block; padding: 0px 8px; font-weight: bold; border-radius: 7px; background: #777777; } .pill-content { padding: 0px 5px; } .pill-container { display: inline-block; border: 1px solid #777777; font-size: 13px; color: #444444; padding-right: 5px; border-radius: 10px; width: -webkit-fit-content; width: -moz-fit-content; width: fit-content; margin-right: 1%; } .blue-theme { background: #2cc3d2; border: 1px solid #2cc3d2; } .green-theme { background: #3D913a; border: 1px solid #3D913a; } .green-theme .pill-content { color: white; } .green-border { border-left: 3px solid #3D913a; border-top: 0px solid transparent; border-right: 0px solid transparent; border-bottom: 0px solid transparent; background: #f6f9f3; } .floating-buttons { position: fixed; bottom: 4%; right: 2%; z-index: 10; background: #fff0; border-radius: 18px; text-align: center; } .floating-buttons-btn { border: 0; background: #606c88; background: linear-gradient(to bottom, #606c88 0%, #3f4c6b 100%); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#606c88', endColorstr='#3f4c6b', GradientType=0); padding: 5px 25px; color: white !important; } .floating-buttons-btn-active { border: 0; background-color: #5caf4c 
!important; padding: 5px 25px; } .right-btn { border-top-right-radius: 18px; border-bottom-right-radius: 18px; border-left: 3px solid #000; border-top-left-radius: 0; border-bottom-left-radius: 0; } .left-btn { border-top-left-radius: 18px; border-bottom-left-radius: 18px; } .article-citation { border-left: 3px solid #3D913a; padding: 10px; margin: 18px; font-family: roboto-light; color: #2c1b1b; background: #f6f9f3; position: relative; } .citation-heading { font-weight: bold; color: #3D913a; } .citation-content { font-size: 14px; color: #3c3a3a; } .citation-copy-icon { position: absolute; right: 5px; bottom: 2px; background-color: #3d913a; padding: .1em 1.2em; color: #ffffff; font-size: .7em; cursor: pointer; } .citation-copy-icon:hover { background-color: #378134; } .citation-copy-icon .copy-citation-text { font-weight: bold; } table, th, td { border: 1px solid #ccc; } .fig .label, .caption { text-align: center; } .table td, .table th { padding: .25rem; vertical-align: top; border-top: 1px solid #ccc; font-size: .75em; } .sidenav-pdf { position: fixed; right: 0px; bottom: 70%; transition: 0.3s; text-decoration: none; font-size: 20px; color: white; border-radius: 0 5px 5px 0; z-index: 100; width: 62px; background-color: #424242; border-radius: 30px 0 0 30px; color: #fff; font-size: 17px; margin-top: 45px; text-align: center; padding: 10px; cursor: pointer; } .sidenav { position: fixed; right: -200px; bottom: 45%; transition: 0.3s; text-decoration: none; font-size: 20px; color: white; border-radius: 0 5px 5px 0; z-index: 100; } .sidenav:hover { right: 0px; } .sidenav-toright { right: 0px; } #sidenav-download-heading { width: 62px; display: inline-block; position: relative; display: inline-block; background-color: #424242; border-radius: 30px 0 0 30px; color: #fff; font-size: 17px; margin-top: 45px; text-align: center; padding: 10px; cursor: pointer; } .downloads-sidenav { width: 200px; padding: 10px; display: inline-block; height: 250px; background-color: 
#dadada94; border: 1px solid #b9b9b9; border-radius: 8px 0 0 8px; } .downloads-sidenav .download-options { display: inline-block; padding: 5.5% 7%; } .open-nwtab-container { text-align: center; } .open-nwtab-btn { font-size: 12px; padding: 5px; } .sec .fig { text-align: center; background: #f6f9f3; position: relative; margin-top: 1em; border-left: 3px solid #3d913a; } .sec .fig .youtube_widget { padding: 1%; } .sec .fig .label { background: #ecf5e3; font-weight: bold; display: block; text-align: left; padding: 1% 0 0 1%; color: #3D913a; } .sec .fig .graphic { padding: 1% 5% 1% 5%; max-width: 65% !important; } .sec .fig .caption .p { padding-top: 0; text-align: left; background: #ecf5e3; padding: 0 1% 1% 1%; } .sec .fig .caption .p .youtube_widget { padding: 0; } .pretube_widget { text-align: center; } .sketchfab-embed-wrapper { text-align: center; } .sketchfab-embed-wrapper iframe { width: 100%; max-width: 550px; } .sketchfab-table-rep-image:hover { opacity: .5; cursor: pointer; } .sketchfab-table-rep-image:hover:before { content: "Open in sketchfab"; } .table-td-sketchfab { position: relative; } .table-td-sketchfab:hover .overlay-sketchfab-table { opacity: .9; cursor: pointer; } .overlay-sketchfab-table { position: absolute; top: 0; bottom: 0; left: 0; right: 0; height: 100%; width: 100%; opacity: 0; transition: .3s ease; background-color: #f2f2f2; } .overlay-sketchfab-table .icon { color: #212121; font-weight: bold; position: absolute; top: 50%; left: 50%; -webkit-transform: translate(-50%, -50%); transform: translate(-50%, -50%); -ms-transform: translate(-50%, -50%); text-align: center; word-break: keep-all; } .overlay-sketchfab-table .play-button { background-color: #1caad9; box-shadow: 0 0 10px #333; position: absolute; top: 50%; left: 50%; margin-top: -25px; margin-left: -25px; display: flex; width: 50px; height: 50px; border: 0; border-radius: 30px; text-align: center; line-height: 0px; color: #fff; outline: none; transition: all .15s; } 
.overlay-sketchfab-table .play-button .fa-play { left: 16px; position: absolute; top: 25px; font-size: 35px; line-height: 0px; } .overlay-sketchfab-table .play-button .label-3d { width: 50px; font-size: 8px; z-index: 100; font-weight: 700; color: #1caad9; margin-top: 25px; margin-left: 2px; opacity: 1; transition: opacity .3s, -webkit-transform .3s; transition: opacity .3s, transform .3s; transition: opacity .3s, transform .3s, -webkit-transform .3s; } .doi-link-custom { color: #fff; } .doi-link-custom:hover { color: #ccb0b0; } .table-wrap .caption .p { font-weight: bold; } .author-popover-template span { color: #3D913a; } .custom-popover-title { width: 100%; display: block; padding: .4em .6em; color: white; background: #403c3c; font-size: 1.25em; } .custom-popover-body { padding: .7em; } .popover-orcid { color: #a7cf36; } .popover-orcid a { color: #a7cf36; } .popover-orcid a:hover { color: #94b834; } .popover-email { word-break: break-all; color: #1a91c8 !important; } .popover-email .mailto-icon { color: #1a91c8 !important; } .popover-email a { color: #1a91c8 !important; } .popover-email a:hover { color: #2584b1 !important; } .ref .label { display: inline-block; height: 44px; float: left; } .ref::before { font-weight: bold; } span.sup { vertical-align: super; font-size: smaller; } span.sup .xref { color: #119549; cursor: pointer; } .highlight-reference { background: #d6f11152; transition: 0.6s; } .back .ref-list .ref { position: relative; display: flex; } .continue-reading { transition: 0.6s; width: -webkit-fit-content; width: -moz-fit-content; width: fit-content; } .continue-reading-afterImg { width: -webkit-fit-content; width: -moz-fit-content; width: fit-content; position: absolute; color: #c3c3c3; top: 0; right: 10px; cursor: pointer; } @-webkit-keyframes mymove { from { top: -10px; } to { top: 30px; } } @keyframes mymove { from { top: -10px; } to { top: 30px; } } a { text-decoration: none; } .popover__title { font-size: inherit; text-decoration: none; color: 
inherit; text-align: center; } .popover__wrapper { position: relative; display: inline-block; } .popover__content { opacity: 0; display: none; position: absolute; top: 41px; left: -152px; -webkit-transform: translate(0, 10px); transform: translate(0, 10px); background-color: #403c3c; padding: 1.5rem; border-radius: 7px; box-shadow: -1px 7px 12px 1px rgba(57, 66, 48, 0.53); min-width: 400px; white-space: normal; } .popover__content .surname { margin-right: .45em; } .popover__content:before { position: absolute; z-index: -1; content: ""; right: calc(61% - 10px); top: -10px; border-style: solid; border-width: 0 10px 10px 10px; border-color: transparent transparent #403c3c transparent; transition-duration: 0.3s; transition-property: -webkit-transform; transition-property: transform; transition-property: transform, -webkit-transform; } .popover__wrapper:hover .popover__content { z-index: 10; opacity: 1; display: block; -webkit-transform: translate(0, -20px); transform: translate(0, -20px); transition: all 0.5s cubic-bezier(0.75, -0.02, 0.2, 0.97); } .popover__message .article-title { color: white; } .popover__message .article-title .italic { color: white; } .popover__message span { color: #3D913a; font-size: 14px !important; } .popover__message .label { display: none !important; } .popover__message .ext-link { color: #087ec2; } .ref-pop-heading { font-family: roboto-bold; color: white; font-size: 16px; } .reference_popover_content { padding: 1rem 1.5rem; } .reference_popover_message { margin-bottom: 0px; } .article-authors .popover-title { color: white; background: #403c3c; } .article-authors .bs-popover-bottom .arrow::after { border-bottom-color: #403c3c; } .article-authors .bs-popover-bottom .popover-header::before { border-bottom: 1px solid #403c3c; } .article-authors .popover-body { padding: 0; } [attr-ref-type=fig], [attr-ref-type=table] { color: #3D913a !important; cursor: pointer; } .fig-preview-img { max-height: 500px; } .giga-modals-closebtn { color: #fff; 
font-size: 14px; text-shadow: none; opacity: 1; } .giga-modals-closebtn:hover { color: #ddd; } .figPreview-modal-body { text-align: center; } .giga-modals { font-family: roboto-bold; } .fig-caption { font-family: roboto-regular; font-size: 13px; } .mailto-icon { color: #9e9e9e; } [attr-fig-type="protocolIoUrl"] .graphic, [attr-fig-type="protocolIoUrl"] .open-nwtab-container, [attr-fig-type="sketchfabUrl"] .graphic, [attr-fig-type="sketchfabUrl"] .open-nwtab-container, [attr-fig-type="interactiveFigure"] .graphic, [attr-fig-type="interactiveFigure"] .open-nwtab-container, [attr-fig-type="youtubeUrl"] .graphic, [attr-fig-type="youtubeUrl"] .open-nwtab-container, [attr-fig-type="twitterUrl"] .graphic, [attr-fig-type="twitterUrl"] .open-nwtab-container, [attr-fig-type="codeoceanUrl"] .graphic, [attr-fig-type="codeoceanUrl"] .open-nwtab-container, [attr-fig-type="nmrUrl"] .graphic, [attr-fig-type="nmrUrl"] .open-nwtab-container, [attr-fig-type="openstreetmapUrl"] .graphic, [attr-fig-type="openstreetmapUrl"] .open-nwtab-container, [attr-fig-type="peertubeUrl"] .graphic, [attr-fig-type="peertubeUrl"] .open-nwtab-container, [attr-fig-type="juiceboxUrl"] .graphic, [attr-fig-type="juiceboxUrl"] .open-nwtab-container, [attr-fig-type="bloobtoolkit"] .graphic, [attr-fig-type="bloobtoolkit"] .open-nwtab-container { display: none; } .tablePreviewModal .figPreview-modal-body .table { table-layout: fixed; } .tablePreviewModal .figPreview-modal-body .label, .tablePreviewModal .figPreview-modal-body .caption { display: none; } .tablePreviewModal .figPreview-modal-body .table .td { font-size: .85em !important; word-break: break-all; } .tablePreviewModal .figPreview-modal-body .table-wrap-foot .p { font-size: .80em; } .tablePreviewModal .figPreview-modal-body .thead .tr { background-color: #f2f2f2; } .tablePreviewModal .figPreview-modal-body .table tr:nth-child(even) { background-color: #f2f2f2; } .table-wrap-foot .p { padding: 0 0 1em 0; font-size: .80em; color: #7d7c7c; } 
Data Release

A phased, chromosome-scale genome of ‘Honeycrisp’ apple (Malus domestica)

Awais Khan (1,*), Sarah B. Carey (2,3), Alicia Serrano (1), Huiting Zhang (4,5), Heidi Hargarten (4), Haley Hale (2,3), Alex Harkess (2,3), Loren Honaas (4,*)

1 Plant Pathology and Plant-Microbe Biology Section, Cornell University, Geneva, NY 14456, USA
2 Department of Crop, Soil, and Environmental Sciences, Auburn University, Auburn, AL 36849, USA
3 HudsonAlpha Institute for Biotechnology, Huntsville, AL 35806, USA
4 USDA ARS Tree Fruit Research Lab, Wenatchee, WA 98801, USA
5 Department of Horticulture, Washington State University, Pullman, WA, USA
* Corresponding authors. E-mail: awais.khan@cornell.edu; loren.honaas@usda.gov

Journal: Gigabyte (GigaScience Press, Sha Tin, New Territories, Hong Kong SAR). DOI: 10.46471/gigabyte.69
Subject areas: Genetics and Genomics; Agriculture; Plant Genetics
Received: 14 July 2022; Accepted: 14 September 2022; Published online: 19 September 2022
© The Author(s) 2022. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The apple cultivar ‘Honeycrisp’ has superior fruit quality traits, cold hardiness, and disease resistance, making it a popular breeding parent.
However, it suffers from several physiological disorders and from production and postharvest issues. Despite several available apple genome sequences, understanding of the genetic mechanisms underlying cultivar-specific traits remains lacking. Here, we present a highly contiguous, fully phased, chromosome-level genome of ‘Honeycrisp’ apple, generated using PacBio HiFi, Omni-C, and Illumina sequencing platforms, with two assembled haplomes of 674 Mbp and 660 Mbp and contig N50 values of 32.8 Mbp and 31.6 Mbp, respectively. Overall, 47,563 and 48,655 protein-coding genes were annotated from the two haplomes, capturing 96.8–97.4% complete BUSCOs in the eudicot database. Gene family analysis reveals that most ‘Honeycrisp’ genes are assigned to orthogroups shared with other genomes, with 121 ‘Honeycrisp’-specific orthogroups. This resource is valuable for understanding the genetic basis of important traits in apples and related Rosaceae species, and for enhancing breeding efforts.

Funding

This research was funded by the Washington Tree Fruit Research Commission (grant number: AP-19-103), LH; the United States Department of Agriculture, Agricultural Research Service, LH; and the New York State Department of Agriculture & Markets, Apple Research & Development Program (ARDP), grant number: CM04068AQ, AK.

Background

Apples are the most consumed fruit in the United States [1]. The annual estimated total value of the US apple industry is $23 billion, with five cultivars alone accounting for two-thirds of production (in order of proportion): ‘Gala’, ‘Red Delicious’, ‘Honeycrisp’, ‘Granny Smith’, and ‘Fuji’ [2]. Of these, ‘Honeycrisp’ is by far the most valuable: it has roughly twice the value per pound of the next most valuable cultivar, ‘Fuji’ [3].
‘Honeycrisp’ is appreciated by consumers, and therefore by the US apple industry, for its superior flavor and crisp, juicy texture. Importantly, properly stored ‘Honeycrisp’ fruit can be well preserved for several months [4, 5]. Additionally, this cultivar shows high levels of cold hardiness [6] and resistance to apple scab, the most economically important fungal disease of apples worldwide [7]. ‘Honeycrisp’ was bred at the University of Minnesota in the 1960s, where the aim was to obtain cold-hardy cultivars with high-quality fruit; it was released in 1991 [8] (Figure 1A). Recent genome-wide analysis (following the resolution of the ‘Honeycrisp’ pedigree [9, 10]) showed that the genetic background of ‘Honeycrisp’ is distinct from that of other important apple cultivars in the USA. This is highlighted by the success of ‘Honeycrisp’ as a source of valuable genetic diversity in apple breeding programs worldwide seeking to enhance texture, storability, and disease resistance [5, 7, 9, 11, 12]. In fact, nine new cultivars derived from ‘Honeycrisp’ are already on the market.

Figure 1. Physiology and physiological disorders of ‘Honeycrisp’ apple. (A) Healthy ‘Honeycrisp’ apples. (B) ‘Honeycrisp’ apples with symptoms of zonal leaf chlorosis. ‘Honeycrisp’ apples with symptoms of the fungal diseases (C) bitter rot (pathogen complex of Colletotrichum gloeosporioides and C. acutatum) and (D) black rot (pathogen Botryosphaeria obtusa). ‘Honeycrisp’ apples with the postharvest storage disorders (E) bitter pit, (F) soft scald, and (G) soggy breakdown.

Although critical for sustainable apple production, disease resistance has historically been less important because the market has been dominated by modern cultivars bred primarily for fruit quality and intensive conventional production systems [13]. Most apple cultivars grown commercially in the USA are susceptible to fungal diseases such as apple scab.
In temperate and humid regions around the world, frequent applications of fungicides are necessary, contributing significantly to production costs and to negative human health and environmental impacts [14]. ‘Honeycrisp’ is resistant to apple scab and, importantly, the ability of its fruit to retain crispness and firmness during storage is one of its most outstanding traits [15]. However, other ‘Honeycrisp’ production issues present challenges for apple growers (Figure 1E–G). ‘Honeycrisp’ needs a carefully designed nutrient management program during the growing season for optimal production and fruit quality, especially to limit the occurrence of the physiological disorder bitter pit [5]. ‘Honeycrisp’ trees also have a greater tendency to develop zonal leaf chlorosis, which reduces photosynthetic capacity [16]. In the Pacific Northwest (PNW), where most US ‘Honeycrisp’ apples are grown because of the region's low disease pressure [17], postharvest issues during long-term storage pose substantial challenges to producers. The total cullage of ‘Honeycrisp’ fruit is probably among the highest of any apple cultivar, owing to its susceptibility to various postharvest physiological disorders with complex and poorly understood etiologies. These disorders include bitter pit, soft scald, soggy breakdown, and CO2 injury [18–21]. Postharvest technologies have been developed and deployed to mitigate these disorders [22–24]. However, their efficacy depends on factors such as preharvest orchard management and at-harvest fruit maturity, which is key to maintaining postharvest apple fruit quality. Growers must balance the acquisition of certain fruit quality characteristics (e.g., size, color, flesh texture, and sugar content) while attempting to minimize the risk of maturity-linked losses in quality that may occur in the supply chain [25].
This balancing act between maximizing at-harvest fruit quality and long-term cold storage potential in controlled atmospheres is especially difficult for ‘Honeycrisp’.

Context

To both deepen our understanding of the genetic mechanisms driving important ‘Honeycrisp’ traits and assist tree fruit breeders, high-quality genomes are required [26]. Indeed, in the decade since ‘Golden Delicious’ was sequenced [27], many genes and quantitative trait loci (QTL) linked to fruit disease resistance, quality traits, and abiotic stress tolerance in apples have been identified [7, 28, 29]. Recent high-quality genomes of ‘Gala’, the double haploid ‘Golden Delicious’, and the triploid ‘Hanfu’ provide genomic resources for apple genetics and breeding [27, 30, 31]. These studies have identified targeted genomic regions for the development of diagnostic molecular markers to breed disease-resistant apple cultivars with good fruit quality [32]. However, traditional apple breeding remains a resource-intensive and time-consuming process [11, 29, 32], and substantial gaps remain in our knowledge of the genetic mechanisms involved in many important apple traits. Here, we report a phased, chromosome-level genome assembly of the ‘Honeycrisp’ apple cultivar generated from Pacific Biosciences (PacBio) HiFi and Dovetail Omni-C technologies, plus a high-quality annotation, thus providing one of the most contiguous and complete genome resources available for apples to date.

Methods

PacBio HiFi sequencing

Cuttings of dormant wood were collected from ‘Honeycrisp’ trees growing in the experimental orchard at Cornell AgriTech (Geneva, NY, USA). The cuttings were placed in water in the greenhouse until leaves began to emerge from buds, and thereafter placed in the dark for 2 days.
Young, dark-adapted leaves were collected and shipped on dry ice to the DNA Sequencing and Genotyping Center at the University of Delaware (DE, USA) for DNA extraction and Single Molecule Real Time (SMRT) Pacific Biosciences (PacBio) sequencing. High-molecular-weight (HMW) genomic DNA was extracted using a DNeasy Plant Mini Kit (Qiagen) according to the manufacturer's protocol. HMW genomic DNA was sheared to 15 kilobase pair (Kbp) fragments, and the HiFi library was prepared using the SMRTbell Express Template Prep Kit 2.0 and the DNA/Polymerase Binding Kit 2.0 (Pacific Biosciences) according to the manufacturer's protocol. The sequencing library was size-selected on a Sage BluePippin instrument (Sage Science) for fragments of >10 Kbp to ensure removal of smaller fragments and adapter dimers. The library was sequenced on a PacBio Sequel II instrument in CCS/HiFi mode on two SMRT cells with 2 hours of pre-extension and 30-hour movie times. Read length distribution and quality of all HiFi reads were assessed using Pauvre v0.1923 [33].

To scaffold the genome using chromatin conformation sequencing, 1 g of flash-frozen young leaf material was harvested from ‘Honeycrisp’ trees at the Washington State University (WSU) Sunrise Research Orchard near Rock Island, WA, USA and shipped to the HudsonAlpha Institute for Biotechnology in Huntsville, AL, USA. The sequencing library was prepared using the Dovetail Genomics Omni-C kit and was sequenced on an Illumina NovaSeq 6000 with PE150 reads. A subset of 1 million read pairs was used as input for Phase Genomics hic_qc to validate the overall quality of the library [34].

Phased haplome assembly and scaffolding

The expected genome size, heterozygosity, and repeat content were estimated by generating 21-mer counts from the raw HiFi data with Jellyfish v2.3.0 (RRID: SCR_005491) [35] and GenomeScope 2.0 (RRID: SCR_017014) [36, 37].
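The k-mer profiling idea behind Jellyfish and GenomeScope can be sketched as follows. This is a minimal pure-Python illustration (not the tools' implementations): each read is decomposed into canonical 21-mers, and the resulting multiplicity histogram is what GenomeScope fits to estimate genome size, heterozygosity, and repeat content.

```python
from collections import Counter

def canonical_kmers(seq, k=21):
    """Yield canonical k-mers: the lexicographic minimum of each k-mer
    and its reverse complement, so both strands count as one k-mer."""
    comp = str.maketrans("ACGT", "TGCA")
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        rc = kmer.translate(comp)[::-1]
        yield min(kmer, rc)

def kmer_spectrum(reads, k=21):
    """Return the k-mer spectrum: a histogram mapping multiplicity ->
    number of distinct canonical k-mers seen that many times."""
    counts = Counter()
    for read in reads:
        counts.update(canonical_kmers(read, k))
    return Counter(counts.values())
```

In practice Jellyfish counts billions of k-mers with a lock-free hash table; a sketch like this is only feasible for toy inputs, but the output histogram has the same shape GenomeScope consumes.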
HiFi reads were assembled into contigs using hifiasm v0.16.1 (RRID: SCR_021069) [38, 39] in Hi-C integration mode, which incorporates the Dovetail Omni-C reads for phasing. Both haplomes of the assembly were scaffolded into chromosomes using the Juicer pipeline v1.6 (RRID: SCR_017226) [40], with the Omni-C reads mapped separately to each hifiasm haplome [39, 41] using the parameter "-s none". The Omni-C data were subset to ∼100× coverage, and the 3D-DNA v201008 scaffolding pipeline [42] was run with the options "--editor-saturation-centile 10 --editor-coarse-resolution 100000 --editor-coarse-region 400000 --editor-repeat-coverage 50". Contact maps were manually edited using Juicebox Assembly Tools (JBAT) v1.11.08 (RRID: SCR_021172) [40] to produce the expected 17 chromosomes per haplome. Contigs containing assembled telomeres were oriented with the telomeric repeat at the terminal ends by searching for the TTTAGGG repeat (or its reverse complement, CCCTAAA) using the analyze_genome function of GENESPACE [43]. Chromosomes were numbered and oriented using haplome A of the ‘Gala’ assembly [27] as the reference. Genome quality and completeness were assessed with Benchmarking Universal Single-Copy Orthologs (BUSCO) v5.2.2 (RRID: SCR_015008) [44] against the "eudicots_odb10" database. Haplome completeness was also assessed using Merqury v1.3 [45].

Transcriptome sequencing

To facilitate gene annotation, total RNA was isolated using a modified CTAB/chloroform extraction [46] from various tissues harvested from: ‘Honeycrisp’, ‘Red Delicious’, and ‘Granny Smith’ apple trees grown at the WSU Sunrise Research Orchard near Rock Island, WA, USA; ‘Gala’ and ‘WA38’ apple trees grown at the WSU and USDA-ARS Columbia View Research Orchard near Orondo, WA, USA; and ‘D’Anjou’ pear trees grown at the WSU Tree Fruit Research and Extension Center Research Orchard in Wenatchee, WA, USA. Total RNA was assessed for quality (RNA integrity number (RIN) ≥ 8) and purity (A260/A280 > 1.8).
Sources for all RNA are listed in Table 1. Total RNA (2 μg) was used to construct Illumina TruSeq stranded libraries following the manufacturer’s instructions. Libraries were sequenced on an Illumina NovaSeq 6000 with PE150 reads at the HudsonAlpha Institute for Biotechnology in Huntsville, AL, USA.

Table 1. Yield of Illumina transcriptome sequencing of fruit, leaf, and flower tissues of apples and pear generated and used for genome annotation in this study.

Cultivar | Tissue | Reads | Yield (bp) | Yield P20 (bp) | Average read length | NCBI SRA
Honeycrisp | Fruitlet stage 1 | 45,773,784 | 13,823,682,768 | 13,069,822,280 | 142 | SAMN29611971
Honeycrisp | Fruitlet stage 2 | 35,618,706 | 10,756,849,212 | 10,227,275,771 | 143 | SAMN29611972
Honeycrisp | Budding leaves | 81,448,971 | 24,597,589,242 | 22,769,634,770 | 139 | SAMN29611973
Honeycrisp | Expanding leaves | 35,381,039 | 10,685,073,778 | 9,971,308,535 | 141 | SAMN29611974
Honeycrisp | Half-inch terminal buds | 47,811,924 | 14,439,201,048 | 13,409,542,519 | 140 | SAMN29611975
Honeycrisp | Flower buds | 45,822,773 | 13,838,477,446 | 13,175,876,315 | 144 | SAMN29611976
Honeycrisp | Open flowers | 30,938,395 | 9,343,395,290 | 8,718,474,885 | 141 | SAMN29611977
Gala | Fruitlet stage 1 | 80,440,219 | 24,292,946,138 | 22,928,129,883 | 142 | SAMN29611954
Gala | Fruitlet stage 2 | 32,475,136 | 9,807,491,072 | 9,284,944,973 | 143 | SAMN29611955
Gala | Budding leaves | 30,368,057 | 9,171,153,214 | 8,508,033,713 | 140 | SAMN29611956
Gala | Expanding leaves | 40,650,277 | 12,276,383,654 | 11,306,267,120 | 138 | SAMN29611957
Gala | Roots from tissue culture | 35,324,786 | 10,668,085,372 | 9,940,132,737 | 140 | SAMN29611958
Gala | Quarter-inch terminal buds | 37,532,631 | 11,334,854,562 | 10,634,379,784 | 141 | SAMN29611959
Gala | Flower buds | 39,636,821 | 11,970,319,942 | 11,141,652,382 | 140 | SAMN29611960
Gala | Open flowers | 34,363,075 | 10,377,648,650 | 9,775,838,818 | 142 | SAMN29611961
Red Delicious | Fruitlet stage 2 | 27,319,955 | 8,250,626,410 | 7,682,200,349 | 140 | SAMN29611962
Granny Smith | Fruitlet stage 1 | 29,426,606 | 8,886,835,012 | 8,335,731,187 | 141 | SAMN29611963
Granny Smith | Fruitlet stage 2 | 72,205,133 | 21,805,950,166 | 20,663,261,900 | 143 | SAMN29611964
Granny Smith | Budding leaves | 57,244,195 | 17,287,746,890 | 16,179,280,911 | 141 | SAMN29611965
Granny Smith | Expanding leaves | 40,798,422 | 12,321,123,444 | 11,499,303,808 | 140 | SAMN29611966
Granny Smith | Roots from tissue culture | 32,493,822 | 9,813,134,244 | 9,207,784,729 | 141 | SAMN29611967
Granny Smith | Quarter-inch terminal buds | 30,394,263 | 9,179,067,426 | 8,512,945,196 | 140 | SAMN29611968
Granny Smith | Flower buds | 29,735,514 | 8,980,125,228 | 8,364,532,017 | 140 | SAMN29611969
Granny Smith | Open flowers | 34,303,317 | 10,359,601,734 | 9,603,420,430 | 140 | SAMN29611970
WA 38 | Fruitlet stage 1 | 45,284,208 | 13,675,830,816 | 12,831,991,620 | 141 | SAMN29611978
WA 38 | Fruitlet stage 2 | 25,486,256 | 7,696,849,312 | 7,261,195,330 | 142 | SAMN29611979
WA 38 | Budding leaves | 39,339,589 | 11,880,555,878 | 11,017,185,994 | 140 | SAMN29611980
WA 38 | Expanding leaves | 34,784,980 | 10,505,063,960 | 9,719,694,010 | 139 | SAMN29611981
WA 38 | Roots from tissue culture | 33,935,508 | 10,248,523,416 | 9,426,506,860 | 138 | SAMN29611982
WA 38 | Quarter-inch terminal buds | 88,677,165 | 26,780,503,830 | 24,913,194,030 | 140 | SAMN29611983
WA 38 | Flower buds | 23,170,354 | 6,997,446,908 | 6,588,921,074 | 142 | SAMN29611984
WA 38 | Open flowers | 35,274,250 | 10,652,823,500 | 9,941,466,644 | 141 | SAMN29611985
D’Anjou | Fruitlet stage 1 | 89,462,306 | 27,017,616,412 | 25,459,693,894 | 142 | SAMN29611986
D’Anjou | Fruitlet stage 2 | 48,481,031 | 14,641,271,362 | 13,921,844,851 | 143 | SAMN29611987
D’Anjou | Budding leaves | 29,823,484 | 9,006,692,168 | 8,442,259,663 | 141 | SAMN29611988
D’Anjou | Expanding leaves | 57,920,009 | 17,491,842,718 | 16,460,531,509 | 142 | SAMN29611989
D’Anjou | Quarter-inch terminal buds | 40,966,825 | 12,371,981,150 | 11,476,090,088 | 140 | SAMN29611990
D’Anjou | Flower buds | 29,183,231 | 8,813,335,762 | 8,264,473,671 | 141 | SAMN29611991
D’Anjou | Open flowers | 32,128,369 | 9,702,767,438 | 8,996,878,963 | 140 | SAMN29611992

bp: base pairs; NCBI: National Center for Biotechnology Information; SRA: Sequence Read Archive.

Repeat analysis and gene annotation

Repetitive elements on both haplomes were annotated using EDTA v2.0.0 [47] with the flags “--genome, --anno 1, --sensitive=1”. To supplement ab initio gene predictions, extensive extrinsic homology evidence is needed for gene annotation.
Thus, we downloaded existing RNA-seq data for ‘Honeycrisp’ apples from the National Center for Biotechnology Information (NCBI) using the Sequence Read Archive (SRA) toolkit v2.9.6-1 (SRX3408575, SRX5369275, SRX5369276, SRX5369290, SRX5369299, SRX5369300, SRX5369302, SRX8712695 and SRX8712718) [48–50], and combined these with the RNA-seq data generated for this project (described above). We de novo assembled the two sets of RNA transcripts separately using Trinity v2.13.2 (RRID: SCR_013048) [51], with the flag “--trimmomatic” to filter the reads for quality. Because the newly generated RNA-seq data were strand-specific, for these we also used the flag “--SS_lib_type RF”. We identified open reading frames using TransDecoder v5.5.0 (RRID: SCR_017647) [52]. Gene annotation was performed using BRAKER2 v2.1.6 (RRID: SCR_018964) [53], which we ran twice, once with the RNA-seq data and once with the protein databases. For the RNA-seq run, we first filtered the data for adapters and quality using TRIMMOMATIC v0.39 (RRID: SCR_011848) [54] with leading and trailing values of 3, a sliding window of 30, a jump of 10, and a minimum remaining read length of 40. We next mapped these data to the genome using STAR v2.7.9a [55] and combined the BAM files using SAMtools (RRID: SCR_005227) [56]. For the homology-based annotation in BRAKER2, we used gene models from Malus domestica ‘Gala’ diploid v2, M. sieversii diploid v2 [27], M. baccata v1 [57], M. domestica ‘Golden Delicious’ double haploid v1 (GDDH13) [31], Pyrus communis ‘Bartlett’ double haploid v2 [58], and our de novo assemblies, in addition to the Viridiplantae OrthoDB (RRID: SCR_011980) [59]. We filtered the resulting AUGUSTUS [53] output for gene models with full hints (gene model support) and combined the two runs using TSEBRA v1.0.3 [60]. Finally, we removed any transcript/gene with ≥90% softmasking, i.e., mainly repeat sequences.
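The final softmasking filter can be sketched in a few lines; the transcript records below are hypothetical stand-ins for real annotated genomic spans.

```python
# Sketch of the final filtering step: drop any transcript whose genomic span
# is >= 90% softmasked (lowercase) sequence, i.e. mostly repeats. The toy
# transcript records are hypothetical stand-ins for real annotations.

def softmasked_fraction(seq):
    """Fraction of lowercase (repeat-masked) bases in a genomic span."""
    return sum(c.islower() for c in seq) / len(seq)

def filter_transcripts(transcripts, max_masked=0.90):
    """Keep transcripts whose softmasked fraction is below the threshold."""
    return {tid: seq for tid, seq in transcripts.items()
            if softmasked_fraction(seq) < max_masked}

transcripts = {
    "t1": "ATG" + "atg" * 99,          # 99% masked -> removed
    "t2": "ATGGCC" * 50 + "atg" * 10,  # ~9% masked -> kept
}
kept = filter_transcripts(transcripts)
print(sorted(kept))  # ['t2']
```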
Genome annotation completeness of our genome and other Malus genomes was assessed using BUSCO v5.2.2 (RRID: SCR_015008) [44] with the “eudicots_odb10” database for comparative purposes. The final ‘Honeycrisp’ gene sets from both haplomes were annotated with InterProScan v5.44–79.0 (RRID: SCR_005829) [61, 62], including a search against all the available InterPro databases and Gene Ontology (GO) [63, 64] prediction. In addition, genes were searched against the 26Gv2.0 OrthoFinder v1.1.5 (RRID: SCR_017118) [65] gene family database using both the BLASTp (RRID: SCR_001010) [66] and HMMscan (RRID: SCR_005305) [67] classification methods with the GeneFamilyClassifier tool from PlantTribes 2 [68]. This analysis provided additional functional annotation information, including gene counts of scaffold taxa, superclusters at multiple clustering stringencies, and functional annotations pulled from various public genomic databases.

Comparative genomics

Similarities in length and structural variation between the two haplomes were determined by running MUMmer v4.0 (RRID: SCR_018171) [69] and Assemblytics [70]. To identify the shared and unique gene families among Malus species and cultivars, genes from the six publicly available Malus genomes (Table 2) were integrated into the PlantTribes 2 gene model database (26Gv2.0) using the same method described above. The overlapping orthogroups (with at least 30 counts in the category) among the eight Malus annotations (including both haplomes of ‘Honeycrisp’) were calculated and visualized with an upset plot generated by TBtools v1.0986982 [71].

Table 2. Comparison of genomic features and assembly statistics of the current ‘Honeycrisp’ assembly and previously published apple genomes.

Feature | ‘Honeycrisp’ (this work) | ‘Gala’, M. sieversii, M. sylvestris (all diploid) [72] | HFTH1, ‘Hanfu’ (triploid) [28] | GDDH13, ‘Golden Delicious’ (double haploid) [29] | ‘Golden Delicious’ (diploid) [24]
Assembly: haploid genome size (Mbp) | 660–674 | 666–679 | 658.9 | 651 | 742
Assembly: scaffold N50 (Mbp) | 31.6–32.8 | 6.1–21.8 | 6.99 | 5.5 | 16
Assembly: complete BUSCO (%) | 98.6–98.7 | 98.0–98.8 | 98.6 | 98.0 | 82.0
Annotation: protein-coding genes | 47,563–48,655 | 44,691–44,847 | 44,677 | 42,140 | 57,386
Annotation: complete BUSCO (%) | 96.8–97.4 | 94.6–95.4 | 93.6 | 96.1 | 68.0
Gene family: orthogroups in 26Gv2 | 10,351–10,367 | 10,044–10,115 | 9,974 | 10,117 | 8,824

BUSCO: Benchmarking Universal Single-Copy Orthologs; Mbp: megabase pairs.

Data validation and quality control

A haplotype-phased chromosome-scale assembly

In total, nearly 55× coverage of PacBio HiFi reads and nearly 200× coverage of Dovetail Omni-C reads (Table 3) was generated. This included 2,543,518 HiFi reads with an average length of 14,655 base pairs (bp); ∼91% of reads were ≥10,000 bp. Two phased haplomes, haplome A (HAP1) and haplome B (HAP2; the two naming schemes are used interchangeably below), were assembled and validated by inspection of the Omni-C contact maps (Figure 2). Both haplomes are highly contiguous and of similar size. HAP1 is 674 megabase pairs (Mbp) in length, contained in 473 contigs with a contig N50 of 32.8 Mbp, whereas HAP2 is 660 Mbp in length, contained in 215 contigs with a contig N50 of 31.6 Mbp (Table 4). No mis-joins requiring manual breaks were identified in the assemblies. For HAP1, a total of 13 joins were made to build the final assembly into 17 chromosomes, with 95.4% of the assembled sequence contained in the 17 pseudomolecules representing chromosomes. Nineteen joins were made for HAP2, with 98.2% of the assembled sequence in the 17 pseudomolecules.
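The quality values (QV) reported in the assembly statistics are Phred-scaled per-base error estimates. Below is a minimal sketch of the conversion Merqury applies; the k-mer-based error-rate estimation itself is Merqury's job, so only the scale mapping is shown.

```python
# Merqury-style consensus quality value (QV): a Phred-scaled per-base error
# estimate, QV = -10 * log10(error_rate). Sketch of the conversion only; the
# underlying k-mer-based error-rate estimation is what Merqury computes.
import math

def qv_from_error_rate(error_rate):
    return -10 * math.log10(error_rate)

def error_rate_from_qv(qv):
    return 10 ** (-qv / 10)

# QV 64.5 (Haplome A) corresponds to roughly one base error per ~2.8 Mbp.
print(f"{error_rate_from_qv(64.5):.2e}")  # 3.55e-07
```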
Based on the Merqury k-mer analysis (Figure 3), the HAP1 assembly had a k-mer completeness of 82.7% (quality value [QV] 64.5), the HAP2 assembly 83.0% (QV 66.7), and the combined assemblies 98.6% (QV 65.5) (Table 4). BUSCO completeness was 98.6% for HAP1 and 98.7% for HAP2, suggesting high genome completeness for both haplomes, comparable or superior to other high-quality apple genome assemblies (Table 2). The two haplomes are structurally similar to each other (Figure 4). Compared with the assembly statistics of previously published apple genomes, the current ‘Honeycrisp’ assemblies are the most contiguous to date (Table 2).

Table 3. Overview of PacBio HiFi and Omni-C sequencing data generated for the ‘Honeycrisp’ genome assembly.

Library | Sequencing | Length (nucleotides) | Number of reads
JNQN | Omni-C | 150 | 951,241,272
HiFi-1 | PacBio HiFi | 14,881* | 1,088,992
HiFi-2 | PacBio HiFi | 14,429* | 1,454,526

*Average length.

Table 4. Summary of ‘Honeycrisp’ genome assembly statistics.

Assembly | Length (bp) | Contigs | Longest contig (bp) | N50 (bp) | L50 | QV | k-mer completeness (%) | BUSCO (%)
Haplome A | 674,476,353 | 473 | 55,653,390 | 32,818,622 | 9 | 64.5 | 82.7 | 98.6
Haplome B | 660,238,068 | 215 | 56,154,892 | 31,578,807 | 9 | 66.7 | 83.0 | 98.7
Combined | – | – | – | – | – | 65.5 | 98.6 | –

QV: quality value.

Figure 2. Omni-C contact maps of the assembled chromosome-length scaffolds of the 17 chromosomes. (A) Haplome A and (B) haplome B of the ‘Honeycrisp’ genome.

Figure 3. Histogram of k-mer multiplicity of sequence reads. (A) Haplome A and (B) haplome B of the ‘Honeycrisp’ genome assemblies. k-mer multiplicity (x-axis) is plotted against k-mer counts (y-axis) to estimate the heterozygosity, copy number, sequencing depth, and completeness of a genome using Merqury v1.3 [45]. Colors represent the number of times each k-mer is found in the genome assembly.

Figure 4. Synteny comparison of ‘Honeycrisp’ Haplome 1 (HAP1), ‘Honeycrisp’ Haplome 2 (HAP2) from this study, and ‘Gala’ [27] genomes.
GENESPACE [43] was used for the synteny comparison.

Genome annotation

The yield of Illumina transcriptome sequencing data from fruit, leaf, and flower tissues of apples and pear ranged from approximately 7 to 27 gigabase pairs (Gbp) per sample (Table 1). Nearly 62% of both haplomes was annotated as repetitive DNA, mostly comprising long terminal repeat (LTR) retrotransposons (Table 5). A total of 47,563 genes were annotated in HAP1 and 48,655 in HAP2, slightly more than in other published Malus annotations (Table 2). Complete BUSCO scores of the protein annotations are 96.8% for HAP1 and 97.4% for HAP2, the highest completeness among all publicly available Malus genome annotations (Table 2). In HAP1 and HAP2, respectively, 72.85% and 68.88% of the predicted transcripts were annotated with InterPro terms, 68.58% and 64.94% with Pfam domains, and 51.04% and 48.76% with at least one GO term. In the PlantTribes 2 classification, 91.11% and 85.50% of the predicted transcripts from HAP1 and HAP2, respectively, were assigned to pre-computed orthogroups.

Table 5. Summary of repetitive element annotation in haplome A and haplome B of the ‘Honeycrisp’ genome assemblies.
Class | Superfamily | Haplome A (%) | Haplome B (%)
LTR | Copia | 9.73 | 9.60
LTR | Ty3 | 20.29 | 17.80
LTR | unknown | 14.89 | 16.86
TIR | CACTA | 2.21 | 1.95
TIR | Mutator | 4.16 | 4.25
TIR | PIF Harbinger | 2.43 | 2.60
TIR | Tc1_Mariner | 0.15 | 0.27
TIR | hAT | 2.30 | 2.31
TIR | polinton | – | 0.01
nonLTR | LINE_element | 0.18 | 0.17
nonLTR | unknown | 0.09 | 0.18
nonTIR | helitron | 2.95 | 3.18
– | repeat region | 2.91 | 2.78
Total | | 62.43 | 61.97

As plant genomes are generated at an unprecedented pace, we developed the following gene naming convention to avoid potential ambiguity: Maldo.hc.v1a1.ch10A.g00001.t1, where Maldo denotes Malus domestica; hc is the cultivar, ‘Honeycrisp’; v1a1 indicates the first assembly and first annotation of this genome; ch10A identifies the gene as annotated on chromosome 10 (an unplaced scaffold would instead be indicated by “sc”) in haplome A (HAP1) (versus haplome B (HAP2)); g00001 is a five-digit gene identifier; and t1 is the transcript number of the gene.

Gene family analysis

Gene family evaluation was performed using PlantTribes 2 and its 26Gv2-scaffold orthogroup database, which contains representative protein-coding sequences from most major land plant lineages. A total of 11,263 unique orthogroups (OGs) were identified across all eight Malus annotations (including the two ‘Honeycrisp’ haplomes) investigated. ‘Honeycrisp’ transcripts were assigned to 10,351 (HAP1) and 10,367 (HAP2) orthogroups, similar to ‘Gala’ and GDDH13 (Table 2 and Figure 5). We further investigated orthogroups that are shared and unique among the eight Malus annotations. Most orthogroups (7,645) are shared by all the genomes, and 9,279 orthogroups are shared by both ‘Honeycrisp’ haplomes and five other genomes (Figure 5). This comparison indicates that the ‘Honeycrisp’ annotation captured genes in virtually all the Malus gene families. We also found 54 orthogroups unique to ‘Honeycrisp’ (i.e., shared by the two ‘Honeycrisp’ haplomes only) and 35 and 32 orthogroups unique to each ‘Honeycrisp’ haplome (Figure 5).
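The gene naming convention described above is regular enough to parse mechanically. A hedged sketch follows; the regular expression encodes only the fields as described in the text and is not an official parser.

```python
# Sketch of a parser for the Maldo.hc.v1a1.ch10A.g00001.t1 naming convention.
# Field meanings follow the text; the pattern itself is an illustration.
import re

GENE_ID = re.compile(
    r"(?P<species>[A-Z][a-z]{4})"       # Maldo = Malus domestica
    r"\.(?P<cultivar>[a-z0-9]+)"        # hc = 'Honeycrisp'
    r"\.(?P<version>v\d+a\d+)"          # v1a1 = assembly 1, annotation 1
    r"\.(?P<seq>(ch\d+[AB]|sc\w+))"     # chromosome + haplome, or unplaced scaffold
    r"\.(?P<gene>g\d{5})"               # five-digit gene identifier
    r"\.(?P<transcript>t\d+)$"          # transcript number
)

def parse_gene_id(gene_id):
    m = GENE_ID.match(gene_id)
    if not m:
        raise ValueError(f"not a valid gene ID: {gene_id}")
    return m.groupdict()

fields = parse_gene_id("Maldo.hc.v1a1.ch10A.g00001.t1")
print(fields["seq"], fields["gene"])  # ch10A g00001
```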
These orthogroups could provide valuable insight into the molecular mechanisms underlying genotype-specific traits.

Figure 5. The ‘Honeycrisp’ genome captured the vast majority of Malus gene families. Black dots indicate the presence of gene families and gray dots their absence. Yellow horizontal bars represent the number of orthogroups in each genome; black vertical bars represent the number of orthogroups in each category. HC: ‘Honeycrisp’ (this work); GDDH13: Malus domestica GDDH13; Gala_hap: M. domestica ‘Gala’ haploid; M.si_hap: M. sieversii haploid; M.sy_hap: M. sylvestris haploid; HFTH: M. domestica HFTH1; GDv1: M. domestica ‘Golden Delicious’ v1.

Re-use potential

This fully phased, high-quality, chromosome-scale genome of ‘Honeycrisp’ apple adds to the toolbox for apple genetic research and breeding. It will enable genetic mapping, gene identification, and the development of molecular markers linked to disease and pest resistance, abiotic stress tolerance and adaptation, and horticulturally relevant harvest and postharvest fruit quality traits for use in apple breeding programs. Ultimately, the addition of high-quality genomic resources for ‘Honeycrisp’ can lead to enhanced orchard and supply chain management for many other apple cultivars, promoting the future sustainability of the pome fruit industry.

Data Availability

The whole-genome sequence data generated in this study have been deposited in the NCBI database under BioProject ID PRJNA791346. PacBio HiFi reads and Hi-C reads are deposited in NCBI under SRA accession numbers SAMN24287034 and SAMN29611953, respectively. Transcriptomic data generated in this study for genome annotation are deposited in NCBI under SRA accession numbers SAMN29611954 to SAMN29611992.
The Maldo.hc.v1a1 ‘Honeycrisp’ genome assembly, gene annotation, and functional annotation for both haplomes can be accessed via the GigaScience GigaDB repository [73], and will also be made available in the Genome Database for Rosaceae, which is currently in progress.

Declarations

List of abbreviations
BLAST: Basic Local Alignment Search Tool; bp: base pair; BUSCO: Benchmarking Universal Single-Copy Orthologs; Gbp: gigabase pairs; GO: Gene Ontology; HMW: high molecular weight; JBAT: Juicebox Assembly Tools; LTR: long terminal repeat; NCBI: National Center for Biotechnology Information; OG: orthogroup; QV: quality value; RIN: RNA integrity number; SMRT: Single Molecule Real Time; TRF: Tandem Repeat Finder.

Ethical approval
Not applicable.

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no competing interests.

Funding
This research was funded by the Washington Tree Fruit Research Commission (grant number AP-19-103), LH; the United States Department of Agriculture, Agricultural Research Service, LH; and the New York State Department of Agriculture & Markets, Apple Research & Development Program (ARDP), grant number CM04068AQ, AK.

Authors’ contributions
AK and LH conceptualized, designed, and managed the project. SBC, HZ, AH, and HH prepared DNA and RNA and constructed RNA-seq libraries for sequencing. SBC and HZ performed genome and transcriptome sequence analysis and interpretation. SBC, HZ, AS, HH, AH, LH, and AK drafted, revised, and finalized the manuscript. All authors read and approved the final version.

Acknowledgements
We acknowledge Della Cobb-Smith and Jugpreet Singh for their help with sample collection and genomic DNA extraction.
A team of researchers from Cornell University has sequenced the Honeycrisp apple genome, a boon for scientists and breeders working with this popular and economically important cultivar. Sequenced with state-of-the-art technologies, the genome—available on an open-source basis for anyone to access—provides a valuable resource for understanding the genetic basis of important traits in apples and other tree fruit species, which can be used to enhance breeding efforts, according to the paper. The U.S. apple industry is worth $23 billion annually, and Honeycrisp is its most valuable cultivar, bringing growers roughly twice the value per pound of the second-most valuable cultivar, Fuji. Due to its favorable traits, including crispness, flavor, cold-hardiness and resistance to the apple scab fungal disease, breeders have used Honeycrisp as a parent in nine new cultivars on the market, including the Cornell University-developed Snapdragon. At the same time, growing Honeycrisp can be challenging. "Although it has many positive traits, it's one of the most difficult apple cultivars to grow in the production system in orchards; it suffers from many physiological and post-harvest issues," said Awais Khan, associate professor in the School of Integrative Plant Science at Cornell AgriTech and first and co-corresponding author of the paper, "A Phased, Chromosome-scale Genome of Honeycrisp Apple," published last month in the journal Gigabyte. For starters, Honeycrisp trees have difficulty getting enough nutrients on their own and require a specific nutrient management program for good yields and health, Khan said. Without such management, the trees commonly develop "zonal leaf chlorosis," where leaves yellow and curl due to carbohydrate and nutrient imbalances. Honeycrisp apples are also susceptible to disorders such as bitter pit, due to calcium imbalances, and bitter rot, a fungal infection.
Such issues are fundamentally genetically controlled, though improper handling and post-harvest storage can make them worse. "If we don't know the genome and the genes in Honeycrisp, then we cannot specifically target and select for favorable traits and select out unfavorable traits through breeding," Khan said. Advances in genetic sequencing technology made it possible to sequence, assemble, and publish the Honeycrisp genome in a short time. In general, the apple genome, first sequenced with the Golden Delicious variety in 2010, is complex, large and heterozygous, meaning there are many versions of specific genes. There are also a lot of repeated sequences in the apple genome. In 2010, when the first apple genome was published, technologies could read only short fragments of DNA at a time, say, 150 letters. Scientists would then overlap sequences by perhaps 50 letters and, like a puzzle, use computational programs and algorithms to match the end of one read with the start of another. This allowed them to piece together longer strings of DNA to identify entire genes and eventually the genome. But one problem with this method is that repeated elements can confuse the process. In this study, the researchers used a combination of current sequencing technologies—called PacBio HiFi, Omni-C and Illumina—that produce long reads of genetic sequence. "We can sequence the whole larger fragment of the DNA sequence continuously, so we don't have these big challenges of computational biology or bioinformatics to assemble and find the overlapping sequences," Khan said. The long-read sequencing also helped them tease apart the apple's diploid genome; like humans, apples have two sets of chromosomes, one from each parent. The new technologies allowed the researchers to sequence two single sets of chromosomes, which in future work may be used to differentiate between the specific genetic contributions of each parent.
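The overlap idea described above (matching the end of one read against the start of the next) can be illustrated with a toy sketch of the core string operation; real assemblers do this at massive scale, with graph algorithms and error tolerance.

```python
# Toy illustration of read overlap-and-merge: if the suffix of one "read"
# matches the prefix of another, splice them into a longer sequence. Real
# assemblers use overlap/de Bruijn graphs, not this naive scan.

def merge_reads(a, b, min_overlap=5):
    """Merge read b onto read a if a's suffix matches b's prefix."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None  # no sufficiently long overlap found

# Two short "reads" that overlap by 7 letters:
left = "ACGTTGCAGGTCA"
right = "CAGGTCATTGAC"
print(merge_reads(left, right))  # ACGTTGCAGGTCATTGAC
```

Repeated elements confuse exactly this step: when the same 7-mer occurs in many places, the merge is ambiguous, which is why long reads that span whole repeats simplify assembly.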
Using these advanced methods, the Honeycrisp genome covered 97% of all the protein-coding genes. By comparison, the 2010 Golden Delicious genome assembly covered only 68% of the genes. This research is a collaboration between Cornell University; Alex Harkess at the HudsonAlpha Institute for Biotechnology and Auburn University; and Loren Honaas at the U.S. Department of Agriculture Agricultural Research Service (USDA-ARS) Tree Fruit Research Lab in Wenatchee, Washington.
10.46471/gigabyte.69
Physics
Deep Space Atomic Clock moves toward increased spacecraft autonomy
E. A. Burt et al, Demonstration of a trapped-ion atomic clock in space, Nature (2021). DOI: 10.1038/s41586-021-03571-7 Journal information: Nature
http://dx.doi.org/10.1038/s41586-021-03571-7
https://phys.org/news/2021-07-deep-space-atomic-clock-spacecraft.html
Abstract Atomic clocks, which lock the frequency of an oscillator to the extremely stable quantized energy levels of atoms, are essential for navigation applications such as deep space exploration 1 and global navigation satellite systems 2 , and are useful tools with which to address questions in fundamental physics 3 , 4 , 5 , 6 . Such satellite systems use precise measurement of signal propagation times determined by atomic clocks, together with propagation speed, to calculate position. Although space atomic clocks with low instability are an enabling technology for global navigation, they have not yet been applied to deep space navigation and have seen only limited application to space-based fundamental physics, owing to performance constraints imposed by the rigours of space operation 7 . Methods of electromagnetically trapping and cooling ions have revolutionized atomic clock performance 8 , 9 , 10 , 11 , 12 , 13 . Terrestrial trapped-ion clocks operating in the optical domain have achieved orders-of-magnitude improvements in performance over their predecessors and have become a key component in national metrology laboratory research programmes 13 , but transporting this new technology into space has remained challenging. Here we show the results from a trapped-ion atomic clock operating in space. On the ground, NASA’s Deep Space Atomic Clock demonstrated a short-term fractional frequency stability of 1.5 × 10 −13 / τ 1/2 (where τ is the averaging time) 14 . Launched in 2019, the clock has operated for more than 12 months in space and demonstrated there a long-term stability of 3 × 10 −15 at 23 days (no drift removal), and an estimated drift of 3.0(0.7) × 10 −16 per day. Each of these exceeds current space clock performance by up to an order of magnitude 15 , 16 , 17 . The Deep Space Atomic Clock is particularly amenable to the space environment because of its low sensitivity to variations in radiation, temperature and magnetic fields. 
This level of space clock performance will enable one-way navigation in which signal delay times are measured in situ, making near-real-time navigation of deep space probes possible 18.

Main

All space clocks in use today employ atomic beams or gas cells to confine atoms 15, 16, 17. These clocks have short-term stabilities ranging from 1 × 10 −12 / τ 1/2 to 10 × 10 −12 / τ 1/2 , whereas long-term (greater than a day) stability, usually characterized by drift, is between 1 × 10 −15 and 10 × 10 −15 per day. Long-term clock autonomy, highly desirable for deep space and global navigation satellite system (GNSS) applications 19, is limited by drift, which in cell clocks is usually caused by wall collisions. Trapped-ion clocks 20 solve this by electromagnetically confining atoms, thereby eliminating wall collisions. Recently, a different type of beam clock was demonstrated in space with an inferred short-term stability of 3 × 10 −13 / τ 1/2 (ref. 21), but no long-term stability was reported. Because of the advantages to clock performance provided by atom trapping, many primary atomic clocks used in national metrology laboratories use some form of trapping 12, 13. Motivated by the low instability of trapped-ion clocks and by their potentially very small size, NASA's Jet Propulsion Laboratory (JPL) embarked on a series of mercury ion clock development projects aimed at ultimately producing a space-qualified version. Demonstrations in the laboratory achieved a stability of 2 × 10 −14 / τ 1/2 (ref. 10) and drifts as low as 2.7 × 10 −17 per day (no drift removal) 11. The approach followed at JPL did not use lasers, cryogenics or microwave cavities, enabling the technology to be robust and relatively small, and to consume less than 50 W of power 10, 22, 23, 24.
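To see why drift matters for one-way navigation, a back-of-the-envelope sketch helps: a clock offset model x(t) = y0·t + (1/2)·d·t² converts directly into ranging error via the speed of light. The numbers below (a drift of 1 × 10 −15 per day and zero initial frequency offset) are illustrative assumptions at the scale quoted in the text, not mission values.

```python
# Back-of-the-envelope sketch: accumulated clock offset x(t) = y0*t + 0.5*d*t**2
# (initial fractional frequency offset y0, linear fractional drift d), turned
# into one-way ranging error via the speed of light. The drift value and
# y0 = 0 are illustrative assumptions, not mission parameters.
C = 299_792_458.0   # speed of light, m/s
DAY = 86_400.0      # seconds per day

def time_error(t_seconds, y0=0.0, drift_per_day=1e-15):
    d = drift_per_day / DAY            # fractional frequency drift per second
    return y0 * t_seconds + 0.5 * d * t_seconds**2

def ranging_error(t_seconds, **kw):
    return C * time_error(t_seconds, **kw)

# After 10 days of uncorrected 1e-15/day drift:
t = 10 * DAY
print(f"{ranging_error(t):.2f} m")  # 1.30 m
```

Meter-level one-way ranging error after days without ground contact is what makes sub-10 −15 drift relevant for autonomous deep space navigation.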
Following this success, in 2011 NASA's Space Technology Mission Directorate initiated the Deep Space Atomic Clock (DSAC) mission to demonstrate a trapped-ion atomic clock in space, with the goals of 2 × 10 −13 / τ 1/2 short-term stability and a non-drift-removed 3 × 10 −15 stability at one day. This technology demonstration mission is designed to address the unique needs of deep space navigation 25, but given the clock's low size, mass and power, it is also well suited for GNSS applications in Earth orbit. The DSAC payload, consisting of a trapped-ion clock and its 1 × 10 −13 -class ultrastable local oscillator (LO), together with a Global Positioning System (GPS) receiver to aid comparisons with clocks on the ground, was launched in June 2019 aboard the General Atomics Orbital Test Bed spacecraft into a 720-km, 100-min Earth orbit (Fig. 1) and first powered on in space in August 2019. The 2-year DSAC mission is designed to demonstrate clock operability, characterize performance of the technology in space and carry out several navigation experiments to demonstrate its utility in that domain.

Fig. 1: The Deep Space Atomic Clock launch. Left to right: the DSAC clock (credit: JPL); the DSAC instrument, including GPS receiver and ultrastable oscillator (USO), integrated in the spacecraft payload (credit: General Atomics); and the launch of the SpaceX Falcon Heavy STP-2 carrying DSAC into space (credit: NASA).

DSAC trapped-ion clock

The DSAC ion clock, payload configuration and ground testing, as well as its navigational uses and some scientific applications, have been previously described 14, 26. Here we summarize these and then focus on results obtained after launch. At the heart of the DSAC clock are two linear radiofrequency Paul ion traps 27, 28: one having four rods (the quadrupole or load trap), the other having sixteen rods (the multipole trap). Figure 2 shows a schematic cross-section of the vacuum chamber, revealing the traps.
A radiofrequency voltage is applied across alternating trap rod pairs, with the field strength decreasing linearly towards the trap axis. Ions seek the weakest field and are thereby trapped in the centre. The other two images in Fig. 2 show contour plots of field strength for the load trap and multipole trap in a plane perpendicular to this axis. In the load trap, the non-zero time-average force on the ions from strong field to weak field regions gives rise to an effective quadratic potential proportional to E 2 , where E is the electric field magnitude. Ions in the trapping region execute oscillatory motion at about 50 kHz in the radial direction (axially, ions are confined by d.c. voltage endcaps and move ballistically). The DSAC clock confines a cloud of up to 10 7 ions. The inward trapping force is partially balanced by Coulomb repulsion pushing ions outwards. As a result, the ion cloud occupies a region that extends radially about 1 mm, such that the spatial- and ensemble-averaged radiofrequency field experienced by the trapped ions is non-zero. This region of confinement is smaller than the interrogation wavelength, so the first-order Doppler shift, proportional to − v / c (where v is ion velocity and c is the speed of light), is eliminated 29 . While ⟨ v ⟩ = 0 for trapped ions, ⟨ v 2 ⟩ is of order 10 5 m 2 s −2 , resulting in a relativistic second-order Doppler shift of about 1 × 10 −12 in the load trap. The main advantage of the multipole trap is that the radial potential created by this configuration approaches that of a square well (see the contour plot on the right side of Fig. 2 ). Ions in the multipole trap experience a lower ⟨ E 2 ⟩ than in the load trap, which results in a lower ⟨ v 2 ⟩ and virtually eliminates the ion-number-dependent second-order Doppler shifts, a primary systematic effect in the clock 28 . Fig. 2: Clock vacuum chamber and traps. 
The centre image is a cross-section schematic of a DSAC-like vacuum chamber (light grey), showing the trap rods for the load trap used to initially load ions, prepare their initial internal state and read out their final state (on the left in the centre image) and for the multipole trap (dark grey on the right in the centre image) where sensitive microwave interrogation of the clock transition occurs. The optical window assemblies where state preparation light is brought in and signal light is detected are shown in cyan and magenta. A representation of the trapped-ion cloud lying on the load trap axis is shown in purple. The image on the left shows a contour plot of the amplitude of the quadrupole radiofrequency field applied to the load trap rods in a plane perpendicular to the rods. Dark blue corresponds to low field and red to high field. The four symmetric blue rectangles correspond to the trap rods. Ions are ‘weak-field seeking’, so the blue circle in the centre is the low-field region where the ions are trapped. Ions experience a radiofrequency amplitude that varies with the radial distance from the centre. In addition to thermal motion, ions oscillate at the trap radiofrequency, a process called ‘micro-motion’ that results in a second-order Doppler frequency shift in the clock transition. The image on the right shows a contour plot of the radiofrequency field amplitude in the multipole trap with the same colour designations. Note that the centre trapping region in the multipole trap (dark blue) corresponds to a considerably larger fraction of the volume inside the trap rods than for the load trap. This is representative of its near square-well potential. In the multipole trap, ions move quasi-ballistically with very little micro-motion until they come close to the trap rods, such that the time-averaged micro-motion and second-order Doppler shift are much smaller in the multipole trap. 
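The second-order Doppler shift quoted for the load trap can be sanity-checked from the standard fractional-shift formula Δf/f ≈ −⟨v²⟩/(2c²); with ⟨v²⟩ of order 10 5 m 2 s −2 this lands near the −1 × 10 −12 scale given in the text.

```python
# Sanity check of the second-order (relativistic) Doppler shift:
# delta_f / f ~= -<v^2> / (2 c^2), with <v^2> the mean-square ion velocity.
C = 299_792_458.0  # speed of light, m/s

def second_order_doppler(v_sq):
    """Fractional frequency shift for mean-square velocity v_sq (m^2/s^2)."""
    return -v_sq / (2 * C**2)

# <v^2> of order 1e5 m^2/s^2, as quoted for the load trap:
shift = second_order_doppler(1e5)
print(f"{shift:.2e}")  # -5.56e-13
```

The multipole trap's near square-well potential lowers ⟨E²⟩ and hence ⟨v²⟩, shrinking this shift and its dependence on ion number.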
To construct a clock based on the frequency associated with the difference between quantized atomic energy levels, a synthesized frequency referenced to an LO is used to interrogate the atoms or ions (see ‘Clock operation sequence’ in Methods). In the case of Hg + , this is 40.5 GHz driving the S 1/2 , F = 0, m F = 0 to S 1/2 , F = 1, m F = 0 magnetic-field-insensitive hyperfine clock transition (see Extended Data Fig. 2 ). If the synthesized LO frequency is correct, the atomic response (‘signal’) will be large, otherwise it will be small. This response is used to form a correction that is fed back to the LO, thereby closing the loop and transferring the atomic energy-level stability to the output of the LO (DSAC actually uses a more complex LO architecture) 30 . A 194-nm light source is used to prepare the initial state of the ions and to read out the state after microwave interrogation (see ‘Clock operation sequence’ in Methods for details). Collisions with a neon buffer gas keep trapped ions near room temperature so that thermal excitations do not eject them from the trap. During DSAC clock testing, a small neon depletion mechanism was discovered, which rendered clock operation in the multipole trap no longer viable 14 . Although operation in the load trap was still possible, the quadrupole load trap design in this version of the clock is not optimized for clock operation. As a result, neither the short-term nor the long-term stability in the load trap will be as good as is possible in the multipole trap, but long-term stability in the load trap was still more than sufficient to satisfy the mission requirement of 2 × 10 −14 at one day, and load-trap-based clock operation has been the primary mission configuration. On the ground, the DSAC clock demonstrated a stability of 2 × 10 −13 / τ 1/2 operating in the multipole trap with a maser as the LO and a stability of 2 × 10 −15 at one day 14 .
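The feedback principle just described (probe the atomic response on either side of the line, form an error signal, steer the LO) can be illustrated with a minimal lock-loop sketch. The Lorentzian line shape, linewidth, gain and two-point probing scheme below are illustrative assumptions, not the actual DSAC servo, which as noted above uses a more complex architecture:

```python
# Minimal sketch (assumed parameters, not the DSAC servo) of locking an LO
# to an atomic resonance by probing alternately on either side of the line.
f_atom = 40.5e9        # clock transition frequency, Hz
fwhm = 0.4             # resonance linewidth, Hz (illustrative)
gain = 0.1             # integrator gain (illustrative)

def signal(f_probe):
    # Lorentzian atomic response, peaked at f_atom
    return 1.0 / (1.0 + ((f_probe - f_atom) / (fwhm / 2)) ** 2)

f_lo = f_atom + 0.05   # LO starts 50 mHz off resonance
for _ in range(200):
    # probe half a linewidth above and below the current LO frequency;
    # the response difference is the error signal steering the LO
    err = signal(f_lo + fwhm / 2) - signal(f_lo - fwhm / 2)
    f_lo += gain * err * fwhm
print(abs(f_lo - f_atom))   # residual offset shrinks towards zero
```

Near line centre the error signal is proportional to the detuning, so each iteration shrinks the offset geometrically; the real clock applies the analogous correction once per interrogation cycle.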
Clock operation in space
Before running the clock continuously, it was thoroughly characterized in the space environment. The largest frequency shifts in the DSAC clock are due to magnetic effects, ion-number-dependent relativistic Doppler effects, and the effects of background gas collisions. Overall sensitivity to temperature changes, including electronic sensitivities, enters through these three fundamental paths. Here we only summarize these characterizations. See the Methods section for details on how each measurement was made. During each orbit, the magnetic field has a peak-to-peak variation of about 25 μT, over 100 times as much as in a typical laboratory setting. The combination of the low sensitivity of the Hg + clock transition to magnetic shifts (about 4 times lower than caesium, for example) and the three layers of magnetic shielding reduce associated frequency shifts to below the clock noise at the orbital period time. Variations in the clock frequency due to trace gas evolution are expected to be low (see ‘Background pressure evolution’ in Methods). In part, this is due to the high-temperature bakeout applied during clock preparation. Measurements placed a limit on frequency variations due to this effect of well below 1 × 10 −15 on a day timescale. Residual relativistic second-order Doppler effects in the load trap depend primarily on the number of ions trapped. The total effect is about −50 mHz (about −1 × 10 −12 ) from an empty to full trap (see ‘Thermal and Doppler effects’ in Methods). Therefore, an ion number stability of 0.1% is required to support clock stability at the 10 −15 level. This level of ion number stability is normally achieved after ion loading equilibrates with the residual trap loss mechanisms.
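The 0.1% ion-number requirement follows directly from the numbers above, assuming the Doppler shift scales linearly with the trapped-ion number (the Methods note deviations from linearity for large clouds, so this is the simplest case):

```python
# Empty-to-full second-order Doppler budget, from the values quoted above.
f0 = 40.5e9                       # clock transition frequency, Hz
total_shift_hz = -50e-3           # empty-to-full trap shift, Hz
fractional = total_shift_hz / f0  # about -1.2e-12, i.e. about -1e-12

# a 0.1% change in trapped-ion number then moves the clock fraction by:
delta = abs(fractional) * 1e-3
print(f"{delta:.1e}")             # ~1e-15, the targeted stability level
```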
The total temperature sensitivity of the DSAC instrument is a combination of several factors, including relativistic Doppler effects, electronics sensitivities, and stray temperature-dependent magnetic effects that have not yet been fully characterized (hereafter we will reserve ‘total temperature sensitivity’ to refer to this combination). When it was possible to disentangle these, the fundamental sensitivity of the clock was measured to be as low as −2.3(1.1) × 10 −15 °C −1 with no active thermal stabilization on the ion portion of the clock or electronics (the LO is temperature-stabilized, but this only affects short-term performance). This point is notable, as rarely, if ever, are space clocks able to operate without some form of temperature control. Orbital transits through the South Atlantic Anomaly expose the clock to a high radiation environment. This results in increased photomultiplier tube (PMT) counts and small changes in the LO frequency drift rate, but the DSAC control algorithm is designed to filter these out, so there is no observed impact on clock output (see Methods ).
Clock performance in space
At clock start up, the LO frequency can be far off the correct value, and it must be put on frequency before the clock can function normally—a process called LO acquisition 22 . Once the LO is acquired, the clock requires about 12 h to reach thermal equilibrium. During this time, the trapped-ion number also slowly approaches its equilibrium value. The expected short-term performance for the clock operating in the repurposed load trap is 7 × 10 −13 / τ 1/2 , greater than the instability achieved in the optimized multipole trap but still below the flight measurement system noise (see ‘Stability measurement system noise: GPS’ in Methods), for averaging times of less than a day. For all averaging times, the measurement system places at least an upper bound on clock performance 31 .
Figure 3 shows 52 days of continuous frequency offsets for the DSAC clock relative to the USNO master clock using GPS. Frequency offsets include both clock noise and GPS measurement system noise. The USNO master clock consists of a hydrogen maser at the US Naval Observatory that is steered to Coordinated Universal Time (UTC). For 4 days of the data, when the USNO clock was not available, the reference clock from the Royal Observatory of Belgium was used instead (see Methods ). Clock drift is estimated using a linear least-squares fit to the non-drift-removed frequency offsets from the reference, which gives a slope of +3.0(0.7) × 10 −16 per day.
Fig. 3: Clock frequency offsets in space. Frequency offsets versus time of the DSAC instrument in space relative to the USNO master clock, which is a hydrogen maser steered to UTC. In these data, DSAC is operating in the re-purposed load trap. The black line is a least-squares fit to a straight line, which shows a drift of +3.0(0.7) × 10 −16 per day, an order of magnitude lower than other space clocks.
Figure 4 shows the measured Allan deviation 32 for the same run. The deviation from white frequency noise (proportional to 1/ τ 1/2 ) after 10 5 s is consistent with slow thermal effects related to variations in the host spacecraft orientation relative to the Sun (see Methods ) but can also be caused by the reference clock instability. The flicker noise floor of the clock and measurement system is not known, so flicker noise may also contribute on these timescales. A frequency stability of 3 × 10 −15 corresponding to a time deviation of less than 4 ns is shown at 23 days (about 2 × 10 6 s). Over many separate runs, the clock stability at one day varied between 3 × 10 −15 and 5 × 10 −15 , depending on the temperature variations for a given run.
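The drift figure quoted for Fig. 3 comes from a linear least-squares fit to daily frequency offsets. A sketch on synthetic data, assuming white noise at roughly the reported one-day stability (the noise level and random seed are illustrative, not the flight data):

```python
import numpy as np

# Synthetic 52-day run: a linear drift of 3e-16 per day buried in white
# frequency noise of ~4e-15 per daily point (assumed, near the reported
# one-day stability), recovered with a least-squares straight-line fit.
rng = np.random.default_rng(0)
days = np.arange(52.0)
true_drift = 3.0e-16                      # per day, as reported for Fig. 3
offsets = true_drift * days + rng.normal(0.0, 4e-15, days.size)

slope, intercept = np.polyfit(days, offsets, 1)
print(f"{slope:.1e} per day")             # close to the injected drift
```

With 52 daily points, the statistical slope uncertainty of such a fit is a few parts in 10¹⁷, comparable to the ±0.7 × 10⁻¹⁶ quoted.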
This performance level is all the more notable because, in addition to being taken in the lower-performing load trap, the data were taken without temperature control of the ion trap and associated electronics during a time when the baseplate temperature varied by 9 °C. It is important to note that the noise floor of the measurement system is not precisely known for operation in space, where one platform is moving and is exposed to a wide-ranging thermal environment, but it is possible that it is contributing at the level of the long-term performance observed (see ‘The stability measurement system: GPS’ in Methods). Thus, the measurements stated here contain noise from the clocks on either end of the comparison as well as from the measurement system and so are an upper bound for the DSAC clock alone 31 . This long-term performance, already more than an order of magnitude better than existing space clocks, will enable one-way navigation, allowing control of deep space satellites in near real-time 18 as well as new planetary science measurements 26 .
Fig. 4: Clock stability in space. Allan deviation of the DSAC clock relative to the USNO master clock (including GPS measurement system noise) while operating in the repurposed load trap (solid black line, with dashed black lines indicating 68% confidence intervals). The USNO master clock is a hydrogen maser periodically steered to UTC. The low Allan deviation at long times is indicative of very low drift and demonstrates the applicability of DSAC technology to applications requiring autonomy, such as GNSS and deep space navigation. Temperature servo loops that control the lamp and getter temperatures were not necessary and were disabled, so that the data shown are with no active thermal control.
Also shown is a simulation of expected clock performance including USO noise, aliasing, control loop effects and ion clock signal noise as well as environmental perturbations 30 , with environmental disturbances at the orbital period (solid blue) and without (dashed blue). The difference between the blue and black lines shows that the black line is measurement-system limited for averaging times less than a day (about 86,000 s). Clock operation parameters used here were not fully optimized. A simulation of expected clock performance with more optimal parameters is shown in red. For comparison, the green trace shows operation achieved on the ground with the optimized multipole trap and a hydrogen maser acting as the LO. Operation with a maser LO reduces control loop noise in the short term and aliasing noise 33 on medium to long timescales (the 1-s value is degraded by the 20-MHz user output synthesizer; Extended Data Fig. 1 ), but even with a crystal LO, operation in the multipole trap would have resulted in better performance than shown in the red trace. Note that the black trace is an overlapping Allan deviation, while all other traces are non-overlapping.
Conclusions
We have demonstrated a trapped-ion atomic clock operating in space. Clock operation was fully characterized over a 9-month period, and long-term operation will continue through the remainder of the 2-year mission. On the ground, the clock’s short-term stability measured in the multipole trap of the two-trap system was between 1 × 10 −13 and 2 × 10 −13 at 1 second and 2 × 10 −13 / τ 1/2 , and in space the short-term stability in the repurposed load trap was estimated to be 7 × 10 −13 / τ 1/2 . In space, a stability of 3 × 10 −15 to 5 × 10 −15 was measured at 1 day, and 3 × 10 −15 (no drift removal) was measured at 23 days, equivalent to a time deviation of less than 4 ns.
In addition, an uncorrected drift rate of +3.0(0.7) × 10 −16 per day was observed, demonstrating this technology’s utility for applications requiring autonomous operation. Both long-term observations were made in the presence of a 9 °C variation in temperature and no temperature control on the ion trap and associated electronics. With clock sensitivities to environmental effects measured on the ground, operation in the more extreme environment of space was observed. In all cases, clock frequency shifts due to orbital environmental effects were near or below the noise floor of the measurement system and that of the estimated Allan deviation of the clock itself. Radiation effects due to orbital passes through the South Atlantic Anomaly were also studied and found not to limit clock operation. Work is in progress to extend the lifetime of this technology past the current expected 3–5 years, out to 10 years or more. These efforts will be focused on the ultraviolet light source and the vacuum chamber pressure, including the buffer gas and mercury vapour partial pressures. Infusion of this technology into future space-flight missions is currently being explored, with likely applications to one-way navigation and planetary science.
Methods
Clock operation sequence
Extended Data Fig. 1 shows a system-level block diagram of the DSAC payload. A 1 × 10 −13 class USO serves as the LO providing 10 MHz to the DSAC 40.5-GHz synthesizer that generates the clock frequency and to a user output synthesizer at 20.456 MHz used as the reference for the GPS receiver. The 40.5-GHz microwave signal is directed to the ion system via a Ka-band waveguide and a Ka-band horn to interrogate the trapped ions. The trap is enclosed in an ultrahigh-vacuum system that is surrounded by three mu-metal magnetic shields. Also attached to the ion subsystem is the ultraviolet light subsystem, which consists of the plasma discharge lamp (as ultraviolet light source), optics and a PMT detector.
The plasma discharge lamp is electrodeless, consisting of a fused silica envelope surrounded by a coil resonator. The trapped ions used in the clock are 199 Hg + , but the lamp plasma consists of 202 Hg + , which has a spectral line resonant with the S 1/2 , F = 1 to P 1/2 , F = 1 transition, but not the S 1/2 , F = 0 to P 1/2 , F = 1 transition (see the level diagram in the next section). Thus, the discharge light can be used to optically pump 199 Hg + clock ions into the F = 0 ground state. After microwave interrogation from the S 1/2 , F = 0 to S 1/2 , F = 1, the same discharge light can be used for state detection: if the microwave frequency were on resonance with the clock transition, 199 Hg + ions that were pumped into the S 1/2 , F = 0 state would make the transition to the S 1/2 , F = 1 state and would be resonant with the discharge light, resulting in a detectable fluorescence signal. Otherwise, they would stay in the F = 0 state, producing no fluorescence. The detected fluorescence is directed to the clock controller where software determines each frequency correction to be applied to the 40.5-GHz clock synthesizer and the user output synthesizer. The ultrahigh-vacuum chamber includes a quadrupole load trap where ions are loaded and prepared, and a multipole trap where atom interrogation is performed. The DSAC clock also allows for atom interrogation and full clock operation to take place in the load trap, though at a reduced performance level. Ions are loaded into the trap from a background neutral mercury vapour ionized by electrons from a heated LaB 6 filament held at a negative high voltage relative to the trap region. Clock electronics include radiofrequency drivers for the two traps, an electron emitter driver, the clock controller, and a stable current source that drives coils surrounding the vacuum chamber to establish the clock quantization axis. The ultraviolet light source is enclosed in a hermetically sealed O 2 back-filled cell.
Bias fields (d.c.) are applied to the load trap and multipole trap rods to effect transfer (‘shuttling’) of ions between the two traps. All telemetry, including clock signal counts, background counts, frequency corrections and many temperatures, is relayed to the ground daily.
Atomic energy-level diagram
Extended Data Fig. 2 shows the 199 Hg( ii ) level structure along with the various transitions used in the clock. The S 1/2 , F = 0, m F = 0 to S 1/2 , F = 1, m F = 0 field-insensitive hyperfine transition forms the basis of the clock. It is optimally driven by a microwave field at 40.5 GHz polarized parallel to the quantization axis. In DSAC, the microwave launcher establishes a field that is at 45° relative to this axis, so it has a component both perpendicular and parallel. The perpendicular component can be used to drive the magnetically sensitive neighbouring lines for diagnostic purposes, but during normal clock operation these lines are detuned by 140 kHz and do not perturb the clock. The S 1/2 , F = 1 to P 1/2 , F = 1 optical transition is driven by the plasma discharge source just described at 194 nm and is used to optically pump ions into the initial state and to read out the state after microwave interrogation.
The stability measurement system: GPS
It is possible to determine most of the clock performance characteristics from its own telemetry. For instance, with a knowledge of the signal-to-noise ratio (SNR) and atomic line Q of the clock transition, along with clock operation parameters, one can determine its short-term stability directly by using: $${\sigma }_{y}(\tau )=\frac{1}{{{\rm{\pi }}{\rm{SNR}}}_{1/2}\,Q}\sqrt{\frac{{t}_{{\rm{c}}}}{\tau }}$$ (1) where τ is the averaging time, t c is the clock cycle time, SNR 1/2 is the signal-to-noise ratio at the half-height of the resonance line, and Q = f 0 /Δ f where f 0 is the ion resonance frequency and Δ f is the full-width at half-maximum (FWHM) of the resonance line shape.
A more complete description of the expected Allan deviation would need to include aliasing noise 33 , which depends on the level of LO noise and the particular clock cycle used. Modelling indicates that this noise term contributes 5 × 10 −13 / τ 1/2 for the LO noise and clock parameters used during the runs presented in this paper. If aliasing is a white frequency noise process, it can be added in quadrature to expression (1). SNR 1/2 is easily estimated using the periodically measured resonance signal size and by assuming shot-noise-limited performance on the detection system. The Q is determined by the known resonance frequency and the interrogation time (Rabi time), which determines the FWHM. In the repurposed load trap used for the space demonstration, with t c = 12.5 s, f 0 = 40.5 GHz, SNR 1/2 = 23 and a Rabi interrogation time of 2 s, the estimated short-term stability is 7 × 10 −13 / τ 1/2 (including aliasing noise 33 ). However, determining the clock’s long-term stability requires a comparison to a second clock that is more stable on the timescale of interest. Rather than carry a second clock on board the spacecraft, the DSAC payload includes a GPS receiver providing both code and carrier phase measurements, thereby enabling high-precision GPS frequency transfer 34 between DSAC and any clock on the ground also connected to a GPS receiver, in particular those providing a realization of UTC. To process GPS data, we use the GIPSY-OASIS code to undertake 30-h arc precise point positioning with ambiguity resolution 35 . This determines the offset x ( t ) in coordinate time between the DSAC clock and the reference clock used in JPL’s rapid orbit and clock products. Subsequently we convert these 30-h x ( t ) datasets to 24-h daily x ( t ) datasets. After this, we convert x ( t ) to proper time using the method described next for a dataset that can span any time range, typically many days.
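Plugging the quoted load-trap parameters into expression (1) reproduces the 7 × 10⁻¹³/τ^1/2 estimate. The FWHM ≈ 0.8/T relation for Rabi interrogation and the quadrature combination with the modelled aliasing term are assumptions of this sketch:

```python
import math

# Worked estimate of expression (1) with the load-trap parameters quoted
# above; the Rabi FWHM ~ 0.8/T relation is an assumption of this sketch.
f0 = 40.5e9          # ion resonance frequency, Hz
t_c = 12.5           # clock cycle time, s
snr = 23.0           # SNR at half-height of the resonance line
t_rabi = 2.0         # Rabi interrogation time, s

fwhm = 0.8 / t_rabi              # ~0.4 Hz resonance linewidth
q = f0 / fwhm                    # atomic line Q ~ 1e11
sigma_1s = math.sqrt(t_c) / (math.pi * snr * q)   # expression (1) at tau = 1 s

aliasing = 5.0e-13               # modelled aliasing term, from the text
total = math.hypot(sigma_1s, aliasing)  # quadrature sum for white FM noise
print(f"{total:.1e}")            # ~7e-13, matching the quoted stability
```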
In principle, this enables comparison to the most stable clocks in the world, as all national metrology laboratories use this method. However, compared with conventional GPS frequency comparisons, our measurement system must contend with several additional challenges such as the low-Earth-orbit motion of one platform and an uncontrolled GPS receiver thermal environment. Studies of these and other factors have shown 31 that, relative to the clock, our measurement system has a higher noise level in the short term. For averaging times greater than a day, it is possible to achieve a measurement system stability below 10 −15 using GPS frequency transfer 36 . We have not done a thorough analysis of long-term noise for our measurement system in particular. Given the orbital considerations, together with the fact that the reference clock is a hydrogen maser, it is likely that measurement system noise for long averaging times is above 10 −15 and therefore may contribute to estimated clock noise on these timescales as well. For averaging times for which the noise of the DSAC clock is below the measurement noise floor, it is only possible to place an upper bound on clock stability performance. To perform the clock comparison, the GPS receiver must be carefully configured for operation in space. In particular, for the DSAC mission, it was necessary to calibrate the carrier phase and pseudo-range measurements for the receiver’s temperature sensitivity. In addition, DSAC telemetry must be corrected for relativistic variations: the gravitational redshift and Doppler shifts. 
To aid each comparison, the receiver is dynamically transformed to the geoid 37 by using the following equation: $${\tau }_{{\rm{s}}}=\int {\rm{d}}t\left[1+\frac{\varPhi (r)-{\varPhi }_{0}}{{c}^{2}}-\frac{{v}^{2}}{2{c}^{2}}\right]$$ (2) where τ s is the spacecraft proper time, Φ ( r ) is the Newtonian gravitational potential at radius r from the geoid, Φ 0 is the gravitational potential on the Earth’s geoid, c is the speed of light and v is the speed of the spacecraft in the Earth-centred inertial reference frame. Furthermore, to obtain accurate solutions, the second-order terms in a multipole expansion of the gravitational potential must be retained so that $$\varPhi (r)=-\frac{GM}{r}\left[1-{J}_{2}{\left(\frac{{a}_{1}}{r}\right)}^{2}\frac{(3{z}^{2}-{r}^{2})}{2{r}^{2}}\right]$$ (3) where J 2 parameterizes the quadrupole contribution to the gravitational potential, a 1 is the equatorial radius of the Earth and z is the third Cartesian component of the position vector r . Without the additional J 2 terms in the potential, clock comparisons to the ground are limited by GPS errors. This was also demonstrated on the GRACE satellites as described in ref. 37 . A careful analysis of expected DSAC measurement system noise was carried out 31 that characterized the impact of the known error sources. The most important of these include: GPS measurement noise, GPS orbit and clock estimate errors, multipath errors, instabilities in the ground reference clock, and thermal calibration errors of the GPS receiver phase. Also included, but not as important, were uncertainties in modelling spacecraft drag accelerations, spacecraft solar pressure accelerations, and the phase centre knowledge of the GPS antenna with respect to the spacecraft centre of mass. Extended Data Fig. 3 shows the measured overlapping Allan deviation with and without corrections for relativistic and GPS receiver thermal effects. 
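For a sense of scale, the rate offset inside the brackets of equation (2) can be evaluated for a circular 720-km orbit. This sketch ignores the J 2 term of equation (3) and the rotational contribution to Φ 0 , and treats the mean Earth radius as a stand-in for the geoid, so it gives only a rough magnitude, not the correction actually applied:

```python
# Rough magnitude of the relativistic rate offset in equation (2) for a
# circular 720-km orbit (assumed values; J2 and Earth rotation ignored).
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0        # speed of light, m/s
r_geoid = 6.371e6        # mean Earth radius, m (stand-in for the geoid)
r_orbit = r_geoid + 720e3

grav = (-GM / r_orbit + GM / r_geoid) / c**2   # gravitational term
v2 = GM / r_orbit                              # circular orbital speed squared
doppler = -v2 / (2 * c**2)                     # second-order Doppler term
rate = grav + doppler
print(f"{rate:.1e}")     # net rate of order -2e-10: the orbiting clock runs slow
```

At low altitude the velocity term outweighs the gravitational term, so uncorrected relativistic effects would accumulate tens of microseconds per day, far above the clock noise, which is why the transformation of equation (2) is essential.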
Also superimposed is the simulated expected clock performance up to a day of averaging with the clock parameters used during the run, with and without effects from orbital thermal and magnetic disturbances. As shown in Extended Data Fig. 3 , measurement system noise is about twice as large as the expected clock performance in the short term. For averaging times greater than a day, clock environmental effects are expected to contribute near the level of the black line. As described earlier in this section, measurement system noise may also contribute near this level for these averaging times. Therefore, the data shown can only be taken as an upper bound on actual clock performance.
Magnetic effects
An environmental aspect that must be managed by most space clocks is variations in the ambient magnetic field. The quantization axis of the DSAC clock is determined by a coil inside the magnetic shields and a stable current source that generates a field of about 10 μT. The stability of this current source is at the 10 ppm level, resulting in random field variations of 0.1 nT and clock variations due to this internal current source noise well under 10 −15 . Sensitivity to external magnetic variations is reduced by three-layer mu-metal shielding. The sensitivity due to external magnetic field variations in the weakest shielding direction around the load trap was measured on the ground to be 7 × 10 −16 μT −1 . (The sensitivity in the optimized multipole trap region is over an order of magnitude smaller.) At the DSAC orbit of approximately 720 km, the clock sees variations in Earth’s magnetic field of 25 μT, a range that is over 100 times the variation typically seen in the laboratory.
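A worst-case sketch combines the numbers above: the full orbital field swing applied along the weakest shielding direction, compared with the load-trap white-noise floor at the roughly 6,000-s orbital period. This is illustrative arithmetic only, not the simulation of Extended Data Fig. 5:

```python
import math

# Worst-case sketch: the full 25-uT orbital field swing along the weakest
# shielding direction, using the measured shielded sensitivity quoted above.
sensitivity = 7e-16          # fractional shift per uT, load-trap direction
b_swing = 25.0               # peak-to-peak orbital field variation, uT
shift_pp = sensitivity * b_swing          # peak-to-peak clock shift, ~2e-14

# compare with the load-trap white-noise floor at the ~6,000-s orbital period
noise_floor = 7e-13 / math.sqrt(6000.0)   # ~9e-15
print(f"{shift_pp:.1e} {noise_floor:.1e}")
```

A periodic shift contributes to the Allan deviation only a fraction of its peak-to-peak size at the modulation period, so even this worst case sits near or below the noise floor, consistent with the statement in the text.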
There are three mitigating factors that reduce the effects of these large magnetic field variations for the DSAC clock: (1) at an atomic response of 9.7 mHz μT −2 , the mercury clock transition has a very low sensitivity to magnetic variations (about 4 times lower than caesium, for example); (2) the use of high-performance magnetic shielding with a shielding effectiveness of up to 20,000 (although not as good in the load trap at only about 5,000, owing to shield holes to accommodate optics); and (3) the variations occur at the orbital period of about 6,000 s, a timescale at which their contribution to the clock Allan deviation is below the noise floor of the clock. Extended Data Fig. 4 shows magnetic field variations near the clock as a function of time, as measured by a magnetometer on the spacecraft. In the graph, the component of the field in the weakest shielding direction is shown. Extended Data Fig. 5 shows the simulated Allan deviation of frequency shifts calculated in the worst case (weakest shielding direction always aligned with maximum field direction) from the measured field variations. Superimposed is the expected DSAC clock noise floor for operation in the multipole trap without LO aliasing effects 33 (2 × 10 −13 / τ 1/2 ) and for the load trap (7 × 10 −13 / τ 1/2 ). For both modes of operation, the magnetic effect (assuming the worst-case magnetic shielding direction) is just at, or is below, the noise floor.
Thermal and Doppler effects
During pre-launch evaluations, it is usually possible to vary one parameter while holding others constant. In space, this is rarely possible. The overall thermal sensitivity of the clock and the sensitivity to variations in the number of ions trapped—the number-dependent second-order Doppler effect d f / f = − v 2 /(2 c 2 )—are an example of effects that are difficult to decouple.
For the latter, as the ion cloud changes size, the time-averaged trap radiofrequency field amplitude as seen by the ions varies, leading to changes in the ensemble-averaged ion velocity v and the second-order Doppler shift 38 . Over a particular 4-day period, the clock temperature varied by about 5 °C, and the number of trapped ions was intentionally varied by about 50% with the electron emitter turned off. The frequency shift coefficients due to these are easily separated by exploiting the different functional forms of their variation (one was monotonic while the other was not). With the large change in trapped-ion number here, the change in the associated second-order Doppler effect dominates temperature-related effects. The trapped-ion number is not known directly but is approximately proportional to clock signal size. Clock signal size is determined by periodically interrupting clock operation to take the difference between the clock signal fluorescence on resonance and off. Individual signal size measurements are usually shot-noise-limited in the total number of PMT counts, which includes background light in addition to clock signal fluorescence. However, long-term variations in lamp output due to temperature changes will cause variations in this metric that are independent of ion number. Thus, the signal size can only be used to represent the ion number during times when the lamp temperature is stable or by adjusting the signal size to account for the known impact of temperature on lamp output. Extended Data Fig. 6 shows a least-squares second-order polynomial fit of frequency offsets plotted against signal size. Theoretically, the dependence should be linear, and the deviation from linearity at higher signal sizes shown is due to larger ion clouds extending beyond the optical interrogation region. Larger ion clouds are also subject to additional radiofrequency heating. 
Thus, the linear component of the fit, effectively extrapolating back to an empty trap, gives the most reliable representation of the second-order Doppler shift coefficient. In experimental units, this is −5.6(1.2) × 10 −17 per PMT count of clock signal size. For a current clock signal of 22,000 counts, this corresponds to a total effect of about −1 × 10 −12 from an empty to full trap. To maintain 1 × 10 −15 stability at a day requires constraining the trapped-ion number to a fraction of a per cent. The residuals to the polynomial fit used to obtain the Doppler sensitivity have a systematic trend correlated with the variation in temperature, which was not monotonic. When the fit residuals are plotted against the temperature, a linear trend is apparent (see Extended Data Fig. 7 ). The linear slope gives the total temperature sensitivity (as in the main text, we reserve the term ‘total temperature sensitivity’ of the clock to indicate the combination of relativistic Doppler effects, electronics sensitivities and stray temperature-dependent magnetic effects) in this case as −2.3(1.1) × 10 −15 °C −1 . This is close to the thermal Doppler limit 39 and demonstrates how this technology is able to achieve a high level of stability in a varying thermal environment, even with no temperature control. The most important thermal variations occur at the orbital period of 6,000 s, but there are other thermal timescales as well. For instance, on a roughly daily cadence, the orbital ground track passes over regions that have different albedos, such as the Sahara Desert. Another temperature variation at an approximate daily period comes from certain spacecraft housekeeping activities. These can require an increase in spacecraft power consumption, which leads to variations in the spacecraft temperature. 
Finally, the solar illumination of the spacecraft varies on a roughly monthly timescale as the solar elevation angle, or beta angle, with respect to the orbital plane changes (the beta angle is defined as the angle between a line from the Earth to the Sun and the orbital plane). This, combined with occasional spacecraft re-orientations to keep the main spacecraft radiator in shadow, leads to an approximate 17-day period in the temperature as seen by the ions in this dataset. Extended Data Fig. 8 shows the temperature in the load trap, the most relevant temperature for current clock operation, as a function of time. The DSAC demonstration mission currently operates with no thermal control. The instrument internal temperature floats with the changing environment and lags with a several hour time constant. Extended Data Fig. 8a shows 52 days of data and an overall approximately periodic temperature trend associated with the variation in the spacecraft beta angle. The average peak-to-peak variation is about 4 °C with an average period of about 17 days. Extended Data Fig. 8b is a 5-day subset and shows an approximate daily sinusoid in temperature with a peak-to-peak variation of 0.5 °C. Finally, Extended Data Fig. 8c is a 1-day subset showing the orbital effect with a peak-to-peak variation of about 0.3 °C. The total temperature sensitivity of the measurement is due to a combination of several factors, not all of which are constant over time or fundamental to the technology. In addition to the ions themselves, these effects include temperature sensitivities of the synthesizer, GPS receiver and USO. In the dataset shown in Extended Data Fig. 8 , the average measured total temperature sensitivity was about 1.3 × 10 −15 °C −1 . Coupled with the measured temperature variations, this sensitivity is unable to explain the peaks in the Allan deviation curve at either ~3 × 10 5 s or ~1 × 10 6 s or the general flattening above 1 × 10 5 s.
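How a slow temperature sinusoid maps into an Allan-deviation bump can be illustrated numerically, using the daily 0.5 °C peak-to-peak swing and the average 1.3 × 10⁻¹⁵ °C⁻¹ sensitivity quoted above (an illustrative simulation with an assumed 60-s sampling interval, not the paper's analysis):

```python
import numpy as np

# A daily temperature sinusoid coupled through the average sensitivity
# produces a periodic fractional-frequency modulation; its Allan deviation
# peaks near tau ~ 0.37 of the modulation period.
def adev(y, m):
    """Non-overlapping Allan deviation at tau = m * sample interval."""
    n = y.size // m
    means = y[: n * m].reshape(n, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

day = 86400.0
tau0 = 60.0                            # assumed sample interval, s
t = np.arange(0.0, 30 * day, tau0)
y0 = 1.3e-15 * 0.25                    # sensitivity x half the 0.5 degC swing
y = y0 * np.sin(2 * np.pi * t / day)   # fractional-frequency record

m = int(round(0.37 * day / tau0))      # tau near the expected peak
sigma = adev(y, m)
print(f"{sigma:.1e}")                  # of order 2e-16
```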
However, a more detailed analysis accounting for variations in the total temperature sensitivity, overall drift and other noise sources such as the measurement system and possibly the reference clock can explain these features. The 1-day sinusoidal temperature variation coupled with this sensitivity would result in an Allan deviation peak of 0.36 × 0.5 °C × 1.3 × 10 −15 °C −1 ≈ 2 × 10 −16 occurring at 0.37 × 1 day ≈ 3 × 10 4 s (ref. 30 ). Similarly, the orbital variation would cause a peak of ~1 × 10 −16 occurring at 2,200 s. Both are below the noise floor of the measurement. The small orbital peak in the measured Allan deviation in Extended Data Fig. 3 and in Fig. 4 is consistent with a peak in the simulated results shown in those figures. The simulation is based on measured sensitivities and measured orbital disturbance sizes. The resultant peak is dominated by USO temperature sensitivity with some contribution from temperature sensitivity to the differential synthesizer phase, which only manifests itself as a frequency shift when the temperature has a non-zero time derivative 30 . The USO is discussed in more detail below. The simulated Allan deviation in Extended Data Fig. 3 includes all known orbital clock disturbances (ions, synthesizers and USO, but not GPS) and shows that these disturbances are expected to be near or below the estimated noise floor.
Background pressure evolution
Frequency shifts due to background gas collisions generally do not limit clock operation if the vacuum chamber is properly prepared 40 . In the getter-pumped DSAC clock, the evolution of the CH 4 and H 2 partial pressures due to outgassing is measured on the ground to be well below 1 × 10 −8 Pa per day. Noble gases are inactive and not pumped by the getter, but their frequency shift coefficients are several orders of magnitude below those of H 2 and CH 4 (refs. 41 , 42 ) and insignificant on the scale considered here.
H₂ is efficiently pumped by the in situ getter with a very high pumping speed, but CH₄ is only weakly interacting and pumped very slowly by ‘cracking’ into C and H₂ (both readily pumped) by a hot filament 43 . Owing to the high H₂ pumping speed, its partial pressure is likely to be very stable in the long term (>1 day). However, given the weak pumping speed of CH₄, it is possible that its partial pressure could undergo a long-term evolution leading to a clock frequency drift. Here we place a limit on this variation. With a measured gas shift coefficient of −2.6 × 10⁻⁷ Pa⁻¹ for CH₄ (refs. 41 , 42 ), the corresponding frequency shift should be less than 3.5 × 10⁻¹⁵ per day. Extended Data Fig. 9 shows data taken when the trapped-ion number was very stable (>3 weeks after a power cycle, the clock frequency variations due to the number-dependent second-order Doppler shift are <3 × 10⁻¹⁶ per day). The next dominant effect is due to temperature variations, which can be reliably distinguished from a linear drift because the temperature has a turnover (it is not monotonic). The data are corrected for this measured temperature effect and then fitted to a straight line. The fitted slope is 2.7(5.3) × 10⁻²¹ s⁻¹, giving an upper bound on clock frequency variations due to trace gas evolution of 4.6 × 10⁻¹⁶ per day, which tightens the upper bound on CH₄ pressure variation to <1.7 × 10⁻⁹ Pa per day 44 (note that variations in trace-gas partial pressure due to temperature-driven outgassing rates would already have been absorbed into the overall temperature effect). This is also consistent with the low overall clock drift observed for the longest operational run (see Fig. 3 ). USO sensitivities While the USO has its own temperature sensitivity, changes in the USO frequency due to temperature are, by design, highly suppressed by the clock control loop. Linear USO drift could result in an offset on the clock output frequency, but has no effect on the clock Allan deviation. 
However, nonlinear changes, such as a jump in the linear rate or a sinusoidal variation with temperature, will be present in both the output frequency and the Allan deviation. As these effects can limit clock performance, a USO drift compensation algorithm 8 is built into the clock control loop that reduces the impact of sudden changes in linear USO drift (rare) by a factor of 4 to 5 when calculated for current operational parameters. Of biggest concern to DSAC are orbital variations in USO temperature, and the full control algorithm with drift compensation is calculated to reduce the impact of these by 0.017 for current operational parameters, enough to prevent the prominent 2.8 × 10 −12 peak-to-peak USO orbital disturbance from affecting clock performance beyond the small residual (USO-dominated) peak seen in the simulated curve of Extended Data Fig. 3 . Radiation effects and the South Atlantic Anomaly When the clock was first turned on in space, an important observation was how it responded to the radiation doses received when passing through the South Atlantic Anomaly (SAA) 44 . This radiation is primarily high-energy protons with energies up to 100 MeV (ref. 45 ). This radiation affects PMT counts and USO drift rate, both at levels that the control algorithm successfully mitigates. Radiation-induced excess PMT counts is a well-established phenomenon and expected 46 . What is not as well-known is the quantitative response of a particular PMT to a given radiation environment and how it might affect the clock output. The two sources of detected PMT counts are ultraviolet light photons and radiation flux. Ultraviolet photons originate from the ultraviolet light source (or ion fluorescence stimulated by that source). The light source has two modes: a high-intensity bright mode that is used to prepare and read out the internal atomic states, and a low-intensity dim mode used during microwave interrogation. 
For the purpose of characterizing the radiation effect, sensitivity to it can be maximized by collecting data for the 8.1-s period in each clock cycle during which the light source is dim, thereby minimizing ultraviolet photon counts. Extended Data Fig. 10 shows the detected signal as a function of time during passes through the SAA while the light source is in the dim mode. During the dim mode, the normal ultraviolet signal is 49,000 counts in 8.1 s. The peaks in the figure are caused by radiation flux corresponding to SAA transits and have a maximum slope of 80 counts s −1 . Normal clock operation consists of alternately interrogating the ions on either side of the clock transition and then differencing the ultraviolet fluorescence obtained during state readout when the ultraviolet light source is in the bright state. Normally, if the microwave interrogation is on frequency, the difference will be zero, otherwise not. Therefore, the varying detected counts caused by the radiation flux can appear to the clock algorithm as a drift. To quantitatively determine the impact of this radiation-induced slope on the clock when the ultraviolet source is in bright mode, we must multiply the maximum slope by the ratio of bright/dim detection times (3.5/8.1), which gives a maximum radiation-induced variation of 35 counts s −1 . The normal clock transition frequency sensitivity at half-height is 84,000 counts Hz −1 , so a radiation-induced excess of 35 counts s −1 could be interpreted as a 0.2-mHz or 5 × 10 −15 fractional frequency shift per second. The clock control algorithm uses a windowing that compares the brightness on alternating sides of the resonance over the previous three cycles 30 , in such a way that effects due to varying PMT counts, whether they are caused by radiation or the lamp itself, are highly suppressed. Higher-order impact is ruled out through modelling and through the lack of unexplained orbital peaks in the Allan deviation of the clock output. 
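The count-rate scaling in this paragraph reduces to two lines of arithmetic. A minimal sketch follows (our reconstruction; the 40.507 GHz value is our assumption, being the standard ¹⁹⁹Hg⁺ ground-state hyperfine clock transition, which the text does not state explicitly):

```python
# Scaling the radiation-induced count slope from dim to bright mode, and the
# implied fractional shift. The 40.507 GHz clock frequency is our assumption
# (the standard 199Hg+ hyperfine transition); the 0.2 mHz and 5e-15 figures
# are quoted directly in the text.

dim_slope = 80.0                        # counts/s during SAA transit, dim mode
bright_slope = dim_slope * (3.5 / 8.1)  # scale by bright/dim detection times
print(f"{bright_slope:.0f} counts/s")   # ~35 counts/s

fractional = 0.2e-3 / 40.507e9  # 0.2 mHz as a fraction of the clock transition
print(f"{fractional:.1e}")      # ~5e-15
```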
The other observed impact of SAA transits has been on the USO drift rate. The accumulated USO response to radiation is slower, so explicit orbital variations are not resolved. Instead, a slow diurnal-like variation on the USO drift rate is observed. For the half-day that SAA passes occur, the USO has a higher drift rate than for the half-day that SAA passes do not occur. These two rates are very repeatable from one day to the next, resulting in a diurnal-like sinusoidal modulation on top of the average drift rate. This large 10 −11 scale disturbance is filtered by the control algorithm to a level below the clock noise floor, largely because of its slow nature. Data availability Full data for Fig. 4 and Extended Data Fig. 3 are available from the corresponding author on reasonable request.
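The disturbance and drift estimates quoted in the sections above reduce to a few lines of arithmetic. A back-of-envelope sketch (our reconstruction; every input is a number quoted in the text except the ~5,900 s orbital period, which we infer from the quoted 2,200 s peak location):

```python
# Back-of-envelope reproduction of the disturbance estimates quoted above.
# Per ref. 30, a sinusoidal frequency disturbance with peak-to-peak size dy
# produces an Allan deviation peak of ~0.36*dy located at tau ~0.37*period.

DAY = 86400.0        # seconds
TEMP_SENS = 1.3e-15  # measured total temperature sensitivity, per degC

def sinusoid_adev_peak(pp_degc, period_s):
    """(peak height, peak location) for a sinusoidal temperature
    disturbance of the given peak-to-peak size and period."""
    return 0.36 * pp_degc * TEMP_SENS, 0.37 * period_s

daily_peak, daily_tau = sinusoid_adev_peak(0.5, DAY)        # ~2e-16 at ~3e4 s
orbital_peak, orbital_tau = sinusoid_adev_peak(0.3, 5900.0) # ~1e-16 at ~2,200 s
# (the ~5,900 s orbital period is inferred from the quoted peak location)

# Trace-gas bound: scale the 5.3e-21 s^-1 fitted-slope uncertainty to one
# day, then divide by the CH4 shift coefficient to bound its pressure change.
drift_bound = 5.3e-21 * DAY       # ~4.6e-16 per day
ch4_bound = drift_bound / 2.6e-7  # ~1.7e-9 Pa per day

print(daily_peak, daily_tau, orbital_peak, orbital_tau)
print(drift_bound, ch4_bound)
```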
Spacecraft that venture beyond our Moon rely on communication with ground stations on Earth to figure out where they are and where they're going. NASA's Deep Space Atomic Clock is working toward giving those far-flung explorers more autonomy when navigating. In a new paper published today in the journal Nature, the mission reports progress in its work to improve the ability of space-based atomic clocks to measure time consistently over long periods. Known as stability, this feature also impacts the operation of GPS satellites that help people navigate on Earth, so this work also has the potential to increase the autonomy of next-generation GPS spacecraft. To calculate the trajectory of a distant spacecraft, engineers send signals from the spacecraft to Earth and back. They use refrigerator-size atomic clocks on the ground to log the timing of those signals, which is essential for precisely measuring the spacecraft's position. But for robots on Mars or more distant destinations, waiting for the signals to make the trip can quickly add up to tens of minutes or even hours. If those spacecraft carried atomic clocks, they could calculate their own position and direction, but the clocks would have to be highly stable. GPS satellites carry atomic clocks to help us get to our destinations on Earth, but those clocks require updates several times a day to maintain the necessary level of stability. Deep space missions would require more stable space-based clocks. Managed by NASA's Jet Propulsion Laboratory in Southern California, the Deep Space Atomic Clock has been operating aboard General Atomics' Orbital Test Bed spacecraft since June 2019. The new study reports that the mission team has set a new record for long-term atomic clock stability in space, reaching more than 10 times the stability of current space-based atomic clocks, including those on GPS satellites. 
When every nanosecond counts All atomic clocks have some degree of instability that leads to an offset in the clock's time versus the actual time. If not corrected, the offset, while minuscule, increases rapidly, and with spacecraft navigation, even a tiny offset could have drastic effects. One of the key goals of the Deep Space Atomic Clock mission was to measure the clock's stability over longer and longer periods, to see how it changes with time. In the new paper, the team reports a level of stability that leads to a time deviation of less than four nanoseconds after more than 20 days of operation. "As a general rule, an uncertainty of one nanosecond in time corresponds to a distance uncertainty of about one foot," said Eric Burt, an atomic clock physicist for the mission at JPL and co-author of the new paper. "Some GPS clocks must be updated several times a day to maintain this level of stability, and that means GPS is highly dependent on communication with the ground. The Deep Space Atomic Clock pushes this out to a week or more, thus potentially giving an application like GPS much more autonomy." The stability and subsequent time delay reported in the new paper are about five times better than what the team reported in the spring of 2020. This does not represent an improvement in the clock itself, but in the team's measurement of the clock's stability. Longer operating periods and almost a full year of additional data made it possible to improve the precision of their measurement. The Deep Space Atomic Clock mission will conclude in August, but NASA announced that work on this technology continues: the Deep Space Atomic Clock-2, an improved version of the cutting-edge timekeeper, will fly on the VERITAS (short for Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy) mission to Venus. 
Like its predecessor, the new space clock is a technology demonstration, meaning its goal is to advance in-space capabilities by developing instruments, hardware, software, or the like that don't currently exist. Built by JPL and funded by NASA's Space Technology Mission Directorate (STMD), the clock generates an ultra-precise signal that could help enable autonomous spacecraft navigation and enhance radio science observations on future missions. "NASA's selection of Deep Space Atomic Clock-2 on VERITAS speaks to this technology's promise," said Todd Ely, Deep Space Atomic Clock principal investigator and project manager at JPL. "On VERITAS, we aim to put this next generation space clock through its paces and demonstrate its potential for deep space navigation and science."
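The article's rules of thumb are easy to verify numerically; a quick sketch (ours), using only the speed of light and the figures quoted above:

```python
# Checking the article's rules of thumb (our arithmetic, using only the
# speed of light and the figures quoted in the article).

C = 299_792_458.0  # speed of light, m/s
FOOT = 0.3048      # metres per foot

# "an uncertainty of one nanosecond ... about one foot"
feet_per_ns = C * 1e-9 / FOOT
print(f"{feet_per_ns:.2f} ft of range error per ns")  # ~0.98 ft

# 4 ns of accumulated time deviation over roughly 20 days implies an
# average fractional clock offset of about 2e-15.
avg_offset = 4e-9 / (20 * 86400)
print(f"~{avg_offset:.1e} average fractional offset")
```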
10.1038/s41586-021-03571-7
Biology
The kombucha culture
Alexander May et al, Kombucha: a novel model system for cooperation and conflict in a complex multi-species microbial ecosystem, PeerJ (2019). DOI: 10.7717/peerj.7565 Journal information: PeerJ
http://dx.doi.org/10.7717/peerj.7565
https://phys.org/news/2019-09-kombucha-culture.html
Abstract Kombucha, a fermented tea beverage with an acidic and effervescent taste, is composed of a multispecies microbial ecosystem with complex interactions that are characterized by both cooperation and conflict. In kombucha, a complex community of bacteria and yeast initiates the fermentation of a starter tea (usually black or green tea with sugar), producing a biofilm that covers the liquid over several weeks. This happens through several fermentative phases that are characterized by cooperation and competition among the microbes within the kombucha solution. Yeast produce invertase as a public good that enables both yeast and bacteria to metabolize sugars. Bacteria produce a surface biofilm which may act as a public good providing protection from invaders, storage for resources, and greater access to oxygen for microbes embedded within it. The ethanol and acid produced during the fermentative process (by yeast and bacteria, respectively) may also help to protect the system from invasion by microbial competitors from the environment. Thus, kombucha can serve as a model system for addressing important questions about the evolution of cooperation and conflict in diverse multispecies systems. Further, it has the potential to be artificially selected to specialize it for particular human uses, including the development of antimicrobial ecosystems and novel materials. Finally, kombucha is easily propagated, non-toxic, and inexpensive, making it an excellent system for scientific inquiry and citizen science. Cite this as May A, Narayanan S, Alcock J, Varsani A, Maley C, Aktipis A. 2019. Kombucha: a novel model system for cooperation and conflict in a complex multi-species microbial ecosystem. PeerJ 7:e7565 Introduction Kombucha is a traditional tea beverage fermented by a symbiotic community of acetic acid bacteria (AAB) ( Acetobacteraceae ) and osmophilic yeast ( De Filippis et al., 2018 ). 
While the origins of the beverage are uncertain, records of the drink are found in early 19th century Russia ( Dufresne & Farnworth, 2000 ). There is variation on the specifics of kombucha fermentation, but the typical process proceeds as follows: black or green tea is brewed for at least 5 min, supplemented with sucrose (5–10% (w/v)), cooled to room temperature (20 °C), and then inoculated with kombucha liquid (usually 10–20% (v/v)) from a previous batch ( Jayabalan et al., 2014 ). A mature bacterial cellulose (BC) biofilm from a previously brewed kombucha culture (often called a “mother” or SCOBY, for Symbiotic Community of Bacteria and Yeast) is typically placed on top of the solution and allowed to ferment for 10–14 days. The presence of a carbon source in the solution, typically sucrose, initiates a cascade of metabolic processes that generates a carbonated and slightly acidic drink at the end of the primary fermentation cycle. One of the more striking aspects of the system is the floating cellulose pellicle that forms in tandem with fermentation; this biofilm is produced by the bacteria and encapsulates a microbial community within it ( Marsh et al., 2014 ). The dominant bacterial genus in the system is Komagataeibacter (formerly Gluconacetobacter , and prior to that, Acetobacter , Yamada et al., 2012 ) ( Marsh et al., 2014 ; Chakravorty et al., 2016 ) with numerous species identified within various kombucha cultures. These include Komagataeibacter xylinus ( Reva et al., 2015 ; De Filippis et al., 2018 ), Komagataeibacter intermedius ( Dos Santos et al., 2015 ; Reva et al., 2015 ; Gaggìa et al., 2019 ), Komagataeibacter rhaeticus ( Machado et al., 2016 ; Semjonovs et al., 2017 ; Gaggìa et al., 2019 ), Komagataeibacter saccharivorans ( Reva et al., 2015 ; De Filippis et al., 2018 ), and Komagataeibacter kombuchae ( Reva et al., 2015 ). 
Another AAB genus often found in kombucha cultures is Gluconobacter ( Reva et al., 2015 ; Chakravorty et al., 2016 ; Gaggìa et al., 2019 ). The yeast species in the system are even more variable, and can include yeast in the genera Zygosaccharomyces, Candida, Torulaspora, Pichia, Brettanomyces/Dekkera, Schizosaccharomyces , and Saccharomyces ( Mayser et al., 1995 ; Teoh, Heard & Cox, 2004 ; Marsh et al., 2014 ; Jayabalan et al., 2014 ; Reva et al., 2015 ). The microbial profiles of kombucha seem to vary partly based on geographical origin ( Mayser et al., 1995 ; Marsh et al., 2014 ), and the composition of the kombucha changes over time as it progresses through fermentation ( Marsh et al., 2014 ). This process involves the enzymatic cleavage of sucrose and the subsequent processing of its monomer components into ethanol, acids, cellulose, and carbon dioxide ( Jayabalan et al., 2014 ; Chakravorty et al., 2016 ). Brewed kombucha has antimicrobial properties that persist even with neutralization of the solution to pH 7 and thermal denaturation at 80 °C for 30 min ( Sreeramulu, Zhu & Knol, 2000 ). In this manuscript, we describe the potential uses of kombucha as a model system for studying cooperative and competitive interactions, and we also discuss the potential human uses of kombucha and kombucha-generated biofilms for nutrition, human health and industrial applications. Survey methodology This review was assembled via electronic searches on various platforms, including Google Scholar, Web of Science, and PubMed. In order to gain a background understanding of kombucha as a scientific model, searches were focused on the terminology adjacent to it and other fermented foods: “kombucha,” “kombucha tea,” “tea fungus,” “fermented tea,” “fermentation,” “SCOBY,” “biofilm,” and “pellicle.” Examples of other fermented foods were examined as well, particularly those already used as model systems, such as sourdough and yogurt. 
Further development of the literature collection used keywords associated with microbial ecology and social interactions between microbes, such as: “cooperation,” “conflict,” “symbiosis,” “syntrophy,” “cheater.” Searches evolved to include aspects of the four phases of development known to occur during kombucha fermentation: invertase production, ethanol fermentation, ethanol oxidation and acidification, and biofilm formation; many of these sources primarily used other organism models as references. Each of these sources provided a wealth of data far beyond the scope of this paper, and thus were culled down to focus on the microorganisms reported within the kombucha community. As the kombucha scientific literature remains in its infancy, resources frequently led outside of the core focus of the paper and thus were integrated only as required. Microbial model systems help researchers address important theoretical and applied questions There is a precedent for using model microbial systems for studying the evolution of cooperation, conflict and social behavior. Evolutionary biologists have used microbial systems to tackle ecology questions that would otherwise be unfeasible at a macroscale, such as exploring the selection pressures required for the evolution of cellular cooperation in the form of multicellularity ( Strassmann, Zhu & Queller, 2000 ; Ratcliff et al., 2012 ), as well as adaptive radiation, diversification, and population fragmentation ( Rainey & Travisano, 1998 ; Habets et al., 2006 ). Microbial model systems can be easily altered via genetic modifications and experimental evolution, and naturally produce many intriguing behavioral patterns—such as foraging, dispersal, collective assembly via biofilms, production of antagonistic chemicals, and quorum sensing ( West et al., 2006 ), many of which resemble social processes among animals. 
Microbial systems have also been used to address phenomena that were originally developed to explain human interactions, such as the Prisoner’s Dilemma ( Greig & Travisano, 2004 ) and the Tragedy of the Commons ( MacLean, 2008 ). However, most existing microbial model systems consist of systems with single species or only a few species. Artificial microbial systems can also be limited in their applicability to real world phenomena and their scalability to larger ecological questions ( Jessup et al., 2004 ). This is in contrast to natural microbial systems, which are highly diverse and within which multispecies cooperation is likely to be complex. Despite advances in the ability of microbiologists to bring previously “unculturable” microbes into the lab ( Vartoukian, Palmer & Wade, 2010 ) and the use of metagenomics and other “omics” approaches to analyze complex communities ( Franzosa et al., 2015 ), the complexity of natural systems makes them difficult to reproduce and study in the laboratory ( Wolfe & Dutton, 2015 ). We promote the idea that a compromise between natural and artificial systems can be found in fermented foods. They provide the ease of culturing microorganisms typical of simpler artificial systems, but include the diversity and complexity of natural systems. As a result, fermented foods and beverages may balance the advantages and disadvantages of natural and artificial systems. Methods for propagating many fermented cultures are well-characterized due to their long history of cultivation and domestication. Genetic information exists for many of the microbes that are key to these fermentation processes and they have predictable cycles of development and succession, allowing for highly reproducible results despite their relative complexity ( Wolfe & Dutton, 2015 ). During decades or centuries of domestication, artificial selection also likely favored fermented foods with cultures that are resistant to invasion by pathogenic species. 
The microbes in fermented foods produce factors that control the growth of potential invaders ( Steinkraus, 1997 ) and thus help to stabilize the microbial population within the system. Researchers have found that kombucha does indeed have antimicrobial properties, including activity against many human pathogens ( Greenwalt, Ledford & Steinkraus, 1998 ; Sreeramulu, Zhu & Knol, 2000 ). These features of kombucha make it a tractable model of interspecies dynamics that has implications for food preservation and for human health. In this review, we provide a general overview of the cooperative and competitive interactions that occur during kombucha fermentation. For example, yeast produce the invertase enzyme which acts as a public good, breaking down sucrose that can then be used by both yeast and bacteria. The bacteria produce cellulose that becomes the pellicle, which may also act as a public good, protecting the liquid culture from colonization by competitors, delaying desiccation, and possibly acting as a resource store. All of these features of kombucha make it a promising model system for studying multispecies cooperation. Some of these characteristics point to other potential human uses for kombucha and kombucha-derived products, which we discuss at the end of the paper. The social biochemistry of kombucha Kombucha brewing begins with a solution of “sweet tea” (typically 5–10% (w/v) sucrose dissolved in brewed tea) and a small amount of kombucha starter culture (typically 10–20% liquid (v/v) and 2.5% biofilm (w/v)) from a previously fermented batch ( Jayabalan et al., 2014 ). The teas used as substrates for kombucha are variable. Black and green teas are most commonly used, but are far from the only substrates tested ( Jayabalan et al., 2014 ; Villarreal-Soto et al., 2018 ). 
Other substrates include oolong, jasmine, and mulberry teas ( Talawat et al., 2006 ), rooibos ( Gaggìa et al., 2019 ), coconut water ( Watawana et al., 2016 ), and teas produced from various medicinal herbs ( Battikh, Bakhrouf & Ammar, 2012 ; Velićanski, Cvetković & Markov, 2013 ). Regardless of the initial substrate composition, the starter culture itself provides the main microbial inoculum into the solution. However, airborne or other environmental microbes may interact with or become part of the solution and potentially contribute to the microbial community. While the microbes ferment the substrate, enzymes produced by the yeast cleave sucrose into glucose and fructose and convert these monomers into ethanol and carbon dioxide. Next, bacterial enzymes oxidize ethanol, generating acetic acid that results in a low-pH environment. The bacteria also produce cellulose which leads to biofilm formation ( Jayabalan et al., 2014 ; Chakravorty et al., 2016 ) (see Fig. 1 ). Below we provide details about these metabolic processes and the microbial social interactions that occur over the course of kombucha fermentation (see Table 1 ). Figure 1: Kombucha metabolism and microbial interactions. (A) Kombucha is brewed by adding tea and table sugar to a small amount of kombucha starter which contains yeast and bacteria. These microbes begin to break down the sugar, leading to a metabolic cascade that ends with a bubbly, acidic and slightly alcoholic beverage. (B) During the process of fermentation, cooperative and competitive interactions occur among microbes. The production of the public good invertase by yeast, the removal of waste products through metabolization of alcohol and the generation of the cellulose pellicle by bacteria are potentially cooperative functions. Antimicrobial metabolites, low pH, and the generation of a physical barrier inhibit the growth of competitors. 
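As a rough quantitative illustration of the cascade just described, the theoretical upper-bound yields can be worked out from stoichiometry alone. This is our sketch, not from the paper; the 70 g/L sucrose figure is a hypothetical mid-range brew, and real fermentations convert only part of the sugar while diverting carbon to cellulose and biomass:

```python
# Theoretical upper-bound yields for the cascade above (our sketch). The
# 70 g/L sucrose brew is hypothetical (mid-range of the 5-10% w/v quoted);
# real fermentations convert only part of the sugar and divert carbon to
# cellulose and biomass, so actual yields are far lower.

M_SUCROSE = 342.30  # g/mol, C12H22O11
M_ETHANOL = 46.07   # g/mol, C2H5OH
M_ACETIC = 60.05    # g/mol, CH3COOH

mol_sucrose = 70.0 / M_SUCROSE

# Invertase: sucrose + H2O -> glucose + fructose (two hexoses per sucrose)
mol_hexose = 2 * mol_sucrose

# Yeast fermentation: hexose -> 2 ethanol + 2 CO2
mol_ethanol = 2 * mol_hexose
print(f"ethanol ceiling: {mol_ethanol * M_ETHANOL:.1f} g/L")

# AAB oxidation: ethanol + O2 -> acetic acid + H2O (1:1)
print(f"acetic acid ceiling: {mol_ethanol * M_ACETIC:.1f} g/L")
```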
DOI: 10.7717/peerj.7565/fig-1
Table 1: Over the course of kombucha fermentation, microbes cooperate and compete. Many of these processes lead to products that have potential human uses as antiseptics and biomaterials.
Stage of fermentation | Competitive interactions | Cooperative interactions | Human uses
Yeast produce invertase | Possible competition over invertase | Yeast producing invertase as a public good | Invert sugar, various fermentations
Yeast ferment sugars into ethanol | Yeast inhibiting competitors with ethanol | Bacteria using ethanol as nutrient | Ethanol as an antiseptic and intoxicant
Bacteria oxidize ethanol to produce acetic acid | Bacteria inhibiting competitors with acidification | Bacteria metabolizing ethanol as an energy source | Acid as an antiseptic
Bacteria produce biofilm | Bacteria physically blocking competitors and creating anoxic environment in liquid | Spatially structuring kin, possible resource storage and protection from invading pathogens | Biomaterial, possibly one which protects from invasion by pathogens
DOI: 10.7717/peerj.7565/table-1
At the beginning of the kombucha fermentation process, yeast produce invertase which cleaves the disaccharide sucrose to its monosaccharide components, glucose and fructose. This phase appears to be the first opportunity for resource interaction between the microorganisms, as the freely liberated monomers are accessible to any microbe as a carbon source. Approximately 99% of the monosaccharides generated by Saccharomyces sp. invertase diffuse into the environment before the producing yeast can import them ( Gore, Youk & Van Oudenaarden, 2009 ). Thus, neighboring cells receive the vast majority of the monomers produced by invertase secreted by a focal cell, suggesting that the production of invertase (and the resultant products) fits the classic definition of a non-excludable public good. 
While the invertase produced by yeast appears to be a cooperative good, some yeast do not actually produce it (so called “cheaters”). Interestingly, a study with Saccharomyces cerevisiae has shown that yeast phenotypes that produce invertase are found in higher frequency than non-producer phenotypes when in the presence of Escherichia coli ( Celiker & Gore, 2012 ). In other words, when yeast are grown in co-culture with bacteria, cooperative invertase-producing yeast outperform cheaters that do not produce invertase. Celiker & Gore (2012) suggest that the rapid depletion of resources by bacteria leads to a scarcity of sugar in the environment, which in turn increases the frequency of the invertase-producing strain (since they are able to capture about 1% of the sugars they produce, while non-invertase-producers are completely starved). Interestingly, during kombucha brewing, bacteria rapidly transform many of these sugars into the cellulose pellicle (see section below on resource storage). Thus it may be the case that bacteria—by removing sugars from the solution and putting them into the pellicle—change the selective pressures within the kombucha solution so as to favor invertase-producing strains of yeast. After the yeast cleave sucrose into its component monomers using invertase, the yeast begin consuming these sugars and producing ethanol. Ethanol can be harmful to both yeast and bacteria, primarily via modifications to cellular membrane structure, function, and integrity (for a review on microbial tolerance to alcohols, see Liu & Qureshi, 2009 ). High levels of alcohol can threaten the viability of the microbes within kombucha. Potentially harmful levels of ethanol are reduced by bacteria that oxidize it and excrete acetic acid, thereby lowering the overall pH of the fermenting kombucha. These AAB that are part of the kombucha are obligate aerobes, meaning that they need access to oxygen to ferment ( Saichana et al., 2015 ). 
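The invertase dynamic described by Celiker & Gore can be made concrete with a deliberately minimal toy model (our construction, not taken from either paper): producers privately capture about 1% of the sugar they liberate, and every cell draws equally on whatever remains in the shared pool.

```python
# Toy model (ours) of invertase as a public good. Production costs are
# ignored for simplicity; the point is only that the private ~1% capture
# becomes decisive once bacteria drain the shared sugar pool.

def sugar_intake(is_producer, shared_pool, liberated=100.0, capture=0.01):
    """Sugar reaching one cell: a private slice of what it liberates
    (producers only) plus an equal draw on the shared pool."""
    private = capture * liberated if is_producer else 0.0
    return private + shared_pool

# Sugar-rich environment (no bacterial depletion): cheaters nearly match
# producers, so cheating can spread.
print(sugar_intake(True, shared_pool=50.0), sugar_intake(False, shared_pool=50.0))

# Bacteria deplete the shared pool: only producers eat at all, so
# invertase producers are favoured, as in the co-culture experiments.
print(sugar_intake(True, shared_pool=0.0), sugar_intake(False, shared_pool=0.0))
```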
In static conditions, production of the surface biofilm may increase access to oxygen for the microbes that are found within it, including yeast that are embedded in the cellulosic matrix. This may be another example of cooperation between yeast and bacteria that occurs during fermentation of the kombucha. Some strains of Dekkera/Brettanomyces can also produce acetic acid in the presence of oxygen ( Ciani & Ferraro, 1997 ); however, it is yet unclear what proportion of final acid is contributed by the yeast within kombucha. The dominant organic acids within the community are acetic acid, gluconic acid, and glucuronic acid ( Jayabalan et al., 2014 ; De Filippis et al., 2018 ; Gaggìa et al., 2019 ), but additional acids have been detected and quantified; these include lactic acid ( Jayabalan, Marimuthu & Swaminathan, 2007 ; Malbaša, Lončar & Djurić, 2008 ), citric acid ( Jayabalan, Marimuthu & Swaminathan, 2007 ), malic acid, tartaric acid ( Srihari & Satyanarayana, 2012 ) and a host of others (see Jayabalan et al. (2014) for a comprehensive list). The variety and abundance of acids produced in kombucha raises the question: is the acid produced simply a waste product or does it provide some benefit for the microbes that produce it? In general, low pH in the solution can select for microbes that are tolerant to acid, while potential competitors and invaders are excluded or inhibited. Acid tolerant yeast in kombucha (such as Dekkera/Brettanomyces ) are able to survive and even thrive within acidic conditions ( Blomqvist, 2011 ; Steensels et al., 2015 ) that are deleterious to other yeast genera. Similarly, AAB that are present in kombucha (such as Komagataeibacter ) are highly acid-tolerant, while other bacteria are far less tolerant and cannot survive in high acid conditions ( Trček, Mira & Jarboe, 2015 ). 
The ability of the kombucha community to generate and tolerate these acidic conditions may provide an overall benefit in terms of protecting the system from invasion by competitor microbes. Indeed, kombucha has been found to be able to inhibit pathogens in vitro and part of this effect (though not all of it) has been ascribed to its acidic character ( Greenwalt, Ledford & Steinkraus, 1998 ; Sreeramulu, Zhu & Knol, 2000 ). The bacteria-produced biofilm might provide protection from invasion and allow resource storage The most conspicuous facet of the kombucha symbiosis is the SCOBY (though the microbial community exists in the liquid culture as well as in the cellulosic biofilm). The biofilm initially forms a thin layer on the top of the liquid, as small bacteria-produced cellulose filaments rise to the top of the solution and aggregate together. The biofilm becomes larger and stronger with subsequent fermentations, often forming multiple pancake-like layers that are connected with filaments. Komagataeibacter xylinus , regarded as a model species for BC production ( Ross, Mayer & Benziman, 1991 ), has been characterized as a core contributor to the cellulose production in some kombucha cultures ( Reva et al., 2015 ; De Filippis et al., 2018 ) and has been previously described as the dominant species in regards to BC production in kombucha ( Marsh et al., 2014 ). However, recent studies have illustrated an even wider diversity of Komagataeibacter spp. in kombucha than previously known. Interestingly, De Filippis et al. (2018) found that Komagataeibacter xylinus dominates during fermentation in green or black tea at 20 °C, while Komagataeibacter saccharivorans has a growth advantage at 30 °C. 
As previously mentioned, other members of this genus have also been identified in kombucha—these include Komagataeibacter intermedius and Komagataeibacter rhaeticus , which were found to be abundant in green and black teas at 27 °C, while Gluconobacter entanii was identified nearly exclusively in kombucha fermented with rooibos teas ( Gaggìa et al., 2019 ). From these studies, it is clear that the environment has an impact on the composition of community members; there is no apparent “canonical” species composition across all substrates and all culture conditions. Accordingly, species of Acetobacter ( Sievers et al., 1995 ; Chen & Liu, 2000 ; Dutta & Gachhui, 2006 ; Zhang, Zhang & Xin, 2011 ), Gluconacetobacter spp. ( Yang et al., 2008 ; Trovatti et al., 2011 ) and Lactobacillus spp. ( Wu, Gai & Ji, 2004 ; Zhang, Zhang & Xin, 2011 ) have also been found in kombucha cultures. For aerobic species that produce BC, such as Komagataeibacter xylinus , agitated bioreactors increase cell growth and cellulose yield by improving the oxygen transfer rate ( Reiniati, Hrymak & Margaritis, 2017 ); however, rather than forming a surface pellicle, agitation produces spherical or asterisk-like particles of cellulose in the culture media ( Bi et al., 2014 ; Singhsa, Narain & Manuspiya, 2018 ). The optimal dissolved oxygen concentration for BC yield for a strain of this bacterium was reported to be 10% in fed-batch cultures ( Hwang et al., 1999 ), as greater oxygen concentrations result in a shift toward gluconic acid production and reduced cell viability, while lower concentrations reduce cell growth ( Lee et al., 2014 ). 
Additionally, a variety of carbon sources have been tested and shown to influence the production of BC, with glucose, sucrose, fructose, mannitol, molasses, and various other organic wastes or extracts serving as substrates (for comprehensive reviews on species, strains, carbon sources, culture times, and cellulose yields, see Chawla et al., 2009 ; Shah et al., 2013 ; Jozala et al., 2016 ). While glucose is the ideal biosynthetic building block for cellulose production ( Jozala et al., 2016 ), Komagataeibacter xylinus can convert it into gluconic acid via glucose dehydrogenase, which has a detrimental effect on cellulose production ( Kuo et al., 2016 ). Interestingly, the addition of 1% (v/v) ethanol to a culture medium containing Gluconacetobacter hansenii inhibited cell growth—but resulted in an increase of BC production and a decline of non-cellulose producing mutants ( Park, Jung & Park, 2003 ). It is unclear whether the yeast-produced ethanol in kombucha communities could perform a similar role with the bacterial partners, but this suggests an avenue for further exploration. While the majority of focus has understandably been concentrated on the bacteria-produced biofilm, yeast are known to produce biofilms as well, particularly when a part of mixed species assemblages ( Kawarai et al., 2007 ; Furukawa et al., 2010 ; León-Romero et al., 2016 ). It is entirely possible that yeast are contributing to the biofilm structure in kombucha as well. Dekkera/Brettanomyces have been shown to produce biofilms with surface adherence properties directly affected by pH and sugar concentrations ( Joseph et al., 2007 ). These yeasts could also produce biofilms at different rates based on ploidy status ( Ishchuk et al., 2016 ). These factors could account for some of the non-uniformity observed in the biofilms which grow as multiple layers with strands suspended down (see Fig. 2 ). Figure 2: Typical appearance of kombucha biofilm. 
At the top of the image is the multi-species biofilm which is made up of Komagataeibacter hansenii, Dekkera bruxellensis, Dekkera anomala , and Schizosaccharomyces pombe . Often, pendulous “strands” of material are seen dangling from the underside of the biofilm as they are in this image. The liquid underneath the biofilm is tea undergoing fermentation. DOI: 10.7717/peerj.7565/fig-2 In addition to the physical thickness of the biofilm, the extracellular polymeric substances (EPS) of the matrix can inhibit the diffusion of antibiotics or invading cells ( Stewart, 1996 ; Mah & O’Toole, 2001 ). The presence of this pellicle likely makes it more difficult for microbes landing on the surface to access free sugars that are within the kombucha solution. Williams & Cannon (1989) performed a battery of experiments to study the environmental role of the pellicle: in vitro UV irradiation showed that the pellicle decreases the bacteria’s susceptibility to UV rays compared to non-pellicle controls; experiments using apple slices as a substrate showed that the pellicle works to retain moisture in the environment; and pellicle-forming bacterial strains were able to outgrow other, unspecified wild strains of bacteria and molds. Additional experimental work is needed to investigate whether the pellicle provides this protective function in kombucha. Another possible benefit that the biofilm may provide is the storage of resources ( Jefferson, 2004 ). The biofilm is made of EPS (produced by microbes) which acts as a reservoir of carbon ( Flemming & Wingender, 2010 ). It can also include polysaccharides like levan which may function as a storage molecule ( Limoli, Jones & Wozniak, 2015 ), as supported by studies on Pseudomonas syringae ( Laue et al., 2006 ). 
This might allow it to function as a resource storage system that can only be accessed by the kombucha-associated bacteria and yeast inside the solution if/when sugars become unavailable (e.g., if the system is not being fed fresh, sugar-rich tea). However, further research is necessary to determine whether the pellicle is systematically broken down and used as a nutrient source during “starvation” of the kombucha. In addition—and related to this hypothesis—it could be that the removal of sugars from the solution by the bacteria creates an environment that favors cooperative invertase-producing yeast over cheater strains (see the section above on invertase; Celiker & Gore, 2012 ). In this way, the biofilm provides a resource storage function and also may modify the selective pressures on yeast, favoring cooperation. If the biofilm does indeed change the evolutionary dynamics within the system in ways that inhibit cheaters, this would be an intriguing parallel with certain processes that happen in multicellular bodies that encourage multicellular cooperation and inhibit cellular cheating ( Aktipis et al., 2015 ). Kombucha as a model system for studying cooperation Kombucha is characterized by many different social processes including public good production and cooperation to exclude competitors. This makes it a useful model system for understanding the evolution of cooperation, both in general terms and more specifically in the context of cooperation in multispecies microbial systems. Some ostensibly mutually-beneficial relationships may instead have evolved as indirect exploitation of each partner’s waste products or represent by-products of otherwise selfish traits ( West, Griffin & Gardner, 2007 ). There has also been discussion about whether strategies that permit the evolution of cooperation in social groups, such as cheater detection and cheater punishment, occur in microbial communities ( Travisano & Velicer, 2004 ). 
Kombucha may be a good model system to test for cheater control systems in microbes and also to investigate the evolution of microbial traits that benefit other microbes. Also, there is important work to be done to understand how multi-species cooperative systems evolve. Kombucha may be a tractable system in which to study this process. Theoretical models suggest that when cooperators get the benefits of interacting with one another (through a process called behavioral assortment), cooperation becomes more viable, regardless of whether individuals are related or even the same species ( Fletcher & Doebeli, 2009 ). Behavioral assortment occurs when cooperators have more interactions with one another than with non-cooperators—and it is a general principle that can select for the evolution of altruism in diverse systems ( Fletcher & Doebeli, 2009 ). Behavioral assortment is also at the heart of why strategies like “Walking Away” from non-cooperators select for cooperation in both partnerships ( Aktipis, 2004 ) and groups ( Aktipis, 2011 ). Kombucha could provide a model system in which to experimentally test these models and uncover the mechanisms that influence the evolution of cooperation in multi-species systems. A substantial body of work has focused on social interactions within biofilms, which run the gamut from cooperative to competitive (for a review, see Nadell, Drescher & Foster, 2016 ). As the vast majority of biological systems in the natural world exist as multi-species communities, it is important for biofilm researchers to use models that reflect this reality ( Tan et al., 2017 ). The kombucha SCOBY offers an easily-reproducible platform to explore questions about synergy and antagonism in multispecies interactions. 
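The assortment principle cited above can be made concrete with a few lines of code. The sketch below is our own toy version of a donation game with an assortment parameter r, in the spirit of Fletcher & Doebeli (2009) but not taken from their paper: cooperators pay a cost c to confer a benefit b, and r is the extra probability that a cooperator's partner is also a cooperator.

```python
def replicator_assortment(b, c, r, x0=0.5, generations=200, w0=1.0):
    """Final cooperator frequency in a donation game with assortment r.

    Toy model: a cooperator's partner is a cooperator with probability
    r + (1 - r) * x, a defector's with probability (1 - r) * x, where x
    is the current cooperator frequency; w0 is a baseline fitness.
    """
    x = x0
    for _ in range(generations):
        w_coop = w0 + b * (r + (1 - r) * x) - c  # receives benefit, pays cost
        w_defect = w0 + b * (1 - r) * x          # receives benefit, pays nothing
        mean_w = x * w_coop + (1 - x) * w_defect
        x = x * w_coop / mean_w                  # discrete replicator update
    return x

# Cooperation spreads exactly when assortment beats the cost/benefit ratio (r*b > c):
print(replicator_assortment(b=3.0, c=1.0, r=0.5))  # r > c/b: cooperators take over
print(replicator_assortment(b=3.0, c=1.0, r=0.2))  # r < c/b: cooperators die out
```

With b = 3 and c = 1 the threshold is r = 1/3: runs above it converge toward a cooperator frequency of 1, and runs below it toward 0, which is the assortment principle in its simplest form.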
While advances in various “omics” technologies have allowed greater interrogation of such biofilms ( Tan et al., 2017 ; Burmølle et al., 2014 ), technical challenges remain; these include tracking and maintaining various community members in mixed assemblages ( Elias & Banin, 2012 ). We argue that the well-characterized organisms often found in the kombucha community could reduce the difficulties in such endeavors. Specifically, the substrates for cell growth are well-established (see previous sections on teas and carbon sources), there are selective media recipes designed to isolate various organisms in acidic or fermented conditions ( Beuchat, 1993 ; Makdesi & Beuchat, 1996 ; Sharafi, Rasooli & Beheshti-Maal, 2010 ; Morneau, Zuehlke & Edwards, 2011 ), and the organisms are already well-adapted to in vitro -like conditions. Kombucha can also provide a model system for investigating the evolution of cooperation among hosts and their microbes. There is growing interest in host-microbiome interactions ( Rosenberg & Zilber-Rosenberg, 2016 ) and the evolutionary consequences of cooperation and conflict among hosts and their microbes ( Alcock, Maley & Aktipis, 2014 ; Wasielewski, Alcock & Aktipis, 2016 ; Foster et al., 2017 ). Kombucha might provide a model system for certain aspects of the eukaryote-bacteria interactions that occur between hosts and their microbiomes. Both kombucha and host-microbiome interactions involve a close association between bacterial and eukaryotic cells. In kombucha, eukaryotes are represented by yeast genera, which can include Zygosaccharomyces, Candida, Torulaspora, Pichia, Brettanomyces/Dekkera, Schizosaccharomyces , and Saccharomyces ( Mayser et al., 1995 ; Teoh, Heard & Cox, 2004 ; Marsh et al., 2014 ; Jayabalan et al., 2014 ; Reva et al., 2015 ). 
There is a long history of using yeast as a model system for human disease and health ( Botstein, Chervitz & Cherry, 1997 ), which suggests that it may be viable to use kombucha as a model system for human health issues that involve interactions of the host eukaryotic cells with bacteria. Fermented foods are diverse microbial ecosystems Kombucha is not the only fermented food that is a potentially useful model system for studying multispecies cooperation. Cheese rinds are diverse biofilms formed by bacterial and fungal communities which are influenced by the various processing and aging steps associated with cheese production ( Wolfe et al., 2014 ). Despite the collection of samples across 10 countries and 137 cheese types, Wolfe et al. (2014) show that these communities were dominated by 14 bacterial and 10 fungal genera and demonstrate highly reproducible successional dynamics across vast geographic distances. The adoption of cheese as a model system has produced numerous intriguing findings, such as widespread horizontal gene transfer in rind-associated bacteria ( Bonham, Wolfe & Dutton, 2017 ) and fungi ( Ropars et al., 2015 ), characterization of a suite of “domestication” genes in a common industrial milk fermenting bacterium ( Passerini et al., 2010 ), the rapid experimental evolution of “domesticated” Penicillium mutants during serial propagation on in vitro cheese agar ( Bodinaku et al., 2019 ), and the application of lactococcal-produced bacteriocins to influence the populations of starter bacteria and pathogenic contaminants ( Guinane et al., 2005 ). Another fermented food that has been established as a useful model system is sourdough. The production of sourdough involves an association between lactic acid bacteria and yeast using flour (typically wheat or rye) as a carbon and energy source ( Gobbetti, 1998 ). 
Mature sourdough is the end product of a series of acidifying fermentation steps initiated by a diverse assemblage of lactic acid bacteria (both facultative and obligate heterofermentative as well as homofermentative species), aerobic Gram-positive and Gram-negative bacteria, Enterobacteriaceae , yeasts, and molds, which eventually results in the dominance of a few obligate heterofermentative species of Lactobacillus (sometimes Leuconostoc ) and yeast ( Minervini et al., 2014 ). Commonly isolated species of Lactobacillus bacteria include Lactobacillus sanfranciscensis, L. fermentum, L. plantarum, L. brevis , L. rossiae , and other members of the same genus (see Huys, Daniel & De Vuyst, 2013 ; Minervini et al., 2014 ), while yeast species are more diverse and commonly include Saccharomyces cerevisiae, Candida humilis, Kazachstania exigua, Pichia kudriavzevii, Wickerhamomyces anomalus , and Torulaspora delbrueckii ( De Vuyst et al., 2016 ). The bacteria preferentially hydrolyze maltose and liberate glucose into the medium, allowing its use by neighboring microbes (particularly maltose-negative species of yeast) and stabilizing the cooperative interaction ( De Vuyst & Neysens, 2005 ). Much like kombucha, the nature and quality of the substrate (flour for sourdough) and the fermenting conditions have a direct impact on the final community structure that develops, though the various methods used to sample, isolate, and identify the microbes preclude a conclusive link between a sourdough and its microbial consortia ( De Vuyst et al., 2014 ). All of this variation affects the genera and species involved at various stages of fermentation, which in turn affects the final product ( De Vuyst & Neysens, 2005 ). Kefir may also be a useful fermented food for studying multispecies cooperation. It is a fermented milk beverage propagated by a starter “grain” composed of a complex but stable community of lactic acid bacteria, AAB, and yeast ( Simova et al., 2002 ). 
As in kombucha and sourdough, the yeast component of the kefir community varies, with Saccharomyces and Candida as the genera most frequently identified, but it is also known to contain Kluyveromyces ( Farnworth, 2005 ). The kefir system appears to be characterized by cooperation as well since yeast provide the bacteria with growth-promoting compounds (amino acids, vitamins) during the early stages of fermentation, and the resulting bacterial products can be exploited as a source of energy for the yeast ( Loretan et al., 1998 ; Viljoen, 2001 ). These are just some of the fermented foods that allow opportunities for examining cooperation and competition in microbial communities—more generally, they may provide useful and tractable experimental systems for studying these social and evolutionary dynamics. Kombucha and other fermented foods may provide benefits to humans Fermented foods have many potential benefits to humans and so understanding the dynamics of cooperation and conflict among microbes in systems like kombucha can have important applications as well. In developing countries they already provide an important source of protein and vitamins ( Steinkraus, 1997 ). The microbes in fermented foods also help protect the food from microorganisms that might otherwise spoil it—maintaining the nutritional quality of the food and helping to keep it safe for human consumption for long periods of time ( Steinkraus, 1997 ). It is possible that kombucha may provide similar benefits. In addition, due to the ability of kombucha to inhibit pathogen growth via acidity ( Greenwalt, Ledford & Steinkraus, 1998 ) and other mechanisms ( Sreeramulu, Zhu & Knol, 2000 ; Bhattacharya et al., 2016 ; Shahbazi et al., 2018 ), kombucha and its constituents are excellent candidates for developing novel agents to control pathogens and food-spoilage microbes. This is a possibility we are currently exploring in our laboratory. 
It is not yet known how kombucha influences human health, although tea polyphenols have been shown elsewhere to confer health benefits ( Dufresne & Farnworth, 2000 ) including potentially decreasing cancer risk ( Ann Beltz et al., 2006 ). During kombucha fermentation, it was shown that the concentration of polyphenols first decreases and then spikes at day 12, leading to higher levels of polyphenols than were originally in the solution, possibly due to the release of additional catechins or enzymes by cell lysis ( Jayabalan, Marimuthu & Swaminathan, 2007 ). However, work by Gaggìa et al. (2019) has shown that polyphenol content increases at first and then decreases over further fermentation time. The type of tea and microbial population (and their interactions with each other) seem to directly influence the level of polyphenols. It is possible that these compounds in kombucha may have some positive effect on health, but more studies are needed to identify these potential influences on health and the mechanisms underlying them. Cellulose biofilms produced by AAB—like the SCOBY in kombucha—have been developed into useful materials for medical and textile purposes. For example, cellulose biofilms have been developed for medical dressings ( Lin et al., 2013 ), skin tissue repair ( Fu, Zhang & Yang, 2013 ), incorporation into composite materials ( Shah et al., 2013 ), and even clothing ( Lee & Ghalachyan, 2015 ). These examples suggest that biofilms produced by kombucha fermentation can be used in a variety of beneficial products. The role of viruses in fermented foods is currently unknown An additional potential player in the cooperative and competitive dynamics inside kombucha and other fermented foods is viruses. The role of viruses in fermented foods has been largely unstudied due to challenges related to culturing, but recent advances have been made with the widespread adoption of metagenomic technologies ( Park et al., 2011 ). 
Analysis of kimchi, sauerkraut, and fermented shrimp indicate that these foods contain less complex viral communities than environmental samples, possibly due to their limited microbial hosts ( Park et al., 2011 ). Jung et al. (2011) found evidence that the phage burden of the dominant bacteria during the late stages of kimchi fermentation has a direct impact on bacterial abundance and accordingly affected the resulting community dynamics. In sauerkraut, the succession of host bacteria—and their subsequent effects on fermentation—may be directly influenced by the activity of their associated phages ( Lu et al., 2003 ). Phages have not yet been studied in kombucha fermentation. But, given their presence in other fermented foods, it is likely that they play a role in kombucha. Not all viruses are harmful—in fact, a growing body of research shows that many viruses may sometimes benefit their hosts ( Roossinck, 2011 , 2015 ). An intriguing topic for future study is the possibility that phages and other viruses maintain conditions that permit multispecies cooperation in kombucha and other fermented foods. Viruses have been shown to slow down microbial cell cycles ( Nascimento, Costa & Parkhouse, 2012 ) and alter metabolism and the composition of metabolites that are produced ( Sanchez & Lagunoff, 2015 ). These features may potentially stabilize microbial ecosystems during food fermentation, perhaps inhibiting the proliferation of some microbes while allowing others. Some viruses also have the capacity to restore intestinal morphology and mucosal immunity of germfree or antibiotic-treated mice without causing disease ( Kernbauer, Ding & Cadwell, 2014 ), suggesting that they may also have a positive effect on host epithelial cells during digestion. It is possible that viruses contribute to the stability and viability of the ecological systems of fermented foods and the fermentation processes in the gut microbiome. 
To the extent that viruses stabilize microbial communities, their role in maintaining a microbial ecosystem favorable for viral growth may provide an evolutionarily beneficial strategy for those viral strains. The role of viruses in multispecies cooperative interactions has been underexplored and is worthy of future research. Conclusions Kombucha is a fermented tea that is brewed by combining sweet tea with a small amount of kombucha starter which contains both yeast and bacteria. Over the course of fermentation, the yeast and bacteria cooperate in many ways—some of which are known and some which require further characterization—to metabolize resources and keep out invading microbes. Kombucha offers a unique opportunity for exploring general questions about the evolution of cooperation and also for exploring more specific questions about cooperation in complex multispecies systems. The social lives of the microbes within the community—particularly how they exchange resources, signal potential partners, or even deter so-called “free-riders”—are exciting directions for future work. There are also many potential applications of kombucha for human nutrition, material development, and even for controlling the growth of harmful microbes. Additional Information and Declarations Competing Interests The authors declare that they have no competing interests. Author Contributions Alexander May prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. Shrinath Narayanan authored or reviewed drafts of the paper, approved the final draft. Joe Alcock approved the final draft, extensive discussions and emails. Arvind Varsani contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. Carlo Maley contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. 
Athena Aktipis contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. Data Availability The following information was supplied regarding data availability: The research in this article did not generate any data or code as it was collected from the literature and represents a review of existing material synthesized into a new context. Funding Research reported in this article was supported by the National Cancer Institute of the National Institutes of Health under Award Number U54CA217376 and John Templeton Foundation Grant Number 46724. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Acknowledgements The authors wish to thank the Microbiome and Behavior Project members and the Cooperation and Conflict lab for discussion and insights during the development of this article.
In today's health-conscious community, kombucha is all the rage. Its appeal comes from its accessibility and alleged health benefits, which range from introducing probiotics to killing deleterious bacteria in the human body. But as is the case for many things in science, there is more to kombucha than meets the eye—literally. The microscopic microbes inhabiting this fermented concoction could offer insight into how microbial communities interact, more specifically on how symbiotic relationships form within complex microbial models. Athena Aktipis, an assistant professor in the Department of Psychology and associate faculty in the Biodesign Center for Biocomputing, Security and Society, was a fan of kombucha herself, before delving deeper. "Honestly, I started working on kombucha because I really liked the taste of it. I started brewing it in my kitchen for my own consumption. After brewing it for a couple of months, I would come home from work and just stare at it, asking, 'how do you work.' Being a scientist, I got on Google Scholar to learn more, but I didn't find much." In response to this, Aktipis teamed up with other researchers to take all the pieces of the puzzle she had found in pre-existing literature and put them together to see the bigger picture on how kombucha operates and how the different species of microbes interact and cooperate within. Alexander May, a prior researcher in Aktipis's lab, led the efforts to expand on this knowledge in a review paper published in PeerJ (The Journal of Life and Environmental Sciences). Arvind Varsani, an associate professor in the Biodesign Center for Fundamental and Applied Microbiomics and associate faculty in the Biodesign Center for Mechanisms of Evolution, and Carlo Maley, faculty in the School of Life Sciences and an associate professor in the Biodesign Center for Biocomputing, Security and Society, served as collaborators on the review paper. 
The paper deconstructed each component of the microbial system, offering insight into how the microbes interact and what resources they utilize as a by-product of the fermentative processes used for making kombucha. "We think kombucha is important as a model system because it's an easy-to-grow microbial community that can potentially answer interesting questions about cooperation between different species," May said. "Microbes (including the bacteria and yeast in kombucha) actually have a lot of complex social behaviors that scientists are only really starting to learn about. We think that by understanding what's going on at the small scale, we can get clues as to what happens at the larger scale and see if the same patterns even hold true in human societies. People have been eating fermented foods like kombucha for centuries, but it's only recently that scientists have started digging into the systems themselves to understand how and why they can benefit humans." Kombucha is made by first introducing sucrose to black or green tea, followed by the addition of kombucha liquid from a previous batch. A biofilm, also from a previous batch of kombucha, is then placed on top of the liquid, and the concoction is allowed to ferment for 10 to 14 days. Although this may seem like nothing more than a straightforward recipe for making a tasty refreshment, these fermentative steps actually sustain a wide variety of microbes, illustrating various ecological concepts we usually only see in real-time with non-microscopic organisms. For example, the yeast found in the kombucha liquid produces invertase, an enzyme that bacteria and yeast use to metabolize sugars, as a public good. Similarly, bacteria produce a biofilm at the top of the batch that protects microbes from outside invaders, provides oxygen and offers space for the storage of resources. Ethanol and acid, the by-products of fermentation, also keep invaders at bay. 
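The division of labour just described (yeast-secreted invertase freeing sugars, yeast fermenting them to ethanol, and acetic acid bacteria oxidising the ethanol to acid) can be sketched as a toy daily mass balance. The rate constants below are invented for illustration, not measured values from the paper, and biomass and CO2 losses are ignored so that mass is conserved in these units:

```python
def ferment(days=14, sucrose=70.0, k_inv=0.5, k_yeast=0.3, k_aab=0.2):
    """Toy daily mass balance of kombucha fermentation (all in g/L).

    sucrose --invertase--> monosaccharides --yeast--> ethanol --AAB--> acid
    The k_* parameters are hypothetical first-order rates per day.
    """
    mono = ethanol = acid = 0.0
    for _ in range(days):
        hydrolysed = k_inv * sucrose   # extracellular invertase (a public good)
        fermented = k_yeast * mono     # yeast convert free sugars to ethanol
        oxidised = k_aab * ethanol     # acetic acid bacteria oxidise ethanol
        sucrose -= hydrolysed
        mono += hydrolysed - fermented
        ethanol += fermented - oxidised
        acid += oxidised
    return sucrose, mono, ethanol, acid

print(ferment())  # sucrose nearly exhausted after 14 days; acid still accumulating
```

Even this crude chain reproduces the qualitative succession described in the article: sugars fall, ethanol rises and is then consumed, and acid accumulates steadily.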
However, there are many systems that have various microbial species at play, so why pick kombucha to illustrate these relationships? "There is an ease of management that comes from it (because it is easy to make), but it also has a sort of complexity because it contains so many species, and in that way, it is similar to a microbial system you would see in nature," Aktipis said. "It is at this really nice boundary between simplicity and complexity." Kombucha has proven to be an efficient way to study interspecies interactions on the microbial scale, but it has much more to offer. From this study, Aktipis and collaborators are working on using kombucha as a model to develop interventions for bettering human health. It has recently been brought to light how important the human microbiome is—the balance of microbes in our bodies, which we have co-evolved with since the dawn of humankind, is pivotal to human health. Throwing that balance off could have significant adverse effects, but treating humans with microbes could restore that balance. "Right now, we are trying to develop kombucha as a system that could allow us to create new antimicrobial products, which are based on multiple species," Aktipis said. "Whereas drugs are used to kill organisms, we want to ask, 'how can we cultivate a diverse microbial community that can outcompete pathogens.'" To do this, researchers are taking kombucha and introducing new invaders or removing some chemical or microbial component. "We are trying to figure out which parts of cooperation in kombucha are most important," Aktipis said. "This paper is the tip of the iceberg of a whole research program we are designing." These researchers are the first to look into kombucha as a model system—Aktipis says this is a good reminder to be aware of the world around us and to never stop asking 'why.' "There is this tendency in science to only look at things that are already being studied. 
A lot of what we did with this kombucha project is coming back to the importance of observation, observing the natural world. I think that's been a little bit lost, and it's also much more fun to be aware of your world and to try to understand it."
10.7717/peerj.7565
Biology
Engineered bacteria show promise for sustainable biofuel industry, researchers say
Junya Kato et al. Metabolic engineering of Moorella thermoacetica for thermophilic bioconversion of gaseous substrates to a volatile chemical, AMB Express (2021). DOI: 10.1186/s13568-021-01220-w
http://dx.doi.org/10.1186/s13568-021-01220-w
https://phys.org/news/2021-05-bacteria-sustainable-biofuel-industry.html
Abstract Gas fermentation is one of the promising bioprocesses to convert CO 2 or syngas to important chemicals. Thermophilic gas fermentation of volatile chemicals has the potential for the development of consolidated bioprocesses that can simultaneously separate products during fermentation. This study reports the production of acetone from CO 2 and H 2 , CO, or syngas by introducing the acetone production pathway using acetyl–coenzyme A (Ac-CoA) and acetate produced via the Wood–Ljungdahl pathway in Moorella thermoacetica . Reducing the carbon flux from Ac-CoA to acetate through genetic engineering successfully enhanced acetone productivity, which varied on the basis of the gas composition. The highest acetone productivity was obtained with CO–H 2 , while autotrophic growth collapsed with CO 2 –H 2 . By adding H 2 to CO, the acetone productivity from the same amount of carbon source increased compared to CO gas only, and the maximum specific acetone production rate also increased from 0.04 to 0.09 g-acetone/g-dry cell/h. Our development of the engineered thermophilic acetogen M. thermoacetica , which grows at a temperature higher than the boiling point of acetone (58 °C), would pave the way for developing a consolidated process with simplified and cost-effective recovery via condensation following gas fermentation. Introduction Metabolic engineering of thermophilic microorganisms has several benefits compared to mesophilic microorganisms, such as a lower contamination risk, less energy required for cooling the fermentation system, and a faster production rate due to the advantageous thermodynamics (Sonnleitner and Fiechter 1983 ). One potential application is bioreactive distillation, which involves simultaneously fermenting volatile chemicals and collecting them by distillation (Zeldes et al. 2018 ). Bioreactive distillation involves fewer steps and a lower cost of purifying target chemicals than conventional fermentation processes. 
In addition, keeping fermentation products at a low concentration prevents the bacteria from being exposed to chemical levels that inhibit their growth and metabolism. Acetone is a volatile chemical used as an industrial solvent and a precursor of important downstream products (Anbarasan et al. 2012 ; Peters et al. 2015 ). Currently, industrial acetone production depends on petrochemical phenol production via the cumene process. Although the cumene process is cost-effective and widely used, it carries the risk of a shortage of its nonrenewable fossil feedstock. Therefore, there is a demand for an alternative process, and acetone production by bioconversion from renewable feedstock is one option. Historically, acetone production by bioconversion has been studied as acetone-butanol-ethanol (ABE) fermentation. However, the focus there is on alcohol production, and much effort has gone into inhibiting acetone production as a by-product (Jiang et al. 2009 ; Luo et al. 2016 ; Xu et al. 2015 ). Metabolic engineering has enabled the development of acetone-producing strains using hosts such as Escherichia coli (Bermejo et al. 1998 ). A few studies have reported thermophilic acetone fermentation from carbohydrates (Shaw et al. 2015 ; Straub et al. 2020 ). In addition, metabolic engineering of Synechocystis sp. PCC 6803, Acetobacterium woodii , and Clostridium ljungdahlii has made acetone production by mesophilic organisms possible from CO 2 or CO gas as the carbon source (Banerjee et al. 2014 ; Hoffmeister et al. 2016 ; Zhou et al. 2012 ). CO 2 fixation by mixotrophy improves conversion of organic compounds to acetone, as shown by an engineered strain of C. ljungdahlii (Jones et al. 2016 ). Among the various bioconversion applications, gas fermentation utilizing anaerobic acetogenic bacteria (acetogens) is attracting increasing attention. 
Gas fermentation is economically and environmentally friendly because inexpensive gaseous waste feedstocks, such as steel mill waste gas or syngas primarily comprising CO and H2, can be used (Claassens et al. 2016; Durre and Eikmanns 2015; Liew et al. 2016). Although practical applications still demand higher productivity and cost-effective processes, combining bioreactive distillation as the purification step with gas fermentation can reduce waste and cost, in addition to engineering the metabolism of acetogens. The economic feasibility of acetone production from syngas by bioreactive distillation has been evaluated using hypothetical systems with engineered thermophilic strains of Moorella thermoacetica (Redl et al. 2017). Bioreactive distillation also has the advantage of removing acetone, which inhibits cell growth, and could maintain a high cell density without the need to replace the culture medium if appropriate bioreactors are used. However, the detailed metabolic design and construction of thermophilic strains for gas fermentation, their availability, and, therefore, experimental data to support the system are missing. In this study, we genetically engineered the thermophilic homoacetogen M. thermoacetica to produce acetone from gaseous substrates at high temperature. We also developed a strategy to increase the carbon flux to acetone by genetic engineering and evaluated the productivity from CO2–H2, CO, and CO–H2 as a model syngas. To our knowledge, this is the first study to provide strains for thermophilic gas fermentation of acetone. Materials and methods Bacterial strains and growth conditions We used M. thermoacetica ATCC 39073 and its derivatives in this study (Table 1).
Modified ATCC1754 PETC medium comprising 1.0 g of NH4Cl, 0.1 g of KCl, 0.2 g of MgSO4·7H2O, 0.8 g of NaCl, 0.1 g of KH2PO4, 0.02 g of CaCl2·2H2O, 2.0 g of NaHCO3, 10 mL of trace elements, 10 mL of Wolfe's vitamin solution (Tanner 1989), and 1.0 mg of resazurin per litre of deionized water was used as the basal medium (Tanner et al. 1993). The pH was adjusted to 6.9. The medium was prepared anaerobically by boiling and cooling under a N2–CO2 (80:20) mixed-gas atmosphere. After cooling, the medium was dispensed into 125-mL serum bottles under a N2–CO2 mixed-gas atmosphere. The serum bottles were crimp-sealed and autoclaved. Table 1 Strains and plasmids used in this study Before starting a culture, we added yeast extract and l-cysteine·HCl·H2O to final concentrations of 1.0 and 1.2 g/L, respectively. Fructose (2.0 g/L) was added for routine cultivation and to examine acetone production from sugar. The final volume was adjusted to 50 mL. To add gas substrates, we replaced the headspace of the serum bottles with CO2–H2 (20:80) (0.1 MPa), or we added CO (0.04 MPa) and additional H2 (0.04 MPa) after replacing the headspace of the serum bottles with N2 gas at atmospheric pressure. The temperature was maintained at 55 °C with shaking at 180 rpm. Plasmid construction We constructed two plasmids, pHM17 and pHM5, to introduce the thermophilic acetone operon into the pyrF or the pduL2 region of the chromosome of M. thermoacetica (Table 1). We synthesized the thermophilic acetone operon under the constitutive glyceraldehyde-3-phosphate dehydrogenase (G3PD) promoter after codon optimization of the four genes encoding acetone biosynthetic enzymes for expression in M. thermoacetica (GenScript). The genes constituting the thermophilic acetone operon were ctfA (Tmel_1136) and ctfB (Tmel_1135) from Thermosipho melanesiensis, thl (TTE0549) from Caldanaerobacter subterraneus subsp. tengcongensis, and adc (CA_P0165) from C. acetobutylicum.
The open reading frames of these four genes were driven by the constitutive G3PD promoter (Kita et al. 2013), and the gene order was determined on the basis of biochemical information about the enzymes: activity, stability, and complex formation (Zeldes et al. 2018). Each gene was separated by an intergenic spacer with a ribosome-binding site. The synthesized DNA fragment was amplified by polymerase chain reaction (PCR) using KOD plus ver.2 (Toyobo Co., Ltd., Osaka, Japan), and this synthetic operon was inserted into the plasmids with a pyrF marker in either the pyrF or the pduL2 region using the In-Fusion HD cloning kit (Clontech Laboratories, TaKaRa Bio, Shiga, Japan). We used pK18-ldh (Kita et al. 2013) or pK18-ΔpduL2::ldh (Iwasaki et al. 2017) as a template to amplify the plasmids (Table 1). Table 2 lists the primers used for PCR. We used JK50 and JK51 to amplify the insert and JK52 and JK53 to amplify the plasmid backbones. Finally, we cloned the constructed plasmids in E. coli HST08 and confirmed the DNA sequences by Sanger sequencing. Table 2 PCR primers used in this study Transformation and selection of mutants We performed the genetic transformation of M. thermoacetica as previously described (Kita et al. 2013). All procedures were performed under aerobic conditions, except for cell growth. Briefly, we cultured the M. thermoacetica ΔpyrF mutant to the mid-log phase in basal medium supplemented with 2 g/L of fructose as a carbon source and 10 µg/mL of uracil instead of yeast extract, and harvested it by centrifugation. Next, the cells were washed twice with 272 mM sucrose solution and used for electroporation with DNA methylated in E. coli TOP10 harboring the plasmid pBAD-M1281. The transformed cells were then cultured at 55 °C for 24–48 h with a low uracil concentration of 1 µg/mL before inoculation into modified ATCC1754 PETC medium containing agar without uracil or yeast extract in roll tubes (Hungate 1969).
The roll tubes were cultured at 55 °C, and the colonies were subcultured to confirm the insertion of the thermophilic acetone operon by using PCR. We used the primer set JK226 and JK227 to amplify the pyrF region and 1181-up-F and 1181-up-R to amplify the pduL2 region. The constructed strain with higher acetone productivity has been deposited to NITE (NITE AP-03217). Analytical methods We sampled and analyzed 1 mL of the culture medium at each time point and calculated the dry cell weight using the optical density (OD) at 600 nm (1 g [dry cell weight]/L = 0.383 OD) (Iwasaki et al. 2017 ). The culture supernatant was analyzed for the amount of fructose, formate, acetate, and acetone using high-performance liquid chromatography (HPLC) (LC-2000 Plus HPLC; Jasco, Tokyo, Japan) equipped with a refractive index detector (RI-2031 Plus; Jasco), a Shodex RSpak KC-811 column (Showa Denko, Kanagawa, Japan), and a Shodex RSpak KC-G guard column (Showa Denko) at 60 °C. Ultrapure water containing 0.1% (v/v) phosphoric acid was used as the mobile phase at a flow rate of 0.7 mL/min, and crotonate was used as an internal standard (Miura et al. 2014 ). The gas composition in the headspace of the serum bottles was analyzed by using GC-8A gas chromatography (Shimadzu, Kyoto, Japan) equipped with a thermal conductivity detector and a stainless steel column packed with activated carbon at 70 °C. Argon was used as the carrier gas (Miura et al. 2014 ). The amount of dissolved carbonate in the culture medium was measured by using a total organic carbon analyzer (TOC-L; Shimadzu). Nucleic acid sequences The nucleic acid sequences of the synthetic acetone operon have been deposited to GenBank (accession number MW436696). Results Design and construction of genetically engineered M. thermoacetica strains for thermophilic acetone production Moorella thermoacetica grows at 45 °C–65 °C. 
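The biomass estimate described in the Analytical methods (an OD600 of 0.383 corresponding to 1 g dry cell weight per litre) can be sketched in Python; this is an illustrative reimplementation, not the authors' code, and the example OD readings are placeholders rather than data from the study.

```python
# Dry-cell-weight estimate from OD600, following the conversion factor
# stated in the Analytical methods: 1 g (dry cell weight)/L = 0.383 OD
# (Iwasaki et al. 2017).
OD_PER_GRAM_DCW = 0.383  # OD600 units corresponding to 1 g-DCW/L

def dcw_from_od(od600):
    """Convert an OD600 reading to dry cell weight in g/L."""
    return od600 / OD_PER_GRAM_DCW

# An OD600 reading of 0.383 corresponds to exactly 1 g-DCW/L.
print(round(dcw_from_od(0.383), 3))  # 1.0
print(round(dcw_from_od(0.766), 3))  # 2.0
```

The same conversion underlies the dry-cell-weight axes of the growth profiles reported in the Results.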
A pathway for thermophilic acetone production, functional up to 70 °C, has been proposed with candidate enzymes (Zeldes et al. 2018). In this pathway, the first reaction converts two molecules of acetyl-CoA (Ac-CoA) to acetoacetyl-CoA (Acac-CoA) via thiolase (Thl), followed by two reactions that produce acetoacetate (Acac) and then acetone (Fig. 1a). When Acac is produced by CoA transferase (CtfAB), one molecule of acetate is required to receive a CoA molecule from Acac-CoA. M. thermoacetica supplies both Ac-CoA and acetate, the substrates of this pathway, when grown on sugars or gaseous substrates. In M. thermoacetica, Ac-CoA is an intermediate in producing acetate as the end metabolite. We selected thermophilic enzymes and designed the acetone biosynthesis operon (Fig. 1b), and the synthetic thermophilic acetone operon was successfully introduced into the wild-type (WT) background of M. thermoacetica (Fig. 1c and d). Next, we cultured the pyrF::acetone strain in basal medium supplemented with fructose at 55 °C (the optimum growth temperature). Acetone was successfully produced and released into the culture supernatant, indicating functional expression of the enzymes. Consistent with the absence of homologous genes encoding a secondary alcohol dehydrogenase in the genome of M. thermoacetica, the produced acetone was not converted to isopropanol, unlike the case in some acetogens (Hoffmeister et al. 2016; Kopke et al. 2014; Pierce et al. 2008). However, we detected a large amount of acetate (about three times more than acetone) in the culture supernatant, indicating that Ac-CoA is mostly converted to acetate (Fig. 2a, b and e). Fig. 1 Design and construction of acetone-producing Moorella thermoacetica strains. a Acetone production pathway. Two molecules of Ac-CoA are converted to one molecule of acetone via three reactions using one molecule of acetate.
The reactions release a CoA molecule, an Ac-CoA molecule, and a CO2 molecule, in addition to an acetone molecule. The acetate pathway from Ac-CoA is also shown: Ac-CoA is converted to acetyl phosphate by phosphotransacetylase, which is encoded by pduL1 as well as pduL2, followed by conversion to acetate. b Schematic representation of the synthetic acetone production operon. Genes and promoters are shown by block and fine arrows, respectively. c, e Schematic representations of the introduction of the synthetic thermophilic acetone operon by homologous recombination into the pyrF (c) and the pduL2 (e) region. The gray boxes highlight DNA regions used for recombination, and the line arrows represent primers used for PCR. The primer set to amplify the pyrF region is JK226 and JK227, and that for the pduL2 region is 1181-up-F and 1181-dw-R. d, f Verification of the presence of the thermophilic acetone operon in the pyrF (d) and the pduL2 (f) region. The genomic region of each gene was amplified by PCR, and the size shift due to the insertion was confirmed. The size of the PCR product of the pyrF region shifted from 0.5 to 4.8 kb on introduction of the thermophilic acetone operon and the selection marker (d). Similarly, the PCR product of the pduL2 region shifted from 1.0 to 4.9 kb (f). Fig. 2 Acetone and acetate production from fructose by the recombinant Moorella thermoacetica strains. a, c Dry cell weight according to the OD for the pyrF::acetone strain (a) and the pduL2::acetone strain (c). b, d Concentration of fructose and excreted metabolites in the culture supernatant measured by HPLC for the pyrF::acetone strain (b) and the pduL2::acetone strain (d). Data represent the mean with SDs of three biological replicates. Most error bars are smaller than the symbols of the data plots. e The acetone and acetate productivity per 1 mol of fructose is shown with black (acetone) and gray (acetate) bars.
The productivity was calculated from measurements taken after complete consumption of the supplemented fructose. The parental strain, ATCC 39073 (wild type), which does not produce acetone, is shown for comparison. Data represent the mean with SDs of three biological replicates. Deletion of pduL2 and preservation of pduL1 lead to higher acetone production The introduction of the thermophilic acetone operon did not by itself cause high acetone production in M. thermoacetica. We hypothesized that the Thl responsible for the first reaction could not capture Ac-CoA because of the abundant phosphotransacetylase activity of PduL1 and PduL2 in M. thermoacetica. PduL2 shows a more than tenfold lower Michaelis constant (Km = 0.04 mM) for Ac-CoA compared to PduL1 (Km = 0.49 mM), while Thl shows a Km value of 0.27 mM (Breitkopf et al. 2016; Loder et al. 2015). Although we did not measure PduL1, PduL2, Thl, and Ac-CoA levels in the cells, the low Km value of PduL2 might explain the abundant acetate production in the pyrF::acetone strain. To test this hypothesis, we knocked out pduL2 and measured acetone production. We introduced the thermophilic acetone operon to replace the pduL2 coding region, which enabled us to delete pduL2 and introduce the acetone biosynthetic genes at the same time (Fig. 1e and f). We cultured the pduL2::acetone strain in basal medium with fructose and found a significant increase in acetone production and a decrease in acetate production, resulting in 1.0 ± 0.02 mol-acetone/mol-fructose and 0.45 ± 0.03 mol-acetate/mol-fructose (Fig. 2c–e). The acetone–acetate ratio was 0.35 ± 0.03 for the pyrF::acetone strain but increased to 2.23 ± 0.21 in the pduL2::acetone strain. Acetone production was dominant over acetate production; thus, we successfully directed more of the Ac-CoA pool to the acetone pathway.
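As a quick arithmetic cross-check (a sketch, not the authors' analysis), the acetone–acetate ratio of the pduL2::acetone strain follows directly from the reported mean molar yields on fructose:

```python
# Reported mean yields of the pduL2::acetone strain on fructose (Fig. 2e):
acetone_yield = 1.0   # mol-acetone per mol-fructose
acetate_yield = 0.45  # mol-acetate per mol-fructose

# The acetone-acetate ratio is the quotient of the two yields; it agrees
# with the reported 2.23 +/- 0.21 within rounding of the mean yields.
ratio = acetone_yield / acetate_yield
print(round(ratio, 2))  # 2.22
```

The small difference from the reported 2.23 reflects rounding of the published means, not a discrepancy.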
Thermophilic acetone production from CO2–H2 We aimed to produce acetone from gaseous substrates with high productivity by using the pduL2::acetone strain. CO2 and H2 are the best-studied substrates for autotrophic acetogenesis. First, we tested CO2 as a carbon source and H2 as an energy source. To set up the culture, the bacterial strain was grown in basal medium supplemented with fructose, and this culture was used to inoculate fresh medium with CO2–H2 in the headspace of the vial for the pre-culture. This step allowed the bacterial cells to completely consume the fructose and then adapt to CO2–H2 metabolism. We inoculated fresh medium supplemented with CO2–H2 with the adapted cells and recorded the culture profile. There was almost no growth during 254 h of cultivation (Fig. 3a). Excreted metabolites accumulated over time (Fig. 3b), indicating that the cells were metabolically active. Acetone was successfully produced in CO2–H2, reaching 1.8 ± 0.08 mM in the culture supernatant after 254 h. Acetate production reached 3.3 ± 0.09 mM, dominant over acetone production, although the pduL2::acetone strain was engineered to have a higher carbon flux to acetone in culture supplemented with fructose. In addition, formate, an intermediate in the Wood–Ljungdahl pathway (WLP), also accumulated in the culture supernatant, reaching 1.2 ± 0.12 mM, indicating that the metabolic flow of the WLP was affected. Fig. 3 Growth and metabolite profile of the pduL2::acetone strain in CO2–H2 as the substrate. a Dry cell weight according to the OD. b Concentration of the excreted metabolites in the culture supernatant measured by HPLC. Data represent the mean with SDs of three biological replicates.
Most error bars are smaller than the symbols of the data plots. Thermophilic acetone production from CO or syngas The pduL2::acetone strain showed no growth in CO2–H2, so we tested a more energetically favorable gas, CO, for acetone production with autotrophic growth. M. thermoacetica uses CO as an energy source and reaches a higher biomass than with H2 because of higher adenosine triphosphate (ATP) generation (Hu et al. 2016; Kerby and Zeikus 1983). To initiate the culture, we adapted the pduL2::acetone strain to CO in the same way as for CO2–H2, especially because CO inhibits M. thermoacetica growth without adaptation (Kerby and Zeikus 1983). The bacterial cells proliferated using CO as a carbon and energy source, in contrast to CO2–H2, as observed by an obvious increase in cellular biomass (Fig. 4a). We also observed acetone and acetate production, with maximum concentrations of 1.1 ± 0.04 and 4.2 ± 0.08 mM, respectively (Fig. 4b). The acetone–acetate ratio was 0.27 ± 0.01, again acetate dominant. No formate production was observed, in contrast to CO2–H2, indicating that the metabolic flow of the WLP was not affected. Thus, the pduL2::acetone strain grew autotrophically and produced acetone in CO. Fig. 4 Growth and metabolite profile of the pduL2::acetone strain in CO and CO–H2. a, c Dry cell weight according to the OD. b, d Excreted metabolites measured by HPLC. Profiles of CO (a, b) and CO–H2 (c, d) are shown, respectively. The values shown are means of three biological replicates, and error bars represent the SD. Some error bars with a small error range overlap with the symbols of the data plots. Syngas mainly comprises CO and H2 and is an applicable substrate for sustainable gas fermentation. We tested a 1:1 CO and H2 mixture as a model case. The culture was set up in the same way as in CO. Whereas the culture profile showed almost the same biomass and growth rate as in CO without H2 (Fig.
4c), the acetone productivity significantly improved from 1.1 ± 0.04 to 3.3 ± 0.10 mM (Fig. 4b and d) from the same amount of CO, and the increase was larger than that of acetate (from 4.2 ± 0.08 to only 7.0 ± 0.10 mM), indicating enhanced carbon flux to the acetone production pathway. The acetone–acetate ratio increased from 0.27 ± 0.01 to 0.47 ± 0.01; it was still acetate dominant but higher than with CO alone. In addition, there was no formate accumulation. Therefore, by adding H2, the acetone productivity from the same amount of carbon source increased, and the maximum specific acetone production rate also increased from 0.04 ± 0.003 to 0.09 ± 0.005 g-acetone/g-dry cell/h. The growth and metabolite profiles of autotrophic acetone production in CO-containing gases can be compared using the fermentation parameters summarized in Table 3. The electrons derived from H2 seemed to be invested in acetone and acetate production rather than in cellular biomass, because H2 supplementation did not affect cell growth. The electron input also appeared to be directed to acetone rather than acetate. Table 3 Fermentation parameters for acetone production from CO-containing gases Discussion Acetone production by introduction of a thermophilic acetone biosynthetic operon in M. thermoacetica showed that the selected proteins were functionally expressed. However, when the WT background was used as the host, the end product was acetate dominant. To increase acetone productivity over acetate, we made use of a unique feature of M. thermoacetica: two functional phosphotransacetylase genes (pduL1 and pduL2) are involved in acetate production. In the pyrF::acetone strain, three enzymes, PduL1 and PduL2 for acetate production and Thl for acetone production, compete to process Ac-CoA (Fig. 1a). PduL2 shows a lower Km for Ac-CoA than PduL1 and Thl and was likely the cause of the dominant acetate production.
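The three-way competition for Ac-CoA can be illustrated with a simple Michaelis–Menten sketch using the cited Km values (0.04 mM for PduL2, 0.49 mM for PduL1, 0.27 mM for Thl); assuming equal Vmax for the three enzymes is purely an illustrative assumption, since the paper notes that enzyme and Ac-CoA levels were not measured.

```python
# Relative flux shares of the three enzymes competing for Ac-CoA, under
# the illustrative assumption of equal Vmax, so v_i = S / (Km_i + S).
KM_MM = {"PduL2": 0.04, "PduL1": 0.49, "Thl": 0.27}  # cited Km values, mM

def relative_rates(s_mm):
    """Normalized share of flux each enzyme would take at [Ac-CoA] = s_mm."""
    v = {name: s_mm / (km + s_mm) for name, km in KM_MM.items()}
    total = sum(v.values())
    return {name: vi / total for name, vi in v.items()}

# At a low Ac-CoA level (e.g. 0.05 mM), PduL2 takes the largest share,
# consistent with the dominant acetate production in the pyrF::acetone strain.
shares = relative_rates(0.05)
print(max(shares, key=shares.get))  # PduL2
```

Under this toy model, PduL2's low Km lets it dominate precisely when Ac-CoA is scarce, which is the regime expected in vivo.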
The removal of pduL2 successfully enhanced acetone production in the pduL2::acetone strain. As a result, the acetone–acetate ratio significantly increased, to a level comparable to that of engineered C. ljungdahlii expressing the acetone synthesis enzymes from a lactose-inducible promoter under fructose or CO fermentation growth conditions (Banerjee et al. 2014). The acetone production ratio of our strain from CO was further increased by adding H2, as discussed below. A similar effect of pduL2 knockout was seen in our previous report, in which partial disruption of the acetate production pathway by pduL2 knockout enhanced lactate production in strains metabolically engineered to produce lactate (Iwasaki et al. 2017). The production of lactate, which is formed by reduction of pyruvate, was significantly enhanced by eliminating pduL2 because of the increased available pools of Ac-CoA and therefore pyruvate, while pduL1 disruption had a marginal effect. Eliminating only pduL2 is also useful for controlling the metabolic flow toward acetate while maintaining autotrophy on syngas. Acetate not only acts as a substrate for acetone synthesis but also sustains sufficient net ATP production by substrate-level phosphorylation. The pduL2::acetone strain maintains autotrophy in CO-containing gases, while autotrophic growth collapses in CO2–H2. The autotrophy of acetogens is energetically at the limit of thermodynamics (Schuchmann and Muller 2014). When M.
thermoacetica grows in CO2–H2, the net ATP production is calculated to be only 0.5 mol-ATP/mol-acetate (Schuchmann and Muller 2014; see also Online Resource Additional file 1: Figure S1): 2CO2 + 4H2 → Acetate + 2H2O (+0.5 ATP). A positive ATP level is possible only when acetate is produced, because the ATP balance is –0.5 at the point of Ac-CoA production (–0.5 mol-ATP/mol-Ac-CoA). The consumed ATP is recovered by acetate production, which yields 1 mol-ATP/mol-acetate. Because acetone production uses both Ac-CoA and acetate as substrates, diverting Ac-CoA to the acetone pathway lowers the ATP production derived from substrate-level phosphorylation. When acetone is produced at maximum efficiency from H2 and CO2, incorporating all the produced acetate, the net ATP is zero (Additional file 1: Fig. S1): 3CO2 + 8H2 → Acetone + 5H2O (+0 ATP). The pduL2::acetone strain did not grow in CO2–H2 (Fig. 3a), which can be explained by this low net ATP production. In fact, formate accumulation was observed in the metabolite analysis (Fig. 3b), indicating ATP shortage. In the WLP, formate is produced by the reduced nicotinamide adenine dinucleotide phosphate (NADPH)-dependent reduction of CO2 in M. thermoacetica. This formate is then ligated to tetrahydrofolate (THF) via an ATP-dependent reaction (Schuchmann and Muller 2014). Therefore, low net ATP production causes ATP shortage and formate accumulation.
In addition to formate, acetone and acetate are produced by the pduL2::acetone strain, indicating that the cells were metabolically active but not able to grow. Acetate production is linked to ATP production by substrate-level phosphorylation, and the ATP produced is used for cellular maintenance and formate ligation to THF. In the case of the engineered A. woodii producing acetone from CO2–H2, acetate was produced in high abundance relative to acetone to provide sufficient ATP production and maintain autotrophic growth (Hoffmeister et al. 2016). ATP shortage is thus a challenge for autotrophic acetone production with a low level of acetate as the by-product in CO2–H2. In contrast, when a CO-containing gas is used, acetone production occurs as follows (Additional file 1: Fig. S1): 8CO + 3H2O → Acetone + 5CO2 (+2 ATP). In addition, when H2 is supplied (Additional file 1: Fig. S1): 3CO + 5H2 → Acetone + 2H2O (+0.75 ATP). In both cases, the net ATP is positive and can sustain autotrophic growth. It has been discussed that when the metabolic pathway is diverted so that acetate is not formed from Ac-CoA, the WLP would be severely ATP limited (Fast and Papoutsakis 2012). However, the acetone pathway utilizes acetate that is formed from Ac-CoA (Fig. 1a), which is advantageous for supplying ATP. Applying H2 enhances the acetone production per consumed CO from 0.13 mol-acetone/mol-CO without H2 to 0.33 mol-acetone/mol-CO with H2.
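The overall stoichiometries quoted above can be checked for element balance with a short script; this is a verification sketch, not part of the original analysis (acetate is counted as acetic acid, C2H4O2, and acetone as C3H6O):

```python
# Element-balance check of the four overall gas-fermentation equations
# quoted in the Discussion. Each species is a map {element: atom count}.
SPECIES = {
    "CO2": {"C": 1, "O": 2},
    "CO": {"C": 1, "O": 1},
    "H2": {"H": 2},
    "H2O": {"H": 2, "O": 1},
    "acetate": {"C": 2, "H": 4, "O": 2},  # counted as acetic acid
    "acetone": {"C": 3, "H": 6, "O": 1},
}

def totals(side):
    """Sum atoms per element over one side of a reaction."""
    out = {}
    for coeff, name in side:
        for element, n in SPECIES[name].items():
            out[element] = out.get(element, 0) + coeff * n
    return out

REACTIONS = [
    ([(2, "CO2"), (4, "H2")], [(1, "acetate"), (2, "H2O")]),   # +0.5 ATP
    ([(3, "CO2"), (8, "H2")], [(1, "acetone"), (5, "H2O")]),   # +0 ATP
    ([(8, "CO"), (3, "H2O")], [(1, "acetone"), (5, "CO2")]),   # +2 ATP
    ([(3, "CO"), (5, "H2")], [(1, "acetone"), (2, "H2O")]),    # +0.75 ATP
]

balanced = all(totals(left) == totals(right) for left, right in REACTIONS)
print(balanced)  # True: all four overall equations are element-balanced
```

Note that only the element balances are verified here; the ATP yields annotated in the comments come from the paper's bioenergetic accounting (Schuchmann and Muller 2014), not from this script.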
In theory, acetone production should be 2.5 times higher with H2 supplementation, leaving no acetate as a by-product, when the reaction proceeds at maximum efficiency. Our experiment with the pduL2::acetone strain showed that H2 supplementation significantly improved acetone production, to ~2.5 times higher (Fig. 4) than with CO alone, although the amount of acetate also increased, to ~1.7 times higher. The acetate remaining unincorporated into the acetone pathway indicates that acetone productivity could potentially be improved in both CO and CO–H2 by tuning the final amount of acetate to zero without losing autotrophic growth. One explanation for the abundant acetate remaining in our gas fermentation is the limit of enzymatic reactions, such as the CoA transferase that transfers CoA from Acac-CoA to acetate. An increase in acetate concentration is required to start solventogenesis in C. acetobutylicum, because its CoA transferase shows a high Km of 1200 mM for acetate, while it shows a low Km of ~7–56 µM for Acac-CoA (Wiesenborn et al. 1989). Although we did not analyze the enzymatic properties of the CoA transferase from T. melanesiensis, it is conceivable that the enzyme has a high Km for acetate and that the acetate concentration is a limiting factor. In fact, culture on fructose provided a much higher concentration of acetate (Fig. 2d). Further examination and optimization of the selected enzymes would contribute to higher productivity, in addition to experiments such as the use of bioreactors to supply abundant substrates so that the products, including acetate, reach high concentrations. It is also possible that PduL1, which is responsible for the remaining acetate production in the pduL2::acetone strain, is expressed at higher levels on the gaseous substrates.
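To illustrate why low acetate could limit the CoA transferase step, here is a Michaelis–Menten saturation estimate using the C. acetobutylicum Km of 1200 mM cited above; since the Km of the T. melanesiensis enzyme used in this study was not measured, this is only a plausibility sketch, not a property of the actual enzyme.

```python
def saturation(s_mm, km_mm=1200.0):
    """Fraction of Vmax at substrate concentration s: v/Vmax = S / (Km + S)."""
    return s_mm / (km_mm + s_mm)

# At the 4-7 mM acetate seen in the gas cultures, an enzyme with
# Km = 1200 mM would run at well under 1% of its maximum rate; the
# higher acetate reached on fructose would relieve this somewhat.
print(round(saturation(7.0), 4))  # ~0.0058
```

This order-of-magnitude picture is consistent with the paper's suggestion that acetate concentration, rather than the upstream Ac-CoA supply, limits acetone formation in the gas cultures.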
This is because when the acetone production rate was compared between the fructose culture and the CO–H2 culture, both showed similar rates (0.12 ± 0.01 g-acetone/g-dry cell/h on fructose and 0.09 ± 0.00 g-acetone/g-dry cell/h on CO–H2, respectively, calculated from Figs. 2d and 4d). In other words, ATP would not be a limiting factor in the CO–H2 culture, owing to the sufficient production of acetate linked to ATP production. This level of acetate might be necessary for autotrophic acetone production at this rate. Otherwise, repression of PduL1 expression, or replacement of the enzyme itself with a homologue with a larger Km value, could reduce acetate production and increase acetone productivity. Finally, yet importantly, acetone production by engineering acetogenic metabolism has the benefit of redox balance, in addition to the use of Ac-CoA and acetate as substrates. In many cases of redox balancing by native and engineered metabolism, unused electrons in the metabolic pathways are dedicated (or disposed of) to the reactions producing end products. Redox imbalance is often a cause of low yields of end products and poor bacterial growth. The acetone pathway from Ac-CoA requires no reducing energy, so balancing redox during acetone production is difficult using, for example, an E. coli system under anaerobic conditions because of the absence of reactions for the unused electrons (Bermejo et al. 1998). However, the WLP produces acetate as the sole end product via Ac-CoA with redox balance, requiring no redox reactions from Ac-CoA through acetate. Therefore, it is beneficial to use the WLP to produce acetone with regard to redox balance as well. In this report, we successfully engineered the thermophilic acetogen M. thermoacetica for autotrophic acetone production from syngas. Acetone productivity was improved by partial deletion of the production pathway for acetate, which is used as a substrate as well as for energy conservation. M.
thermoacetica grows at a temperature higher than the boiling point of acetone (58 °C); therefore, thermophilic gas fermentation processes producing volatile chemicals can be built and evaluated. Although further study is needed to improve productivity toward industrial applications, the gas fermentation process can be made simpler and more cost-effective than before by incorporating a purification step based on distillation of the acetone produced from gaseous substrates. Availability of data and materials All data collected or analyzed during this study are included in this published article.
Acetone, a volatile solvent used for everything from removing nail polish and cleaning textiles to manufacturing plastics, could get a sustainability boost from a new strain of bacteria engineered by a research team based in Japan. They published the details of the heat-loving, acetone-producing bacteria called Moorella thermoacetica on April 23 in AMB Express. Acetone is typically produced through the widely used cumene method, which is cost-effective but not sustainable. The process, developed in 1942, involves converting two non-renewable resources into acetone and phenol, another chemical that helps manufacture a number of materials, including plastics. More environmentally friendly options exist—including gas fermentation, a bioprocess that converts carbon dioxide, monoxide and hydrogen into chemicals and fuels—but they tend to be cumbersome and costly, according to Yutaka Nakashimada, professor in the Graduate School of Integrated Sciences for Life, Hiroshima University, who led the research. One of the major expenses is the downstream processing, which involves separating out the desired chemicals from the other materials. "We thought the key is a simultaneous separation of the product from the ongoing fermentation," Nakashimada said. "Our choice was to produce volatile chemicals by using a group of bacteria thriving at high temperatures." The bacteria, M. thermoacetica, eat the gaseous feedstocks of hydrogen, carbon dioxide and monoxide—which can be procured from renewable resources—to produce acetone. Since they grow at a temperature higher than the boiling point of acetone, the acetone produced is a gas that evaporates and can be distilled as the bacteria make it. It streamlines the traditional system into a simultaneous process. 
"Our development of the engineered bacteria could pave the way for developing a consolidated process with simplified and cost-effective recovery via condensation following gas fermentation on a large scale suitable for industrial production," said paper co-first author Junya Kato, specially appointed assistant professor in the Graduate School of Integrated Sciences for Life, Hiroshima University. To develop this productive bacterial strain, the researchers genetically engineered bacteria with modified metabolic processes. "To our knowledge, this is the first study to provide strains of bacteria that thrive at high temperatures for gas fermentation of acetone," Kato said. "Although further study would be needed to improve the productivity for realization of the industrial applications, the gas fermentation process can be simpler and more cost-effective than before." The researchers plan to scale their work and study the productivity of their bacteria in industrial conditions. "We may need to genetically engineer the metabolism of the strain further," Nakashimada said. "Our ultimate goal is the industrialization of the gas fermentation of the 'gas-to-gas' process that is simpler and lower-cost."
10.1186/s13568-021-01220-w
Chemistry
Miniscule amounts of impurities in vacuum greatly affecting OLED lifetime
Hiroshi Fujimoto et al, Influence of vacuum chamber impurities on the lifetime of organic light-emitting diodes, Scientific Reports (2016). DOI: 10.1038/srep38482 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep38482
https://phys.org/news/2016-12-miniscule-amounts-impurities-vacuum-greatly.html
Abstract We evaluated the influence of impurities in the vacuum chamber used for the fabrication of organic light-emitting diodes on the lifetime of the fabricated devices and found a correlation between lifetime and the device fabrication time. The contact angle of ITO substrates stored in the chamber under vacuum was used to evaluate chamber cleanliness. Liquid chromatography-mass spectrometry was performed on Si wafers stored in the vacuum chamber before device fabrication to examine the impurities in the chamber. Surprisingly, despite the chamber and evaporation sources being at room temperature, a variety of materials were detected, including previously deposited materials and plasticizers from the vacuum chamber components. We show that the impurities, and not differences in water content, in the chamber were the source of lifetime variations even when the duration of exposure to impurities only varied before and after deposition of the emitter layer. These results suggest that the impurities floating in the vacuum chamber significantly impact lifetime values and reproducibility. Introduction Advances in carbon-based semiconductors have brought organic electronics close to realizing a wide variety of lightweight, low-cost, flexible, and energy-efficient devices including light-emitting diodes, solar cells, and transistors that will have an enormous impact on our daily lives. Organic light-emitting diodes (OLEDs), the most advanced of these technologies, already form the basis for a new generation of commercially viable smart phone displays and large-screen televisions 1 and are being developed for flexible lighting and display panels, which cannot be realized with inorganic LED or LCD technology 2 , 3 . 
Although progress has been rapid regarding many facets of the performance, processing, and development of organic electronics, difficulty obtaining reproducible operational lifetimes for organic devices fabricated with the same structure and materials, even when made in the same laboratory, is a problem that continues to mystify researchers. Here we show that small amounts of impurities floating in the chamber used to fabricate vacuum-deposited OLEDs are a source of large variations in operational lifetime even when the time spent in the chamber differs by only a short amount. Our results suggest that reproducible and long lifetimes can be obtained in OLEDs and other organic devices that include a vacuum-deposition step, such as solar cells and transistors, without the use of ultra-high vacuum chambers by controlling the device fabrication and cleanliness of the chamber. Furthermore, the strong influence of miniscule amounts of impurities on lifetime found here highlights a major challenge facing organic devices fabricated by spin-coating, inkjet, and other printing methods, for which elimination of impurities is more difficult. Practical OLEDs rely on organic materials with high electroluminescence efficiencies and good stability under electrical operation. Internal quantum efficiencies ( η int ) of 100% have already been achieved through the development of second-generation emitter materials based on phosphorescence 4 and, more recently, third-generation emitters based on thermally-activated delayed fluorescence (TADF), with TADF emitters eliminating the need for the rare metals used in phosphorescent materials 5 , 6 . A long lifetime is also critical for commercial OLEDs, so research to develop durable device structures, reduce driving current by improving light out-coupling efficiency 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , and understand degradation mechanisms is in progress worldwide 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 . 
The lifetime of OLEDs, which combine a large number of organic materials, must be evaluated with good reproducibility to advance this area, but experimentally measured lifetimes often vary depending on conditions related to things such as the vacuum chamber, location, date, and operator. The difference between lifetimes measured for devices fabricated in academic laboratories and for those being mass produced can be particularly large. However, very few studies discuss why lifetimes for OLEDs fabricated with the same structure and materials can vary so widely. The degradation mechanisms affecting the lifetime of OLEDs can be either intrinsic or extrinsic. Degradation mechanisms intrinsic to the organic materials or device structure include interfacial degradation such as the organic/cathode or organic/anode interface 17 , 18 , trap formation 19 , and polaron-exciton interactions 20 . Chemical reactions and Joule heating have also been reported to cause bulk degradation 21 , 22 , 23 . Thus, researchers have been actively developing materials and device structures to prevent such intrinsic degradation. Impurities on the indium tin oxide (ITO) substrate surface are one source of extrinsic degradation that can lower lifetime, and the characteristics and lifetime of OLEDs can be improved by exposure to UV or O 2 plasma, which decompose organic impurities on the ITO surface 26 , 27 , 28 . Contamination from the outgassing of sealing resin 29 and halogen impurities in the organic material 30 , 31 are another two sources of degradation that have been identified. Furthermore, residual water in the vacuum chamber during fabrication 24 , 25 , which particularly affects the interface between the hole transport layer and the emission layer, has been shown to reduce lifetime by participating in degradation-inducing electrochemical reactions with the organic materials. 
However, the influence of impurities other than water in the deposition chamber is still poorly studied even though the possibility of such impurities being in the chamber is high. Impurities in the vacuum chamber can arise from a variety of sources that are often overlooked. Although the development of oil-free vacuum pumps eliminated oil pollution as a problem, many key components of the vacuum chamber still require vacuum grease, which can outgas when heated such as by the evaporation sources, and residual traces of the oil used during the processing of the stainless steel parts of a chamber can also be released in vacuum 32 , 33 , 34 . Most chambers rely on some polymer-based parts such as O-rings and insulating resins, which have outgassing rates much higher than those of metal components, and such polymer materials can outgas unreacted components ( e.g. , plasticizers and curing agents), decomposition products, and absorbed gases 35 . Even with the multitude of potential contamination sources in the vacuum chamber, the influence of such impurities on OLED lifetime has yet to be studied in detail. To fill this gap and clarify the factors that affect lifetime reproducibility in OLEDs, we investigated the impurities in an OLED vacuum chamber and their impact on the lifetime of highly efficient TADF-based OLEDs. We find that the length of time the devices spend in the chamber during fabrication can greatly affect the lifetime even though similar initial characteristics are obtained for the devices. These results indicate that longer lifetimes in organic devices can be achieved by shortening the device fabrication time. 
To determine if impurities in the vacuum chamber could be the cause of the variations, we analyze the impurities adsorbed on substrates kept in the chamber for various durations using contact angle, liquid chromatography-mass spectrometry (LC-MS), and wafer thermal desorption gas chromatograph mass spectrometry (WTD-GC-MS) and find a correlation between the number of chamber impurities and the device lifetime. Results Relationship between lifetime and device fabrication time We first evaluated the batch-to-batch reproducibility of OLED characteristics and lifetime by measuring devices with the same structure fabricated in 16 separate batches over a span of four months ( Group I ). Figure 1(a) shows the external quantum efficiencies ( η ext ) and lifetimes of the TADF devices in the chronological order of their deposition. The η ext were measured at 1000 cd/m 2 , and the lifetime (LT90) is the time it took for the luminance to drop to 90% of the initial (1000 cd/m 2 ) when operated at a constant current. Table 1 summarizes the initial characteristics at 1000 cd/m 2 and lifetime of these devices ( Group I ), and Supplementary Fig. S1 shows typical J - V - η ext characteristics measured for the OLEDs. Table 1 Initial characteristics and lifetime of the devices. Full size table Figure 1 Initial characteristics and lifetime of Group I devices. ( a ) The η ext and lifetime of the TADF devices made in separate batches numbered in chronological order. ( b ) Lifetime as a function of device fabrication time, which is the time from the beginning of the HAT-CN deposition until the end of the Al deposition. Values are all averaged for two to four devices per batch. Full size image Although the initial characteristics of the devices can be well reproduced from batch to batch, the lifetime showed a large variance with values ranging from 76 h to 173 h. 
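The LT90 metric used above (the time for luminance to fall to 90% of its initial 1000 cd/m 2 value under constant-current driving) can be read off a measured decay curve by linear interpolation between samples. A minimal sketch; the sample times and luminance values below are invented for illustration, not data from the paper:

```python
def lt90(times_h, luminance):
    """Return the time at which luminance first falls to 90% of its
    initial value, linearly interpolating between adjacent samples."""
    threshold = 0.9 * luminance[0]
    for i in range(1, len(times_h)):
        if luminance[i] <= threshold:
            t0, t1 = times_h[i - 1], times_h[i]
            l0, l1 = luminance[i - 1], luminance[i]
            # Interpolate the crossing time within the bracketing interval.
            return t0 + (threshold - l0) / (l1 - l0) * (t1 - t0)
    return None  # luminance never dropped to the threshold

# Invented decay trace: starts at 1000 cd/m^2 and crosses the
# 900 cd/m^2 threshold between the 100 h and 150 h samples.
lifetime = lt90([0, 50, 100, 150], [1000.0, 960.0, 910.0, 880.0])
```

The same interpolation applies to other luminance-drop criteria (e.g. LT50) by changing the threshold factor.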
Since the lifetimes in this case do not correlate with the η ext , the lifetime variation cannot be explained by a difference in the current density at 1000 cd/m 2 . On the other hand, a correlation between device fabrication time, defined here as the total time from the beginning of the HAT-CN deposition until the end of the Al deposition, and lifetime could be found, as shown in Fig. 1(b) with lifetime decreasing as the device fabrication time increases. Variations in the device fabrication time arise from the time needed to adjust the deposition rate of each layer before beginning to deposit the layer on the substrate. Therefore, the device fabrication time is a factor that must be considered and controlled when trying to compare lifetimes. Because we think that organic impurities in the chamber may be affecting the lifetime, we surveyed the impurities that deposit on clean Si wafers kept in the vacuum chamber for different durations. The evaporation sources were kept at room temperature while the Si wafers were stored in the chamber. On the surface of a wafer stored in the chamber for 30 min, 13 materials were detected using LC-MS. Storing a Si wafer in the chamber for 15 h resulted in the detection of an additional 36 materials, for a total of 49 detected materials, and the ion counts increased for 12 of the 13 materials also detected for the 30-min wafer, indicating an increase in the deposited amount of those materials. Thus, the substrate can be contaminated just by being in the chamber, and the amount of these impurities mixed in the device will increase with the device fabrication time. Furthermore, the contact angles measured on three occasions over the period of the experiments for ITO substrates that were loaded in the vacuum chamber for 30 min before loading the device substrates increased from 5° before loading to 19–25° after storage in the chamber. 
These results suggest that the organic impurities detected by LC-MS deposited in high enough amounts to raise the contact angle. In light of these device, LC-MS, and contact angle results, we suspected that the impurities in the vacuum chamber are an important factor contributing to batch-to-batch variations in lifetime. Dependence of lifetime on impurities and water in vacuum chamber In addition to the organic impurities, water in the vacuum chamber is well known to decrease lifetime 24 , 25 and could lead to a correlation between device fabrication time and lifetime. Therefore, we next tried to experimentally isolate the effects of the water and impurities in the vacuum chamber on the lifetime. The following experiments were all performed two months after the Group I experiments, and materials other than those used in the devices fabricated here were deposited in the chamber during this span. Since we expect that many of the impurities measured by LC-MS are residue from previous depositions that are released from the vacuum chamber walls during device fabrication, we tried to remove those impurities by cleaning the chamber. Deposition shields were replaced with clean shields, and the surfaces in the chamber were wiped with acetone to remove deposited organic materials. After high-vacuum evacuation, the evaporation sources were heated to their maximum temperature to complete the cleaning. The amount of water was expected to be equal before and after cleaning while the amount of organic impurities should be reduced. Figure 2 shows the device lifetime and the partial pressure of water for batches of devices fabricated before cleaning ( Before ), after chamber cleaning and high-vacuum evacuation overnight ( Cleaning I ), and after high-vacuum evacuation for an additional 2 days following the deposition of Cleaning I ( Cleaning II ). To eliminate its influence, the device fabrication time of all devices was 160 min. 
Although the partial pressure of water measured by Q-mass during deposition increased from 3.3 × 10 −5 Pa for Before to 1.4 × 10 −4 Pa for Cleaning I , the device lifetime after cleaning greatly improved from 19 h to 99 h. On the other hand, the lifetimes were nearly the same for Cleaning I and Cleaning II even though the partial pressure of water during Cleaning II (3.7 × 10 −5 Pa) was lower. Thus, the increased amount of water during Cleaning I appears to have had little effect on the lifetime. Although Yamamoto et al . demonstrated a great decrease in lifetime when devices were fabricated in the ultra-high vacuum region with a water partial pressure of 3 × 10 −7 Pa and water amounts ranging over four orders of magnitude were introduced by storing the incomplete devices in a chamber with a water partial pressure of 3 × 10 −4 Pa for varying lengths of times 25 , the negligible influence in this paper is reasonable since the increase in water was much smaller (only a four-fold increase in partial pressure). Figure 2 Effect of chamber cleaning on lifetime and vacuum environment. Device lifetime (bars), contact angles of ITO substrate stored in the vacuum chamber for 30 min before deposition (circles), and partial pressures of water during deposition for the batches fabricated before cleaning ( Before ), after chamber cleaning and evacuation to high vacuum overnight ( Cleaning I ), and evacuated to high vacuum for an additional two days ( Cleaning II ). Full size image The impurities were again examined using LC-MS on Si wafers that were stored in the vacuum chamber for 15 h before fabricating either the Before or Cleaning I batches of devices. Although 88 materials were detected for both batches, the quantity of 58 of the materials decreased after chamber cleaning, leading to a decrease in the total ion count for all detected impurities of more than 15%. 
The contact angle of ITO substrates stored in the chamber for 30 min before loading the device substrates improved from 43° before cleaning to 17° after cleaning. Although the organic impurities detected after cleaning were still sufficient to affect the contact angle, the increase in contact angle was greatly reduced by cleaning. Thus, the impurities in the vacuum chamber were correlated with the lifetime, and changes in the amount of water in the chamber at the levels measured here had little influence. Analysis of impurities in vacuum chamber We further analyzed the impurities in the chamber to better understand their origin. Figure 3 shows a histogram separating the materials detected in the previous section by molecular mass. Interestingly, not only low-molecular-mass materials but also high-molecular-mass materials were detected in the vacuum chamber. We can tentatively assign some of the signals to α-NPD ( 1 ), Tris-PCz ( 2 ), TPBi ( 3 ), T2T ( 4 ), TCTA ( 5 ), C 13 H 10 N 2 ( 6 ), bis(2-ethylhexyl) adipate (DOA, 7 ), C 18 H 34 O 4 ( 8 ), diisononyl phthalate ( 9 ), and C 22 H 42 O 5 ( 10 ) (see Supplementary Fig. S2 for the chemical structures) based on the exact molecular masses and compositions calculated by LC-MS analysis. Surprisingly, despite the chamber and evaporation sources being at room temperature, a variety of previously deposited materials ( 1 – 5 ) and their fragments or degradation products ( 6 likely originates from TPBi) were detected. Furthermore, compounds that most likely do not originate from the materials used for OLED active layers ( 7 – 10 ) were detected. These include DOA, which is used in many chambers as a vacuum grease, and diisononyl phthalate, which is a well-known plasticizer for imparting low-temperature flexibility to polyvinyl chloride or rubber. These materials could be released from the resins used for insulation, the grease coating vacuum components, and the O-rings. 
Using LC-MS analysis, we also found similar impurities floating in a different evaporation chamber with a different design operated under different conditions (see Supplementary Note 2 ). Figure 3 Molecular masses of impurities. Histogram showing the distribution of molecular masses of the detected materials. Full size image Figure 4(a) groups the detected materials by their amount after cleaning as a percentage of that before cleaning. Of the 58 materials whose quantity decreased after chamber cleaning, about two thirds decreased by more than 50%, and the majority of the materials that decreased by more than 75% had molecular masses greater than 500 ( Supplementary Fig. S3 ). The breakdown in Fig. 4(b) of the changes in ion counts based on the identified signals shows that the signals from the OLED materials, 1 – 5 , and their fragments or degradation products, 6 , were significantly reduced by chamber cleaning (yellow bars). This indicates that such materials can be removed by wiping the chamber and replacing the deposition shields with clean ones. In contrast, the signals proposed to originate from plasticizers, 7 – 10 , decreased much less (only 15–40%) and actually increased in one case after chamber cleaning (purple bars). Since plasticizers are incorporated in the chamber components, the source of these materials cannot be greatly reduced simply by cleaning. Figure 4 Analysis of LC-MS data after chamber cleaning. ( a ) Breakdown of the ion counts after cleaning as a percentage of those before cleaning for the detected materials. ( b ) Ratios of the ion counts before cleaning to those after cleaning for the identified impurities (see Supplementary Fig. S2 ). Full size image Thus, cleaning the chamber in this manner can actually lead to a short-term increase in the release of some impurities. 
One possibility is that debris released from crevices and corners in the chamber by mechanical scraping may not be completely removed by wiping with solvent and instead leave a thin film of residue that can be more easily discharged in the chamber under vacuum. Decreasing the amount of these impurities by rethinking the methods for chamber cleaning and the materials used in chamber components could help to improve lifetime. Device fabrication time and amount of impurities Having established that deposition time affects lifetime because of impurities in the chamber, we more deeply investigated the correlation between lifetime and device fabrication time by increasing or decreasing the waiting time equally before and after the emission layer (EML) deposition while keeping the deposition timing of all other layers constant ( Group II ). Other parameters such as the deposition rates (see Methods for the detailed rates) for the different layers were controlled to be nearly the same for each batch, and only the waiting time was varied. In addition, the organic materials and evaporation source surroundings were degassed by evaporating amounts of materials corresponding to at least the same thicknesses as used in the devices after any exposure of the organic chamber to atmosphere such as to load more organic material. This degassing procedure resulted in a vacuum pressure that was relatively constant during the deposition of the organic layers. To analyze the change in impurities, Si wafers for LC-MS analysis were loaded with the device substrates during the device fabrication. The Si wafers were covered with non-contact shadow masks during deposition but exposed to the vacuum the same as the device substrates at all other times. Since these experiments were performed in three series over a span of three months, the chamber conditions were different. 
Even so, the contact angles were within the range of 10–18° after chamber cleaning before each series of experiments, and the initial device characteristics were reproducible ( Group II in Table 1 ). Thus, regardless of the time, the contamination was assumed to be relatively constant. Figure 5(a) shows the dependence of the lifetime on the device fabrication time, which is the time from the start of the HAT-CN deposition to the end of the Al deposition, for a variable EML waiting time. The lifetime and the device fabrication time were correlated even when only the time before and after the EML deposition was changed. A clear correlation is also found between the device fabrication time and the total ion count of impurities on the Si wafers plotted in Fig. 5(b) , the x -intercept of which corresponds to the ~40 min of deposition time when the Si wafers were masked. Further, the contact angle of the ITO substrates stored in the chamber increased with storage time ( Supplementary Fig. S4 ). These results suggest that the lifetime is impacted by impurities adsorbed at the EML interfaces since the effect of water was found to be small in our experiments. Figure 5 Effect of fabrication time on lifetime and vacuum environment. ( a ) Dependence of the lifetime on the device fabrication time for the Group II devices when the waiting times before and after EML deposition were equally extended or shortened. Lifetime values of 12 of the batches were averaged for two to four devices each, while only a single lifetime was available for the other five batches. ( b ) Total ion count for impurities measured by LC-MS on Si wafers loaded with the devices during fabrication. ( c ) Impurity quantity measured by WTD-GC-MS for DOA (squares) and DOP (triangles). Full size image Reproducible correlations between deposited amount and device fabrication time were found for some, but not all, of the impurities. 
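Reading an x-intercept from a plot like Fig. 5(b) amounts to fitting a straight line to (fabrication time, total ion count) points and solving for the time at which the extrapolated count reaches zero. A sketch with invented data chosen so the intercept lands at 40 min, mirroring the ~40 min of masked deposition time noted above; these are not the paper's measured counts:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented points: fabrication time in min vs. total ion count (a.u.).
times = [100, 140, 180, 220]
counts = [60, 100, 140, 180]
a, b = fit_line(times, counts)
x_intercept = -b / a  # time at which the extrapolated ion count is zero
```

With these values the fitted line is y = x − 40, so the extrapolated ion count vanishes at 40 min; with real data the same two-line calculation gives the intercept directly from the fit coefficients.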
Of the identified impurities, materials 7 , 8 , and 10 were found to deposit in amounts that reproducibly correlate with the device fabrication time when analyzed for deposition series performed one month apart ( Supplementary Fig. S5 ). Among the unidentified impurities, 21 materials showed a correlation between their quantity and the lifetime, and 16 of these materials contained at least one oxygen atom, with some containing up to five oxygen atoms such as C 22 H 40 O 5 or C 22 H 42 O 5 . Such oxygen-rich materials are expected to originate from the chamber components rather than the OLED source materials. In addition, materials containing nitrogen, phosphorus (C 11 H 17 OP), and chlorine (C 10 H 20 ONCl) were detected. Halogens such as chlorine are particularly known to influence the lifetime 30 , 31 . Thus, the correlations found here suggest that these materials are affecting the lifetime. Considering the actual amount of some of the impurities from chamber components that can enter the device, the area density of DOA, a well-known plasticizer found in chamber components, adsorbed on the Si wafers was confirmed to be on the sub-ng/cm 2 order using WTD-GC-MS analysis ( Fig. 5(c) ). The amount of adsorbed bis(2-ethylhexyl)phthalate (DOP) was also on the sub-ng/cm 2 order ( Fig. 5(c) ) with a strong correlation with the device fabrication time, and rough calculations suggest that these amounts could realistically cover enough of the surface to have an effect on the substrates and devices (see Supplementary Note 3 ). Thus, even extremely small amounts of the impurities mixing with the active layers during device fabrication can have a great impact on the device lifetime. We also found that the dependence of lifetime on device fabrication time becomes weaker for lifetimes corresponding to a larger decrease in initial luminance ( Supplementary Note 4 ). 
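The kind of rough surface-coverage calculation mentioned above can be sketched as a back-of-the-envelope estimate: convert the measured area density into molecules per cm 2 and compare against a monolayer. The DOA molar mass (~370.6 g/mol) and the ~1 nm 2 molecular footprint used below are illustrative assumptions, not values quoted in the paper:

```python
AVOGADRO = 6.022e23  # molecules per mol

def monolayer_fraction(area_density_ng_cm2, molar_mass_g_mol,
                       footprint_nm2=1.0):
    """Estimate the fraction of a monolayer formed by an adsorbed
    impurity from its area density and an assumed molecular footprint."""
    molecules_per_cm2 = (area_density_ng_cm2 * 1e-9
                         / molar_mass_g_mol * AVOGADRO)
    footprint_cm2 = footprint_nm2 * 1e-14  # 1 nm^2 = 1e-14 cm^2
    return molecules_per_cm2 * footprint_cm2

# 0.5 ng/cm^2 of DOA (molar mass ~370.6 g/mol) works out to a bit
# under 1% of a monolayer under these assumptions.
frac = monolayer_fraction(0.5, 370.6)
```

Even a percent-level partial monolayer at an interface can be significant when the active layers themselves are only tens of nanometers thick, which is consistent with the sub-ng/cm 2 amounts measured here affecting lifetime.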
This suggests that extrinsic degradation caused by the impurities has a greater impact on the initial degradation than at later stages, which was also found for degradation from water in the chamber 36 . Water mixed in the organic film can be decomposed into H + and OH − ions by an electric field of a few MV/cm during device driving. Since these ions are very active, the organic materials in the device are decomposed, generating trap sites and quenchers 24 , 25 , 37 , 38 . The impurities mixed in the organic thin films found here might have similar effects. Further, inserting an organic layer of just 1 nm at the EML interface has been reported to greatly change the charge trap density at the interface 39 , suggesting that charge trapping could also be influenced by the extremely small amounts of impurities. Such reactions should increase with the amount of impurities and cease once the impurities have reacted, leading to a larger influence on the initial degradation. Discussion We found that a major cause of batch-to-batch variation in the lifetime of OLEDs is differences in the device fabrication times. Although the sensitivity of device lifetime to fabrication time may vary for different chamber designs, these results indicate that device fabrication time should be controlled when making devices for lifetime comparison studies. Furthermore, our results suggest that extremely small amounts of impurities at the EML interfaces can greatly influence lifetime. Surprisingly, previously deposited OLED materials were found to be floating in the vacuum chamber even when the chamber and evaporation sources were at room temperature. We showed that wiping the vacuum chamber with solvent to clean it was particularly effective for reducing the impurities from previously deposited OLED materials and improved the lifetime. 
Longer lifetimes might be achieved by further optimizing the cleaning method, and impurities should be considered during the design stage of vacuum equipment because of the effect of impurities from vacuum components on the device lifetime. For example, the use of resin materials that can release plasticizers into the vacuum chamber, thereby influencing lifetime, should be minimized. The processing of the stainless-steel chamber components during chamber fabrication should be reviewed because residual layers can form on the surface of the stainless steel after fabrication 40 . In particular, contaminants such as oil that diffused into the stainless-steel surfaces could be released when the chamber is under vacuum for extended periods. Electrolytic polishing can reduce surface roughness to achieve lower outgassing 32 , 41 , but phosphoric acid may remain after the polishing. Aluminum alloy can be a source of outgassing because a natural oxide layer with a thickness of 10 nm or more is formed on the surface of the alloy after machining. This porous oxide layer can readily adsorb impurities and gases that can be released in vacuum 42 . These results shed new light on some of the factors affecting the reproducibility of OLED lifetime, which is one of the great mysteries of OLEDs. Ultimately, because of the difficulty of eliminating all contamination-related issues, the most practical method for fabricating OLEDs is with the shortest process time possible. Methods Device fabrication and characterization Devices were fabricated by thermal evaporation onto ITO-coated glass substrates pre-patterned with polyimide bank structures to define a circular active device area of 0.04 cm 2 (Atsugi Micro Co., Ltd.). The substrates were cleaned in multiple solvent baths (see Supplementary Note 1 for more details) and stored in an oven at 80 °C until use. 
Before deposition of the active layers, the substrates were heated on a hot plate in an N 2 -filled glovebox at 250 °C for 30 minutes, treated with UV in a system connected to the glovebox, and then directly loaded into the load lock chamber of the deposition system. Active layer deposition began 15 min after loading into the deposition chamber. The materials used in this experiment include 1,4,5,8,9,11-hexaazatriphenylenehexacarbonitrile (HAT-CN) as hole-injection layer (HIL), 9,9’,9”-triphenyl-9H,9’H,9”H-3,3’:6’,3”-tercarbazole (Tris-PCz) as hole-transport layer (HTL), 3,3’-di(9H-carbazol-9-yl)biphenyl (mCBP) doped with (4s,6s)-2,4,5,6-tetra(9H-carbazol-9-yl)isophthalonitrile (4CzIPN) as emission layer (EML), 2,4,6-tris(biphenyl-3-yl)-1,3,5-triazine (T2T) as hole-blocking layer, 2,7-bis(2,2’-bipyridine-5-yl)triphenylene (Bpy-TP2) as electron-transport layer, and LiF as electron-injection layer. Cathodes were deposited by evaporation of Al. The organic materials for each group of experiments were from the same synthetic lots to eliminate the influence of starting material purity. The material purities of Tris-PCz, mCBP, 4CzIPN, and T2T were 99.2%, 99.9%, 99.5%, and 99.4%, respectively, for Group I and 99.9%, 99.9%, 99.6%, and 99.9%, respectively, for Group II as evaluated by LC-MS. The results of elemental analysis for HAT-CN (calculated H: 0%, C: 56.26%, and N: 43.74%) were H: 0.07%, C: 56.44%, and N: 43.52% (same lot used for both groups). For Bpy-TP2 (calculated H: 4.51%, C: 85.05%, and N: 10.44%), elemental analysis found H: 4.47%, C: 85.15%, and N: 10.45% for Group I and H: 4.36%, C: 85.15%, and N: 10.46% for Group II. The device structure was ITO/HAT-CN (10 nm)/Tris-PCz (30 nm)/15% 4CzIPN:mCBP (30 nm)/T2T (10 nm)/Bpy-TP2 (40 nm)/LiF (0.8 nm)/Al (100 nm). The target deposition rates of HAT-CN, Tris-PCz, T2T, Bpy-TP2, and LiF were 0.15 ± 0.05 Å/s, 0.5 ± 0.1 Å/s, 0.2 ± 0.05 Å/s, 0.5 ± 0.1 Å/s, and 0.04 ± 0.01 Å/s, respectively. 
The rate for Al deposition was 1 Å/s for the first 10 nm and then was increased up to 5 Å/s for the remainder. The EML was deposited once a stable deposition rate above 0.6 Å/s was obtained for mCBP to conserve material, so the deposition rate of mCBP varied between 0.6 and 1 Å/s with 4CzIPN deposited at the corresponding rate for 15 wt%. An external quantum efficiency measurement system (C9920-12, Hamamatsu Photonics K. K.) was used to measure the current-density-voltage characteristics and external quantum efficiency of the OLEDs. Device lifetime under constant-current driving conditions was measured at a temperature of 30 °C for an initial luminance of 1000 cd/m 2 or an initial current density of 10 mA/cm 2 using a lifetime measurement system (System Giken Co., Ltd.). The partial pressure of water in the vacuum chamber was measured using the quadrupole mass spectrometer (Q-mass) of a residual gas analyzer (Qulee HGM-302, ULVAC Inc.). Analysis of impurities in vacuum chamber To examine the impurities, Si wafers were analyzed by LC-MS and WTD-GC-MS after storage in the vacuum chamber. Since obtaining the standard curves necessary to evaluate the absolute amounts of impurities detected by LC-MS 43 would be impractical for the nearly 100 impurities that were found, we focus on the ion counts, which can indicate the relative change in impurity amount when compared for an individual material. The LC-MS analyses were performed using a liquid chromatography system (LC-30AD, Shimadzu) equipped with a UV/Visible detector (SPD-20A, Shimadzu) and a mass detector (Exactive™, Thermo Scientific). The masses of adsorbed impurities were measured using WTD-GC-MS. The WTD-GC-MS analyses were performed using a silicon wafer analyzer (SWA-256, GL Sciences) in combination with a gas chromatography system (7890A GC, Agilent Technologies) and a mass spectrometer (5975C inert XL MSD, Agilent Technologies). 
Contact angle measurements of ITO substrates stored in the chamber for 30 min before loading the device substrates were also performed to provide further evidence of the effect of impurities on surfaces in the chamber. The substrates were treated with a UV-O 3 system (UV253, Filgen, Inc.), loaded in the vacuum chamber, and stored there for 30 minutes at a pressure in the 10 −6 Pa range with the evaporation sources at room temperature. Contact angles were measured immediately after unloading the ITO substrates from the chamber using a DropMaster DMe-201 (Kyowa Interface Science Co., Ltd.) with water as the probe liquid. The contact angles of ITO substrates immediately after UV-O 3 treatment were less than 5°. Additional Information How to cite this article : Fujimoto, H. et al . Influence of vacuum chamber impurities on the lifetime of organic light-emitting diodes. Sci. Rep. 6 , 38482; doi: 10.1038/srep38482 (2016). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Reproducibility is a necessity for science but has often eluded researchers studying the lifetime of organic light-emitting diodes (OLEDs). Recent research from Japan sheds new light on why: impurities present in the vacuum chamber during fabrication but in amounts so small that they are easily overlooked. Organic light-emitting diodes use a stack of organic layers to convert electricity into light, and these organic layers are most commonly fabricated by heating source materials in vacuum to evaporate and deposit them onto a lower temperature substrate. While issues affecting the efficiency of OLEDs are already well understood, a complete picture of exactly how and why OLEDs degrade and lose brightness over time is still missing. Complicating matters is that devices fabricated with seemingly the same procedures and conditions but by different research groups often degrade at vastly different rates even when the initial performance is the same. Unable to attribute these reproducibility issues to known sources such as the amount of residual water in the chamber and the purity of the starting materials, a report published online in Scientific Reports on December 13, 2016, adds a new piece to the puzzle by focusing on the analysis of the environment in the vacuum chamber. "Although we often idealize vacuums as being clean environments, we detected many impurities floating in the vacuum even when the deposition chamber is at room temperature," says lead author Hiroshi Fujimoto, chief researcher at Fukuoka i3-Center for Organic Photonics and Electronics Research (i3-OPERA) and visiting associate professor of Kyushu University. Because of these impurities in the deposition chamber, the researchers found that the time until an OLED under operation dims by a given amount because of degradation, known as the lifetime, sharply increased for OLEDs that spent a shorter time in the deposition chamber during fabrication. 
This trend remained even after considering changes in residual water and source material purity, indicating the importance of controlling and minimizing the device fabrication time, a rarely discussed parameter. Research partners at Sumika Chemical Analysis Service Ltd. (SCAS) confirmed an increase of accumulated impurities with time by analyzing the materials that deposited on extremely clean silicon wafers that were stored in the deposition chamber when OLED materials were not being evaporated. Using a technique called liquid chromatography-mass spectrometry, the researchers found that many of the impurities could be traced to previously deposited materials and plasticizers from the vacuum chamber components. "Really small amounts of these impurities get incorporated into the fabricated devices and are causing large changes in the lifetime," says Professor Chihaya Adachi, director of Kyushu University's Center for Organic Photonics and Electronics Research (OPERA), which also took part in the study. In fact, the new results suggest that the impurities amount to less than even a single molecular layer. To improve lifetime reproducibility, a practice often adopted in industry is the use of dedicated deposition chambers for specific materials, but this can be difficult in academic labs, where often only a limited number of deposition systems are available for testing a wide variety of new materials. In these cases, deposition chamber design and cleaning in addition to control of the deposition time are especially important. "This is an excellent reminder of just how careful we need to be to do good, reproducible science," comments Professor Adachi.
10.1038/srep38482
Physics
Thermoelectric nanodevice based on Majorana fermions is proposed
L. S. Ricco et al, Tuning of heat and charge transport by Majorana fermions, Scientific Reports (2018). DOI: 10.1038/s41598-018-21180-9 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-21180-9
https://phys.org/news/2018-04-thermoelectric-nanodevice-based-majorana-fermions.html
Abstract We theoretically investigate the thermal and electrical conductances of a system consisting of a quantum dot (QD) connected both to a pair of Majorana fermions (MFs) residing at the edges of a Kitaev wire and to two metallic leads. We demonstrate that both quantities reveal pronounced resonances, whose positions can be controlled by tuning the asymmetry of the couplings between the QD and the pair of MFs. Similar behavior is revealed for the thermopower, the Wiedemann-Franz law and the dimensionless thermoelectric figure of merit. The considered geometry can thus be used as a tuner of heat and charge transport assisted by MFs. Introduction Majorana fermions (MFs) are particles that are equivalent to their antiparticles. The corresponding concept was first proposed in the domain of high-energy physics, but the existence of elementary excitations of this type was later predicted for certain condensed matter systems. Particularly, MFs emerge as quasiparticle excitations characterized by zero-energy modes 1 , 2 appearing at the edges of the 1D Kitaev wire 3 , 4 , 5 , 6 , 7 . The Kitaev model is used to describe the emerging phenomena of p -wave and spinless topological superconductivity. The Kitaev topological phase can be experimentally achieved in a geometry consisting of a semiconducting nanowire with spin-orbit interaction put in contact with an s -wave superconducting material and placed in an external magnetic field 8 , 9 . Other condensed matter systems were also proposed as candidates for the observation of MFs. They include ferromagnetic chains placed on top of superconductors with spin-orbit interaction 10 , 11 , the fractional quantum Hall state with filling factor ν = 5/2 12 , three-dimensional topological insulators 13 and superconducting vortices 14 , 15 , 16 . MFs residing at the opposite edges of a Kitaev wire are elements of a robust nonlocal qubit which appears to be immune to environmental decoherence.
This has attracted the interest of researchers working in the domain of quantum information and transport, as systems with MFs 17 , 18 , 19 can in principle be used as building blocks for the next generation of nanodevices 20 , 21 , including current switches 20 and quantum memory elements 21 . At the same time, similar systems were proposed as thermoelectric nanodevices 22 , 23 , 24 , 25 . In this work, following the proposals of thermoelectric detection of MF states 22 , 23 , 24 , 25 , we explore theoretically zero-bias thermal and electrical transport through one particular geometry consisting of an individual QD coupled both to a pair of MFs and to metallic leads, as shown in the Fig. 1(a) . The MFs reside at the edges of a topological U-shaped Kitaev wire, similar to the case of ref. 19 . The QD coupling to the MFs is considered to be asymmetric, while the coupling to the metallic leads is symmetric, and the MFs are supposed to overlap with each other. The results of our calculations clearly show that the thermoelectric conductance, thermopower, Wiedemann-Franz law 26 and dimensionless thermoelectric figure of merit (ZT) as functions of the QD electron energy demonstrate resonant behavior. Moreover, the position of the resonance can be tuned by changing the coupling amplitudes between the QD and the MFs, which allows the system to operate as a tuner of heat and charge assisted by MFs. Figure 1 ( a ) The sketch of the geometry we consider. A topological U-shaped Kitaev wire with a pair of MFs η A and η B is placed in contact with a QD, which is connected as well to two metallic reservoirs. The coupling of the QD to the MFs is asymmetric and is characterized by tunneling matrix elements λ A and λ B , while the coupling to the metallic leads is symmetric and is characterized by the tunneling matrix element V . ε 2 denotes the coupling between the two MF states.
( b ) Equivalent auxiliary setup (Kitaev dimer) resulting from the mapping of the original system onto the system with nonlocal fermion residing in QD 2 . t is tunneling matrix element between the QDs 1 and 2, Δ is the binding energy of the Cooper pair delocalized between them. Full size image Model For theoretical treatment of the setup depicted in the Fig. 1(a) , we use the Hamiltonian proposed by Liu and Baranger 27 : $$\begin{array}{rcl} {\mathcal H} & = & \sum _{\alpha k}{\varepsilon }_{k}{c}_{\alpha k}^{\dagger }{c}_{\alpha k}+{\varepsilon }_{1}{d}_{1}^{\dagger }{d}_{1}+V\,\sum _{\alpha k}({c}_{\alpha k}^{\dagger }{d}_{1}+{\rm{H}}.{\rm{c}}\mathrm{.)}\\ & & +{\lambda }_{A}({d}_{1}-{d}_{1}^{\dagger }){\eta }_{A}+{\lambda }_{B}({d}_{1}+{d}_{1}^{\dagger }){\eta }_{B}+i{\varepsilon }_{2}{\eta }_{A}{\eta }_{B},\end{array}$$ (1) where the electrons in the leads α = H , C (for hot and cold reservoirs, respectively) are described by the operators \({c}_{\alpha k}^{\dagger }\) ( c αk ) for the creation (annihilation) of an electron in a quantum state labeled by the wave number k and energy ε k . For the QD \({d}_{1}^{\dagger }\) ( d 1 ) creates (annihilates) an electron in the state with the energy ε 1 . The energies of both electrons in the leads and QD are counted from the chemical potential μ (we consider only the limit of small source-drain bias, thus assuming that chemical potential is the same everywhere). V stands for the hybridization between the QD and the leads. The asymmetric coupling between the QD and MFs at the edges of the topological U-shaped Kitaev wire is described by the complex tunneling amplitudes λ A and λ B . Introduction of an asymmetry in the couplings can account for the presence of the magnetic flux which can be introduced via Peierls phase shift 27 . ε 2 stands for the overlap between the MFs. 
Without loss of generality, we can put: \({\lambda }_{A}=\frac{(t+{\rm{\Delta }})}{\sqrt{2}}\) and \({\lambda }_{B}=i\frac{({\rm{\Delta }}-t)}{\sqrt{2}}\) , respectively for the left \(({\eta }_{A}={\eta }_{A}^{\dagger })\) and right \(({\eta }_{B}={\eta }_{B}^{\dagger })\) MFs, and introduce an auxiliary nonlocal fermion \({d}_{2}=\tfrac{1}{\sqrt{2}}({\eta }_{A}+i{\eta }_{B})\) 20 , 21 . The expressions \({\lambda }_{A}=|{\lambda }_{A}|{e}^{i{\varphi }_{A}}\) and \({\lambda }_{B}=|{\lambda }_{B}|{e}^{i{\varphi }_{B}}\) constitute a convenient gauge for our problem. We put ϕ A = 0 and \({\varphi }_{B}=(n+\tfrac{1}{2})\pi \) with integer n = 0, 1, 2, … corresponding to the total flux through the ring of Fig. 1 . This parameter is experimentally tunable by changing the external magnetic field. This fact gives certain advantages to our proposal with respect to previous works with asymmetric couplings between a single QD and a pair of MFs at the ends of a topological Kitaev wire 28 , 29 , 30 , 31 . According to ref. 32 the parameter ε 2 describing the overlap between the MFs depends on the magnetic field in an oscillatory manner; the amplitudes \(|{\lambda }_{A}|=\frac{t+{\rm{\Delta }}}{\sqrt{2}}\) and \(|{\lambda }_{B}|=\frac{|{\rm{\Delta }}-t|}{\sqrt{2}}\) demonstrate the same behavior (see Sec. III-A of ref. 30 ), and thus the external magnetic field affects not only the relative phase between λ A and λ B but their absolute values as well. To fulfill the condition | λ B | < | λ A | one should place the QD closer to the MF η A than to the MF η B .
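In this gauge, the dot-Majorana terms of Eq. (1) reduce exactly to normal tunneling t and pairing Δ between the modes d 1 and d 2 , which is the content of the mapped Hamiltonian Eq. (2). A small matrix check (a sketch, not from the paper; the two modes are represented via a two-site Jordan-Wigner construction and the parameter values are arbitrary) confirms this:

```python
import numpy as np

# Two spinless fermionic modes: d1 (the QD level) and d2 (the auxiliary
# nonlocal fermion built from eta_A and eta_B), as 4x4 Jordan-Wigner matrices.
I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation operator
sz = np.diag([1.0, -1.0])

d1 = np.kron(sm, I2)
d2 = np.kron(sz, sm)  # the sz string enforces {d1, d2} = 0
dag = lambda A: A.conj().T

def majorana_terms(t, Delta, eps2):
    """Dot-Majorana part of Eq. (1) in the gauge lam_A = (t + Delta)/sqrt(2),
    lam_B = i(Delta - t)/sqrt(2), with eta_A = (d2 + d2^+)/sqrt(2) and
    eta_B = (d2 - d2^+)/(i sqrt(2))."""
    lamA = (t + Delta) / np.sqrt(2)
    lamB = 1j * (Delta - t) / np.sqrt(2)
    etaA = (d2 + dag(d2)) / np.sqrt(2)
    etaB = (d2 - dag(d2)) / (1j * np.sqrt(2))
    return (lamA * (d1 - dag(d1)) @ etaA
            + lamB * (d1 + dag(d1)) @ etaB
            + 1j * eps2 * etaA @ etaB)

def dimer_terms(t, Delta, eps2):
    """Corresponding part of the mapped Hamiltonian, Eq. (2)."""
    hop = t * d1 @ dag(d2) + Delta * dag(d2) @ dag(d1)
    return eps2 * dag(d2) @ d2 + hop + dag(hop) - 0.5 * eps2 * np.eye(4)
```

For arbitrary real t , Δ and ε 2 the two matrices coincide, including the constant shift −ε 2 /2 generated by the term iε 2 η A η B .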
We map the original Hamiltonian into one where the electronic states d 1 and d 2 are connected via normal tunneling t and bound as a delocalized Cooper pair, with binding energy Δ: $$\begin{array}{rcl} {\mathcal H} & = & \sum _{\alpha k}{\varepsilon }_{k}{c}_{\alpha k}^{\dagger }{c}_{\alpha k}+V\,\sum _{\alpha k}({c}_{\alpha k}^{\dagger }{d}_{1}+{\rm{H}}.{\rm{c}}\mathrm{.)}+{\varepsilon }_{1}{d}_{1}^{\dagger }{d}_{1}\\ & & +{\varepsilon }_{2}{d}_{2}^{\dagger }{d}_{2}+(t{d}_{1}{d}_{2}^{\dagger }+{\rm{\Delta }}{d}_{2}^{\dagger }{d}_{1}^{\dagger }+{\rm{H}}.{\rm{c}}\mathrm{.)}-\frac{{\varepsilon }_{2}}{2}.\end{array}$$ (2) This expression represents a shortened version of the microscopic model for the Kitaev wire corresponding to the Kitaev dimer (see Fig. 1(b) ). As was shown in refs 33 and 34 , this model allows one to clearly distinguish between topologically trivial and Majorana-induced zero-bias peaks in the conductance. In what follows, we use the Landauer-Büttiker formula for the zero-bias thermoelectric quantities \({ {\mathcal L} }_{n}\) 22 , 23 : $${{\mathscr{L}}}_{n}=\frac{1}{h}\,\int \,d\varepsilon \,(-\frac{{\rm{\partial }}{f}_{F}}{{\rm{\partial }}\varepsilon })\,{\varepsilon }^{n}\,{\mathscr{T}},$$ (3) where h is Planck’s constant, \({\rm{\Gamma }}=2\pi {V}^{2}\,{\sum }_{k}\,\delta (\varepsilon -{\varepsilon }_{k})\) is the Anderson broadening 35 and f F stands for the Fermi-Dirac distribution.
The quantity $${\mathscr{T}}=-{\rm{\Gamma }}\text{Im}({\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}})$$ (4) is the electronic transmittance through the QD, with \({\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}}\) being the retarded Green’s function for the QD in the energy domain ε , obtained from the Fourier transform \({\tilde{{\mathscr{G}}}}_{{\mathscr{A}} {\mathcal B} }=\int \,d\tau {{\mathscr{G}}}_{{\mathscr{A}} {\mathcal B} }{e}^{\frac{i}{\hslash }(\varepsilon +i{0}^{+})\tau }\) , where $${{\mathscr{G}}}_{{\mathscr{A}} {\mathcal B} }=-\frac{i}{\hslash }\theta (\tau )\,{\rm{Tr}}\{\rho {[{\mathscr{A}}(\tau ),{ {\mathcal B} }^{\dagger }\mathrm{(0})]}_{+}\}$$ (5) corresponds to the Green’s function in the time domain τ , expressed in terms of the Heaviside function θ ( τ ) and the thermal density matrix ρ for Eq. ( 1 ). Experimentally measurable thermoelectric coefficients can be expressed via \({ {\mathcal L} }_{0}\) , \({ {\mathcal L} }_{1}\) and \({ {\mathcal L} }_{2}\) as: $$G={e}^{2}\,{ {\mathcal L} }_{0},$$ (6) $$K=\frac{1}{T}({ {\mathcal L} }_{2}-\frac{{ {\mathcal L} }_{1}^{2}}{{ {\mathcal L} }_{0}})$$ (7) and $$S=-(\frac{1}{eT})\frac{{ {\mathcal L} }_{1}}{{ {\mathcal L} }_{0}}$$ (8) for the electrical and thermal conductances and the thermopower, respectively ( T denotes the temperature of the system). We also investigate the violation of the Wiedemann-Franz law, given by $$W\,F=\frac{1}{T}(\frac{K}{G}),$$ (9) in units of the Lorenz number L 0 = ( π 2 /3) ( k B / e ) 2 and the corresponding behavior of the dimensionless figure of merit 22 , 23 $$ZT=\frac{{S}^{2}GT}{K}.$$ (10) For Eq. ( 4 ), we use the equation-of-motion (EOM) method 36 summarized as follows: $$(\varepsilon +i{0}^{+}){\tilde{{\mathscr{G}}}}_{{\mathscr{A}} {\mathcal B} }={[{\mathscr{A}},{ {\mathcal B} }^{\dagger }]}_{+}+{\tilde{{\mathscr{G}}}}_{[{\mathscr{A}}, {\mathcal H} ] {\mathcal B} },$$ (11) with \({\mathscr{A}}= {\mathcal B} ={d}_{1}\) .
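Given any transmittance, the zero-bias coefficients of Eqs (6)-(10) follow from the integrals \({ {\mathcal L} }_{n}\) of Eq. (3) by direct quadrature. Below is a minimal numerical sketch in units h = k B = e = 1 (so G is in units of e 2 / h ); the grid parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def onsager(transmission, temp, half_width=40.0, n=20001):
    """L_n of Eq. (3) by direct quadrature, in units h = k_B = e = 1.
    The window spans +/- half_width * temp around the chemical potential."""
    eps = np.linspace(-half_width * temp, half_width * temp, n)
    de = eps[1] - eps[0]
    mdf = 1.0 / (4.0 * temp * np.cosh(eps / (2.0 * temp)) ** 2)  # -df_F/d(eps)
    tr = transmission(eps)
    return [np.sum(mdf * eps ** k * tr) * de for k in range(3)]

def thermo_coefficients(transmission, temp):
    """G (in e^2/h), K, S and ZT from Eqs. (6)-(8) and (10)."""
    L0, L1, L2 = onsager(transmission, temp)
    G = L0                           # Eq. (6)
    K = (L2 - L1 ** 2 / L0) / temp   # Eq. (7)
    S = -L1 / (L0 * temp)            # Eq. (8)
    ZT = S ** 2 * G * temp / K       # Eq. (10)
    return G, K, S, ZT
```

For a smooth transmittance and k B T much smaller than its energy scale, the ratio K /( GT ) computed this way approaches the Lorenz number π 2 /3 in these units, consistent with the Sommerfeld limit quoted later in Eq. (21).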
As our Hamiltonian given by Eqs ( 1 ) and ( 2 ) is quadratic, the set of the EOM for the single particle Green’s functions can be closed without any truncation procedure 37 . We find the following four coupled linear algebraic equations: $$(\varepsilon -{\varepsilon }_{1}-{\rm{\Sigma }}){\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}}=1-t{\tilde{{\mathscr{G}}}}_{{d}_{2}{d}_{1}}-{\rm{\Delta }}{\tilde{{\mathscr{G}}}}_{{d}_{2}^{\dagger }{d}_{1}},$$ (12) where Σ = − i Γ is the self-energy of the coupling with the metallic leads $${\tilde{{\mathscr{G}}}}_{{d}_{2}{d}_{1}}=+\frac{{\rm{\Delta }}{\tilde{{\mathscr{G}}}}_{{d}_{1}^{\dagger }{d}_{1}}}{(\varepsilon -{\varepsilon }_{2}+i{0}^{+})}-\frac{t{\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}}}{(\varepsilon -{\varepsilon }_{2}+i{0}^{+})},$$ (13) $${\tilde{{\mathscr{G}}}}_{{d}_{2}^{\dagger }{d}_{1}}=-\frac{{\rm{\Delta }}{\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}}}{(\varepsilon +{\varepsilon }_{2}+i{0}^{+})}+\frac{t{\tilde{{\mathscr{G}}}}_{{d}_{1}^{\dagger }{d}_{1}}}{(\varepsilon +{\varepsilon }_{2}+i{0}^{+})}$$ (14) and $${\tilde{{\mathscr{G}}}}_{{d}_{1}^{\dagger }{d}_{1}}=-2t{\rm{\Delta }}\tilde{K}{\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}},$$ (15) with $$\tilde{K}=\frac{{K}_{{\rm{MFs}}}}{\varepsilon +{\varepsilon }_{1}-{\rm{\Sigma }}-{K}_{-}},$$ (16) $${K}_{{\rm{MFs}}}=\frac{(\varepsilon +i{0}^{+})}{[{\varepsilon }^{2}-{\varepsilon }_{2}^{2}+2i\varepsilon {0}^{+}-{{\mathrm{(0}}^{+})}^{2}]}$$ (17) and $${K}_{\pm }=\frac{(\varepsilon +i{0}^{+})\,({t}^{2}+{{\rm{\Delta }}}^{2})\pm {\varepsilon }_{2}({t}^{2}-{{\rm{\Delta }}}^{2})}{[{\varepsilon }^{2}-{\varepsilon }_{2}^{2}+2i\varepsilon {0}^{+}-{{\mathrm{(0}}^{+})}^{2}]}.$$ (18) This gives the Green’s function of the QD: $${\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}}=\frac{1}{\varepsilon -{\varepsilon }_{1}-{\rm{\Sigma }}-{{\rm{\Sigma }}}_{{\rm{MFs}}}},$$ (19) where the part of self-energy $${{\rm{\Sigma }}}_{{\rm{MFs}}}={K}_{+}+{\mathrm{(2}t{\rm{\Delta }})}^{2}\tilde{K}{K}_{{\rm{MFs}}}$$ (20) 
describes the hybridization between the MFs and the QD. Importantly, in the low-temperature regime, the substitution of Eq. ( 19 ) into Eq. ( 3 ) and its decomposition into a Sommerfeld series 23 , 26 allows us to obtain analytical expressions for the thermoelectric coefficients: $$\frac{G}{{G}_{0}}=\frac{K}{{G}_{0}{L}_{0}T}\approx {{\mathscr{T}}|}_{\varepsilon =0},$$ (21) $$S\approx e{L}_{0}T{\frac{1}{{\mathscr{T}}}\frac{d{\mathscr{T}}}{d\varepsilon }|}_{\varepsilon =0},$$ (22) where $${\mathscr{T}}=\tfrac{{\tilde{{\rm{\Gamma }}}}^{2}}{{[\varepsilon -{\varepsilon }_{1}-{K}_{+}-\tfrac{{\mathrm{(2}t{\rm{\Delta }}{K}_{{\rm{MFs}}})}^{2}(\varepsilon +{\varepsilon }_{1}-{K}_{-})}{{(\varepsilon +{\varepsilon }_{1}-{K}_{-})}^{2}+{{\rm{\Gamma }}}^{2}}]}^{2}+{\tilde{{\rm{\Gamma }}}}^{2}},$$ (23) with $$\tilde{{\rm{\Gamma }}}=[1+\frac{{\mathrm{(2}t{\rm{\Delta }}{K}_{{\rm{MFs}}})}^{2}}{{(\varepsilon +{\varepsilon }_{1}-{K}_{-})}^{2}+{{\rm{\Gamma }}}^{2}}]{\rm{\Gamma }}.$$ (24) Comparison of Eqs ( 21 ) and ( 22 ) allows us to conclude that the peak values of the electrical conductance are reached when S = 0, for which \(d\,{\mathscr{T}}/d\varepsilon =0\) , which happens when $${\varepsilon }_{1}=\frac{({t}^{2}-{{\rm{\Delta }}}^{2})}{{\varepsilon }_{2}}.$$ (25) As we will see below, fulfillment of this condition corresponds to the presence of an electron-hole symmetry in the system. Note that as ε 2 enters the denominator of Eq. ( 25 ), even slight differences between t and Δ will be enough to drastically change the position of the resonance if the hybridization between the MFs is small. Results and Discussion In our further calculations, we scale the energy in units of the Anderson broadening \({\rm{\Gamma }}=2\pi {V}^{2}\,{\sum }_{k}\,\delta (\varepsilon -{\varepsilon }_{k})\) 35 and take the temperature of the system k B T = 10 −4 Γ. The Anderson broadening Γ defines the coupling between the QD and the metallic leads, which is assumed to be symmetric for the sake of simplicity.
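The closed-form transmittance of Eqs (16)-(24) is straightforward to evaluate numerically. The sketch below (a simplified re-implementation, with illustrative parameter values not taken from the paper) reproduces the two features discussed next: the ε 1 -independent value \({\mathscr{T}}=1/2\) when only a single MF couples to the dot ( t = Δ, ε 2 = 0), and the unit-height resonance at ε 1 = ( t 2 − Δ 2 )/ε 2 of Eq. (25):

```python
def transmittance(eps, eps1, eps2, t, Delta, Gamma=1.0, eta=1e-12):
    """Transmittance of Eq. (4) assembled from Eqs. (16)-(20);
    all energies are measured in units of the Anderson broadening Gamma,
    and eta plays the role of the infinitesimal 0+."""
    z = eps + 1j * eta                                    # eps + i0+
    den = eps ** 2 - eps2 ** 2 + 2j * eps * eta - eta ** 2
    K_mfs = z / den                                       # Eq. (17)
    K_plus = (z * (t ** 2 + Delta ** 2) + eps2 * (t ** 2 - Delta ** 2)) / den
    K_minus = (z * (t ** 2 + Delta ** 2) - eps2 * (t ** 2 - Delta ** 2)) / den
    K_tilde = K_mfs / (eps + eps1 + 1j * Gamma - K_minus)  # Eq. (16), Sigma = -i*Gamma
    sigma_mfs = K_plus + (2 * t * Delta) ** 2 * K_tilde * K_mfs  # Eq. (20)
    G_dd = 1.0 / (eps - eps1 + 1j * Gamma - sigma_mfs)    # Eq. (19)
    return -Gamma * G_dd.imag
```

At ε = 0 and ε 2 ≠ 0 this expression collapses to a Lorentzian in ε 1 centered at ( t 2 − Δ 2 )/ε 2 , which is the resonance condition of Eq. (25).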
We start our analysis from the case when only a single MF ( η A ) is coupled to the QD. In terms of the amplitudes t , Δ this corresponds to t = Δ. To be specific, we fix t = Δ = 4Γ. Looking at Eq. ( 2 ), we see that the terms \({d}_{1}{d}_{2}^{\dagger }+{\rm{H}}.{\rm{c}}.\) and \({d}_{2}^{\dagger }{d}_{1}^{\dagger }+{\rm{H}}.{\rm{c}}.\) enter into the Hamiltonian with equal weights, and thus we are in the superconducting (SC)-metallic boundary phase. Figure 2(a) shows the electrical conductance \(G={e}^{2}\,{ {\mathcal L} }_{0}\) scaled in units of the conductance quantum G 0 = e 2 / h as a function of the QD energy level ε 1 , for several coupling amplitudes ε 2 between the MFs. Note that, if the MFs are completely isolated from each other ( ε 2 = 0), the conductance reveals a plateau with G = G 0 /2 whatever the value of ε 1 (black line), and a similar trend is observed in the thermal conductance shown in the Fig. 2(b) . The effect is due to the leaking of the Majorana fermion state into the QD 38 . The MF zero-mode becomes pinned at the Fermi level of the metallic leads, but within the QD electronic structure. As the coupling between the wire and the QD increases, the MF state of the Kitaev wire leaks into the QD. As a result, a peak at the Fermi energy emerges in the QD density of states (DOS), while the corresponding peak in the DOS at the edge of the wire becomes gradually suppressed. Consequently, the QD effectively becomes the new edge of the Kitaev wire. This scenario was reported experimentally in ref. 9 . Figure 2 Electrical and thermal conductances of the system corresponding to the SC-metallic boundary phase, t = Δ = 4Γ: ( a ) Electrical conductance as a function of the QD energy level ε 1 for several values ε 2 of the couplings between the MFs. ( b ) Corresponding thermal conductance. For both cases the resonance at the Fermi energy ε 1 = 0 occurs if ε 2 ≠ 0.
For ε 2 = 0 the conductance plateau is observed (see main text for the corresponding discussion). The inset shows the equivalent circuit with an auxiliary fermion d 2 constructed from MFs η A and η B (red half-circles). Full size image To get a resonant response of the thermoelectric conductances one should consider the case ε 2 ≠ 0, corresponding to the splitting of the MF zero-bias peak. The resonant behavior of G and K can be understood as arising from the presence of an auxiliary fermion d 2 in the Hamiltonian [Eq. ( 2 )], whose energy ε 2 is now detuned from the Fermi level (see inset of Fig. 2(b) ). In this case, the regular fermion state, instead of the corresponding half-fermion provided by the MF η A , gives the main contribution to the charge and heat current. In this scenario, filtering of the electricity and heat emerges: the maximal transmission occurs at ε 1 = 0. Our Fig. 2(a,b) recovers the findings of Fig. 5(a) in ref. 23 . Our work, however, has an important novel dimension: we demonstrate that even small deviations of the system from the SC-metallic boundary phase, which can be achieved by control of the asymmetry of the couplings, allow the realization of efficient tuners of electricity and heat. This effect is shown in the Fig. 3(a,b) . As one can see, even a small detuning of the coefficient t from the value t = Δ leads to a substantial blueshift (for the case t > Δ) or redshift (for the case t < Δ) of the conductance resonances. Such sensitivity is a direct consequence of Eq. ( 25 ) defining the position of the resonances. Figure 3 Electrical and thermal conductances as functions of the QD energy level outside the SC-metallic boundary phase. Slight deviations from the condition t = Δ result in a shift of the resonance peak for the electrical (panel (a)) and thermal (panel (b)) conductances. The corresponding resonances are blueshifted for t > Δ and redshifted for t < Δ as compared to the case of the SC-metallic boundary phase.
Insets show the equivalent circuit with auxiliary fermion d 2 constructed from MFs η A and η B (red half-circles). Full size image To shed more light on the effect of the tuning of charge and heat transport in the system, we plot the quantity \({\mathscr{T}}=-{\rm{\Gamma }}\text{Im}({\tilde{{\mathscr{G}}}}_{{d}_{1}{d}_{1}})\) appearing in the Eqs ( 3 ) and ( 4 ) as a function of ε 1 and ε , see Fig. 4(a–d) . Figure 4(a) corresponds to the case t = Δ, ε 2 = 0. One can recognize a “cat eye”-shaped central structure, corresponding to the vertical line at ε = 0. Everywhere along this line \({\mathscr{T}}={\rm{constant}}\) , which according to Eq. ( 21 ) means that changes in ε 1 do not affect the conductance. This corresponds well to the conductance plateau in the Fig. 2 . If ε 2 is finite, the “cat eye” structure transforms into a double-fork profile as shown in the Fig. 4(b) . Note that in this case, movement along the vertical line corresponding to ε = 0 leads to a change of the function \({\mathscr{T}}\) , which according to Eq. ( 21 ) leads to a modulation of the conductance. The maximal value is achieved at the point ε 1 = 0, which corresponds well to the resonant character of the curves shown in the Fig. 2 . The introduction of a finite value of ε 2 and of the asymmetry of the coupling between the QD and the MFs ( t ≠ Δ) leads to shifts of the double-fork structure either upwards on the ε 1 scale for t > Δ (panel (c), blueshift of the resonant curves in the Fig. 3 ) or downwards on the ε 1 scale for t < Δ (panel (d), redshift of the resonant curves in the Fig. 3 ). It should be noted that similar results for the transmittance were reported both theoretically (ref. 30 ) and experimentally (ref. 31 ) for the geometry of a linear Kitaev wire with a QD attached to one of its ends placed between source and drain metallic leads.
In contrast to the case considered in our work, those authors account for the spin degree of freedom and, particularly in ref. 31 , evaluate the dependence of the conductance on the energy level of the QD and on magnetic field, while we further analyze the dependencies on ε and on the asymmetry of the couplings, relevant for the understanding of the tuner regime. Despite the distinct geometry and the spinless regime, our results and those reported in refs 30 , 31 are in good correspondence with each other, thus validating the mechanism pointed out in refs 30 , 32 of field-assisted overlapping between the MFs and tunnel-couplings with the QD. Figure 4 Transmittance \({\mathscr{T}}\) spanned by the axes of ε 1 and ε . Panels (a,b) show the regime corresponding to the SC-metallic boundary phase with t = Δ, for ε 2 = 0 and finite ε 2 , respectively. Panel (a) reveals the characteristic “cat eye”-shaped central structure at the Fermi level responsible for the onset of the conductance plateau. Panel (b) exhibits a double-fork structure responsible for the resonant character of the conductance for ε 2 ≠ 0. Introduction of the asymmetry of the QD to MFs coupling leads to a vertical shift of the double-fork feature resulting in the blueshift (panel (c)) or redshift (panel (d)) of the resonant conductance curve. The bright arcs visualized in all panels represent poles of the Green’s function of the QD. Full size image The possibility to tune the electric and thermal conductances opens a way for tuning the thermopower ( S ), the Wiedemann-Franz law ( WF ) and the dimensionless figure of merit ( ZT ), as shown in the Fig. 5(a–c) . In the Fig. 5(a) the dependence of the thermopower on ε 1 is demonstrated. If t > Δ, at ε 1 = 0, S > 0 and the setup behaves as a tuner of holes. On the contrary, for t < Δ, at ε 1 = 0, S < 0 and the setup behaves as a tuner of electrons. Figure 5(b,c) illustrate the violation of the WF law and the behavior of the dimensionless thermoelectric ZT , respectively.
Note that ZT does not reach pronounced amplitudes, i.e., ZT < 1 26 , even for finite values of G and K , as the dependence on S 2 prevails once Eq. ( 21 ) is substituted into Eq. ( 10 ). Figure 5 ( a ) Thermopower ( S ), ( b ) Wiedemann-Franz law ( WF ) and ( c ) the figure of merit ( ZT ) as functions of the QD energy level ε 1 for several values ε 2 of the couplings between the MFs. Deviation from the condition t = Δ leads to a shift of the curves. Full size image Conclusions In summary, we theoretically considered the thermoelectric conductances of a device consisting of an individual QD coupled both to a pair of MFs and to metallic leads. The charge and heat conductances of this system as functions of the electron energy in the QD reveal a resonant character. The position of the resonance can be tuned by changing the degree of asymmetry of the couplings between the QD and the MFs, which allows us to propose the scheme of a tuner of heat and charge. The thermopower, Wiedemann-Franz law and figure of merit are found to be sensitive to the asymmetry of the couplings as well. Our findings will pave the way for the development of thermoelectric nanodevices based on MFs.
In March 1938, the young Italian physicist Ettore Majorana disappeared mysteriously, leaving his country's scientific community shaken. The episode remains unexplained, despite Leonardo Sciascia's attempt to unravel the enigma in his book The Disappearance of Majorana (1975). Majorana, whom Enrico Fermi called a genius of Isaac Newton's stature, vanished a year after making his main contribution to science. In 1937, when he was only 30, Majorana hypothesized a particle that is its own anti-particle and suggested that it might be the neutrino, whose existence had recently been predicted by Fermi and Wolfgang Pauli. Eight decades later, Majorana fermions, or simply majoranas, are among the objects most studied by physicists. In addition to neutrinos—whose nature, whether or not they are majoranas, is one of the investigative goals of the mega-experiment Dune—another class not of fundamental particles but of quasi-particles or apparent particles has been investigated in the field of condensed matter. These Majorana quasi-particles can emerge as excitations in topological superconductors. A new study by Ph.D. student Luciano Henrique Siliano Ricco, his supervisor Antonio Carlos Ferreira Seridonio, and others was conducted on the Ilha Solteira campus of São Paulo State University (UNESP) in Brazil and published in Scientific Reports. "We propose a theoretical device that acts as a thermoelectric tuner—a tuner of heat and charge—assisted by Majorana fermions," Seridonio said. The device consists of a quantum dot (QD), represented in figure A by the symbol ε1. QDs are often called "artificial atoms." In this case, the QD is located between two metallic leads at different temperatures. The temperature difference allows thermal energy to flow across the QD.
A quasi-one-dimensional superconducting wire—called a Kitaev wire after Russian physicist Alexei Kitaev, currently a professor at the California Institute of Technology (Caltech) in the U.S.—is connected to the QD. In this study, the Kitaev wire was ring- or U-shaped and had two majoranas (η1 and η2) at its edges. The majoranas emerge as excitations characterized by zero-energy modes. "When the QD is coupled to only one side of the wire, the system behaves resonantly with regard to electrical and thermal conductance. In other words, it behaves like a thermoelectric filter," Seridonio said. "I should stress that this behavior as a filter for thermal and electrical energy occurs when the two majoranas 'see' each other via the wire, but only one of them 'sees' the QD in the connection." Another possibility investigated by the researchers involved making the QD "see" the two majoranas at the same time by connecting it to both ends of the Kitaev wire. "By making the QD 'see' more of η1 or η2, i.e., by varying the system's asymmetry, we can use the artificial atom as a tuner, where the thermal or electrical energy that flows through it is redshifted or blueshifted," Seridonio said (see figure B). This theoretical paper, he added, is expected to contribute to the development of thermoelectric devices based on Majorana fermions.
10.1038/s41598-018-21180-9
Earth
Climate change intensifies night-time storms over Lake Victoria
Wim Thiery et al. Hazardous thunderstorm intensification over Lake Victoria, Nature Communications (2016). DOI: 10.1038/ncomms12786 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms12786
https://phys.org/news/2016-09-climate-night-time-storms-lake-victoria.html
Abstract Weather extremes have harmful impacts on communities around Lake Victoria, where thousands of fishermen die every year because of intense night-time thunderstorms. Yet how these thunderstorms will evolve in a future warmer climate is still unknown. Here we show that Lake Victoria is projected to be a hotspot of future extreme precipitation intensification by using new satellite-based observations, a high-resolution climate projection for the African Great Lakes and coarser-scale ensemble projections. Land precipitation on the previous day exerts a control on night-time occurrence of extremes on the lake by enhancing atmospheric convergence (74%) and moisture availability (26%). The future increase in extremes over Lake Victoria is about twice as large relative to surrounding land under a high-emission scenario, as only over-lake moisture advection is high enough to sustain Clausius–Clapeyron scaling. Our results highlight a major hazard associated with climate change over East Africa and underline the need for high-resolution projections to assess local climate change. Introduction Severe thunderstorms and associated high waves represent a constant threat to the 200,000 fishermen operating on Lake Victoria 1 , 2 . The International Red Cross assumes that 3,000–5,000 fishermen die every year on the lake 2 , by which it substantially contributes to the global death toll from natural disasters. Each perished fisherman leaves on average eight relatives without an income, underlining the vulnerability of East African fishing communities to these natural hazards 1 , 2 , 3 . Despite the long-known bad reputation of Lake Victoria 4 , the understanding of the drivers of these extreme thunderstorms remains limited 5 . Moreover, anthropogenic climate change may significantly affect these hazardous weather systems. 
In many parts of the world, future climate simulations project an intensification of precipitation extremes and associated weather conditions 6 , 7 , 8 , 9 , 10 , 11 , but the potential future changes in extremes over Lake Victoria are still unknown. In this study, we use a unique combination of state-of-the-art satellite remote sensing, a high-resolution regional climate model and coarser-scale ensemble simulations to project changes in extreme precipitation over Lake Victoria. We project a strong and robust increase in precipitation extremes over Lake Victoria and show that this increase is about double over the lake compared with surrounding land. Although the occurrence of extreme precipitation in the present-day climate is mostly controlled by atmospheric dynamics, its future intensification can be entirely attributed to the advection of more humid air over the lake. Results Satellite data analysis Satellite observations enable the recognition of severe weather by detecting overshooting tops (OTs), that is, dome-like protrusions atop a cumulonimbus anvil induced by intense updrafts 12 , 13 . OTs mark the presence of vigorous thunderstorms and are tightly linked to severe weather reports 12 , 13 , 14 , 15 . By applying an OT detection algorithm to Meteosat Second Generation observations (Methods), we establish a new severe thunderstorm climatology for East Africa. The results reveal a marked imprint of Lake Victoria on the diurnal thunderstorm cycle and confirm its status as one of the most convectively active regions on Earth 5 , 13 , 16 , 17 , 18 ( Fig. 1 ). From 2005 to 2013, 73% of all 1,400,000 OT pixels detected over the lake occurred at night (22:00 to 9:00 UTC ), in contrast to the surrounding land where afternoon storms dominate (72% of all 4,200,000 OT pixels during 9:00 to 16:00 UTC ). 
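The day/night split quoted above (73% of over-lake OT pixels at night versus 72% of over-land pixels in the afternoon) is simple bookkeeping over detection timestamps. A minimal sketch in Python, using the paper's time windows but made-up detection hours:

```python
# Classify overshooting-top (OT) detections into the paper's night
# (22:00-9:00 UTC) and afternoon (9:00-16:00 UTC) windows and report
# the fraction falling at night. The detection hours are hypothetical.

def night_fraction(hours_utc):
    """Fraction of detections inside the 22:00-9:00 UTC night window."""
    night = sum(1 for h in hours_utc if h >= 22 or h < 9)
    return night / len(hours_utc)

# Four made-up detections: three at night, one in the afternoon.
sample_hours = [23, 2, 5, 13]
print(night_fraction(sample_hours))  # 0.75
```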
Local evaporation and mesoscale circulation have been identified as key drivers of the present-day diurnal cycle of precipitation over Lake Victoria 5 , 17 , 18 , 19 , 20 , 21 , but so far it is not known how mean and extreme precipitation over this lake respond to a temperature increase induced by anthropogenic greenhouse gas emissions. To address this question, we performed a high resolution ( ∼ 7 km grid spacing), coupled lake–land–atmosphere climate projection for the African Great Lakes region with the regional climate model COSMO-CLM 2 , and analysed coarser-scale ensemble projections from the Coordinated Regional climate Downscaling Experiment (CORDEX) for the end of the century under a high-emission scenario (RCP8.5; Methods, Supplementary Fig. 1 and Supplementary Table 1 ). Figure 1: Lake imprint on severe thunderstorm occurrence. ( a , b ) Satellite-based overshooting tops (OT) detections during 2005–2013 over the Lake Victoria region (red square in the inset panel), from 9:00 to 15:00 UTC and from 00:00 to 9:00 UTC , respectively, as derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI; Methods). Full size image Extreme precipitation projections The projections show a contrasting change of mean and extreme precipitation over Lake Victoria ( Fig. 2 ; Supplementary Fig. 2 ), with mean precipitation decreasing while the intensity of extreme precipitation increases. Moreover, by the end of the century the increase in extremes (precipitation above the 99th percentile) is 2.4±0.1 times higher over the lake than over its surrounding land in the high-resolution projection (1.8±1.0 times in the CORDEX ensemble). Today convection initiates in the eastern part of the lake and intensifies while being advected westwards along the trade winds 4 , 5 . In the future, storms are projected to release extreme precipitation more in the eastern part of the lake, leading to an eastward shift of intense precipitation ( Fig. 2a,b ). 
Figure 2: Projected end-of-century changes in extreme precipitation over Lake Victoria. ( a ) Night-time 99th percentile precipitation ( P 99%,night , 00:00 to 9:00 UTC ) and ( b ) its projected future change from the high-resolution COSMO-CLM 2 model. ( c , d ) 24 h Lake (blue bars) and surrounding land (red bars) binned precipitation change (P bin) from COSMO-CLM 2 and the ensemble mean of nine CORDEX-Africa members, respectively. The red rectangle in Supplementary Fig. 1 includes the land pixels considered as surrounding land. All changes are between time periods 1981–2010 and 2071–2100 under RCP8.5. Full size image In contrast to the increase in extremes, the annual mean precipitation projected by the high-resolution model declines over the lake by 6% ( Supplementary Fig. 2 and Supplementary Note 1 ) 22 . This is also evident from changes in daily binned precipitation over the lake, which show an overall future drying except for precipitation above the 90th percentile ( Fig. 2c,d ). If we correct for this average drying (Methods), the effect of Lake Victoria on future extremes is even more pronounced, with the increase being 3.2±0.3 times larger over Lake Victoria compared with surrounding land in COSMO-CLM 2 and even 4.2±1.6 times larger in the CORDEX ensemble. In other words, very intense storms are projected to become more frequent in the future over Lake Victoria. For example, by the end of the century a 1-in-15-year precipitation event over Lake Victoria becomes a 1-in-1.5-year event in the high-resolution projection (1-in-0.8-year event in the CORDEX ensemble). In both cases this exceeds the projected increase in storm frequency over land ( Supplementary Fig. 3 ). Assessing uncertainty Based on a single high-resolution projection ( ∼ 7 km), we cannot assess modelling uncertainties or compare emission scenarios. 
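The return-period statement above ("a 1-in-15-year event becomes a 1-in-1.5-year event") follows from treating the return period as the inverse of the annual exceedance rate. A small illustration with invented exceedance counts (the function name and numbers are not from the paper):

```python
# Return period as the inverse annual exceedance rate: if a fixed
# precipitation threshold is exceeded k times in n years, its empirical
# return period is n / k years. Counts below are invented.

def return_period_years(n_exceedances, n_years):
    """Empirical return period (years) of a threshold exceedance."""
    if n_exceedances == 0:
        raise ValueError("threshold never exceeded in the record")
    return n_years / n_exceedances

print(return_period_years(2, 30))   # present-day climate: 15.0 years
print(return_period_years(20, 30))  # future climate: 1.5 years
```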
Since simulations of this type are computationally very expensive, this is a recurrent limitation of studies investigating climate change at high resolution 23 , 24 , 25 , 26 . By providing ensemble projections at coarser resolution ( ∼ 50 km), the CORDEX initiative enables uncertainty assessments within the constraints of the quality of both the downscaling tool and the lateral boundary conditions 27 . Although some differences occur between the high- and coarse-resolution projections, it is clear that the lake effect on the future precipitation distribution is robust ( Fig. 2c,d ; Supplementary Fig. 3 ). This is further confirmed by the fact that the projected response in the coarse-resolution ensemble ( Fig. 2d ) is to a large extent independent of the driving global model. In particular, every CORDEX simulation projects a reduction in over-lake precipitation for all bins below the 90th percentile and an amplification of the increase in the highest bins, thereby corroborating the high-resolution model ( Fig. 2 ). Comparison of the coarse-resolution RCP8.5 and RCP4.5 ensembles moreover demonstrates that the choice of the emission scenario does not influence Lake Victoria’s amplifying role on extreme precipitation changes. At the same time COSMO-CLM 2 clearly outperforms all CORDEX models as well as a state-of-the-art reanalysis in terms of precipitation representation, underlining the benefits of enhanced resolution and use of a lake model for climate simulations over the region ( Supplementary Figs 4–6 and Supplementary Note 2 ) 5 , 28 , 29 , 30 , 31 . Decreasing the horizontal grid spacing to convection-permitting scales (below ∼ 4 km) would most likely improve the skill of our climate simulations even more, since the convection parameterization employed in the high-resolution model still entails a number of limitations 25 , 26 , 32 , 33 , 34 , 35 .
Overall these findings highlight the need for running coordinated high-resolution projections to quantify local climate change in regions with a particular dynamical regime 23 . Driving mechanisms To better understand the processes controlling present-day extreme precipitation occurrence and its future change, we analysed observations and a multi-year reanalysis downscaling with COSMO-CLM 2 (Methods). Satellite observations of OTs and precipitation reveal that increased night-time thunderstorm activity and rainfall amounts over Lake Victoria are preceded by intense storms and rainfall over land the prior afternoon ( Fig. 3a,b ). Large-scale moisture availability contributes to this positive relationship, but alone it cannot explain the observed correlation ( Supplementary Note 3 ). Land storms therefore act as a positive feedback for the intensity of night-time lake storms. These severe land storms could impact storm intensity over the lake in two ways. First, they could enhance moisture convergence by increasing the near-surface-specific humidity (thermodynamic control; Fig. 3c , Supplementary Fig. 7 ). Second, they could modify the lake/land breeze system 5 by cooling the land surface (dynamic control). In that case the cold pools of the afternoon storms act to reduce gradients in near-surface air temperature between lake and land ( Fig. 3d ), thereby weakening the lake breeze and possibly also moisture transport away from the lake. If the cold anomaly persists into the night, this could strengthen the land breeze and by that possibly stimulate moisture convergence and column instability 36 . Interestingly, lake evaporation does not control the occurrence of extremes over Lake Victoria, despite its key role in the regional hydrological cycle 5 , 17 , 19 . Figure 3: Afternoon controls on night-time extreme precipitation. 
( a ) Afternoon SEVIRI overshooting tops (OT) pixel detections over land surrounding Lake Victoria versus night-time OT pixels over the lake (2005–2013; blue). ( b ) Afternoon TRMM 3B42 precipitation (P) around Lake Victoria versus precipitation over the lake (1998–2013; red) and corresponding modelled values from a 10-year reanalysis downscaling with COSMO-CLM 2 (1999–2008; brown) (Methods). ( c , d ) Same as b , but for the afternoon land 2-m specific humidity ( Q V,2M ) and lake–land temperature contrast , respectively, as derived from the reanalysis downscaling. Each variable on the y axis was binned according to the variable on the x axis using a bin width of 1%. Full lines indicate the bin median and shaded uncertainty bands the interquartile range. Note the logarithmic x axis. Full size image Given the importance of moisture convergence for triggering precipitation extremes over Lake Victoria, we investigate whether dynamic or thermodynamic controls on moisture convergence dominate and how this might change towards the future (Methods). In the present-day climate, moisture convergence more than triples during 24 h periods (9:00 to 9:00 UTC ) with extreme night-time precipitation compared with average conditions (81 × 10 10 versus 26 × 10 10 kg d −1 on average). A large fraction (74%) of this increase can be attributed to dynamical effects, while only 26% is due to the enhanced moisture content of converging air masses ( Supplementary Fig. 7 and Supplementary Table 2 ). We thus conclude that mesoscale circulation is crucial for triggering extremes in the present-day climate (see also Supplementary Notes 3 and 4 ). For the end-of-the-century projection, in contrast, we find that the intensification of precipitation extremes is entirely due to the enhanced moisture content of converging air masses. Under RCP8.5, the model projects a 27% increase in moisture convergence during extremes. 
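The binning used for Fig. 3, where each y variable is summarised per percentile bin of x by its median and interquartile range, can be sketched as follows. The data are synthetic, and `statistics.quantiles` with its default exclusive method stands in for whatever quantile estimator the authors actually used:

```python
# Group y-values into equal-count bins of x and summarise each bin by
# the median and interquartile range (IQR), as in the paper's Fig. 3.
import statistics

def bin_summary(x, y, n_bins):
    """(median, IQR) of y within equal-count bins of increasing x."""
    pairs = sorted(zip(x, y))              # sort by the binning variable
    size = len(pairs) // n_bins
    out = []
    for i in range(n_bins):
        ys = [v for _, v in pairs[i * size:(i + 1) * size]]
        q1, q2, q3 = statistics.quantiles(ys, n=4)
        out.append((q2, q3 - q1))
    return out

x = list(range(20))
y = [2 * v for v in x]                     # synthetic linear relationship
print(bin_summary(x, y, 2))                # [(9.0, 11.0), (29.0, 11.0)]
```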
This rise is entirely attributed to thermodynamic effects as dynamical changes reduce moisture convergence by 3% ( Supplementary Table 2 ). The increase in moisture convergence is consistent with the modelled sensitivity of strong precipitation extremes (99.9th percentile) to temperature changes: only over the lakes the theoretically expected Clausius–Clapeyron scaling is attained, whereas over the surrounding land the scaling is constrained by moisture availability ( Supplementary Fig. 8 and Supplementary Note 5 ). Finally, we find no role for lake evaporation changes, as its increase during extremes is 50 times smaller than the rise in moisture convergence. The picture is different for the decrease in annual mean precipitation, where mesoscale dynamical changes dominate. By the end of the century, night-time near-surface air temperatures will increase more rapidly over land compared with the lake, thus weakening the lake–land temperature contrast responsible for the land breeze, night-time moisture advection and updrafts. In addition, during daytime the warmer land will intensify the lake breeze and associated moisture divergence from the lake. In summary, we have shown that new satellite-based detections of severe storms reveal a clear diurnal variation in storm activity over Lake Victoria and that nights with more intense storm activity are preceded by afternoons with more intense storms over the neighbouring land. Using a dedicated, high-resolution climate model set-up for equatorial East Africa, we found that these intense land storms favour moisture convergence by enhancing moisture availability but especially by weakening the afternoon lake breeze and strengthening the night-time land breeze ( Fig. 4 ). We project a substantial future decline in annual mean precipitation over Lake Victoria, which may be explained by changing mesoscale dynamics associated with a faster warming land ( Fig. 4 ). 
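The Clausius–Clapeyron scaling invoked here corresponds to roughly a 7% increase in moisture-limited extreme intensity per kelvin of warming. A back-of-envelope sketch, where the rate and the inputs are illustrative rather than values from the paper:

```python
# Clausius-Clapeyron back-of-envelope: moisture-limited extreme
# precipitation scales by ~7% per kelvin of warming.

def cc_scaled_intensity(p_now, warming_k, rate_per_k=0.07):
    """Intensity after `warming_k` kelvin of warming at the CC rate."""
    return p_now * (1.0 + rate_per_k) ** warming_k

# Hypothetical 50 mm/day extreme under 4 K of end-of-century warming:
print(round(cc_scaled_intensity(50.0, 4.0), 1))  # 65.5
```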
However, despite this average decrease, we project a strong and robust increase in precipitation extremes over Lake Victoria and show that this increase is about double over the lake compared with surrounding land. The rise in precipitation extremes is entirely due to enhanced future moisture availability ( Fig. 4 ), and only over the lake the advection of more humid air supplies enough moisture to sustain Clausius–Clapeyron scaling. The increase in extremes is therefore not physically incongruous with the decrease in mean precipitation caused by mesoscale dynamical changes. Figure 4: Processes controlling night-time precipitation extremes and climate change over Lake Victoria. ( a ) In the present-day climate, local evaporation and net moisture flux convergence (MFC; Methods) along the land breeze both contribute to night-time precipitation generation over Lake Victoria. ( b ) Climate change simulations project a decrease in average precipitation, despite enhanced lake evaporation. Future mesoscale circulation changes impeding thunderstorm development are responsible for this decrease. ( c ) Present-day precipitation extremes are associated with increased MFC, of which 74% is explained by enhanced atmospheric convergence and the remaining 26% by enhanced moisture content of advected air masses ( Fig. 3 ; Supplementary Fig. 7 ). ( d ) The future intensification of precipitation extremes is amplified over Lake Victoria compared with surrounding land and entirely due to higher moisture content of converging air masses. Full size image Discussion Our results emphasize a major hazard associated with climate change over East Africa with potential severe human impacts. Lake Victoria directly sustains the livelihood of 30 million people living at its coasts and its fishing industry is a leading natural resource for East African communities 1 , 2 . 
However, given the projected increase in extreme over-lake thunderstorms, the current vulnerability of local fishing communities 2 , 3 and their growing exposure driven by rapid urbanization along the lakefront 37 , this lake is likely to remain the most dangerous stretch of water in the world. At the same time, our findings mark an opportunity for developing a satellite-based early warning system for hazardous thunderstorms over Lake Victoria. Warning systems deriving predictions from the strong afternoon controls on night-time thunderstorms ( Fig. 3 ) have the potential to substantially reduce the vulnerability of local fishing communities. This would complement ongoing efforts, in particular by the UK Met Office 18 , to provide storm warnings for the region based on numerical weather prediction. This study finally underscores the need for high-resolution projections to assess local climate change, especially in regions with a particular dynamical regime where extreme precipitation responses to anthropogenic climate change may be very different from large-scale projections 11 , 38 , 39 . High-resolution projections accounting for lake–atmosphere interactions are still very rare and may face challenges 27 , 40 , but adopting this approach is critical to assess future climate impacts in regions where lakes are abundant. Methods Overshooting top detections We applied an overshooting top detection algorithm (OTDA) 12 , 14 to the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) infrared satellite data for equatorial East Africa (23° E to 43° E; 11° S to 7° N). The SEVIRI instrument provides images at 15-min temporal and ∼ 4-km spatial resolution over the Lake Victoria region 41 . The OTDA builds on the premise that OTs are composed of a small region of very cold infrared brightness temperatures surrounded by a warmer cirrus anvil cloud 12 , 14 , 15 . 
As OTs penetrate through the level of neutral buoyancy (LNB), they continue to cool at a rate of 7–9 K km −1 making them much colder than the anvil cloud which typically resides between the LNB and the tropopause 42 . The OTDA first identifies candidate OT regions by selecting SEVIRI pixels with an infrared brightness temperatures ≤217.5 K and near to or colder than the tropopause temperature defined by the Modern Era Retrospective analysis for Research and Applications (MERRA). Subsequently the cirrus anvil cloud surrounding a candidate OT region is sampled, and if the candidate is substantially (≥6 K) colder than the anvil it is classified as an OT. Detection thresholds for the OTDA were based on the analysis of a large sample of OT-producing storms depicted within 1-km spatial resolution Moderate-resolution Imaging Spectroradiometer (MODIS) and Advanced Very High Resolution Radiometer (AVHRR) imagery in combination with OT product user feedback from the National Oceanic and Atmospheric Administration (NOAA) operational weather forecasting community. The OTDA finally corrects for parallax errors when locating OT-producing storms, thereby assuming a cloud top height of 16 km. Using this approach, more than 50 million OT pixels were detected from 2005 to 2013 over equatorial East Africa. A single OT is on average composed of 11 OT pixels and typically does not exceed 15 km in diameter. Climate simulations All simulations were performed with COSMO-CLM 2 , which couples the non-hydrostatic regional climate model COSMO-CLM version 4.8 to the Community Land Model version 3.5 (CLM3.5) and the Freshwater Lake model (FLake) 43 , 44 . Detailed descriptions of this state-of-the-art model system and its subcomponents are provided in earlier studies 5 , 43 , 44 , 45 , 46 , 47 , 48 , 49 . The COSMO-CLM 2 model was applied in its tropical configuration 5 , 47 to generate three climate simulations. 
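The two brightness-temperature tests of the OTDA described above can be condensed into a per-pixel predicate. The 217.5 K and 6 K thresholds come from the text; the tolerance for "near to the tropopause temperature" is an assumption, since the excerpt does not quantify it:

```python
# Per-pixel sketch of the OT detection algorithm (OTDA) thresholds:
# candidate pixels must be <= 217.5 K and near/below the MERRA
# tropopause temperature; they are confirmed as OTs when >= 6 K colder
# than the surrounding anvil. trop_tol is an assumed tolerance.

def is_overshooting_top(pixel_bt, tropopause_t, anvil_mean_bt,
                        bt_max=217.5, trop_tol=2.0, anvil_margin=6.0):
    candidate = pixel_bt <= bt_max and pixel_bt <= tropopause_t + trop_tol
    return candidate and (anvil_mean_bt - pixel_bt) >= anvil_margin

print(is_overshooting_top(196.0, 195.0, 210.0))  # True: 14 K colder than anvil
print(is_overshooting_top(212.0, 195.0, 215.0))  # False: too warm vs tropopause
```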
First, a control simulation (CTL) was conducted with the ERA-Interim reanalysis as lateral boundary conditions for the period 1996–2008 and using the 0.44° COSMO-CLM CORDEX-Africa evaluation simulation 47 as intermediate nesting step. The same nesting strategy was employed to dynamically downscale a global climate model (GCM) simulation from the Coupled Model Intercomparison Project phase 5 (CMIP5). The Max Planck Institute MPI-ESM-LR GCM was selected based on its high skill over East Africa 50 . GCM downscalings were performed for the historical reference period 1978–2010 (HIS) and the future projection 2068–2100 under the high emission scenario RCP8.5 (FUT). This scenario was chosen as it is expected to facilitate interpretation by yielding a strong climate response and as it provides the likely upper bound of the changes which may be expected by the end of the century. All experiments were conducted at a horizontal resolution of 0.0625° ( ∼ 7 km), using 50 vertical levels and a time step of 60 s ( Supplementary Table 1 ). Moist convection was parameterized by the Tiedtke mass flux scheme 48 . The model domain encompasses the central part of the East African rift ( Supplementary Fig. 1 ) and therewith includes most of the African Great Lakes. The first 3 years of each simulation were considered as spin-up and excluded from the analysis. Overall, the simulations are designed to simulate the influence of a high-emission scenario on mean and extreme precipitation over and around Lake Victoria. Large-scale precipitation changes (for example, over the whole of East Africa 40 ) and influences of decadal variability 51 , 52 are thereby beyond the scope of this study. Data binning and correction for average drying Binned precipitation changes Δ P bin ( Fig. 2c,d ) were computed using a 1% bin width and assuming tied ranks.
The lake influence on extreme precipitation changes was computed as the ratio between the mean daily precipitation change over lake and land for the highest bin (containing precipitation above the 99th percentile). Uncertainty ranges were derived as the maximum difference between this ratio and the ratio obtained with one standard deviation added or subtracted from the mean, respectively. In addition, the binned change was also corrected for the change in mean precipitation. The correction is performed by subtracting from each precipitation bin change ΔP_bin the fractional contribution to the average precipitation change assuming equal weights (P_bin ΔP/P). By doing so the integral over all bins becomes zero and only the lake influence on the precipitation distribution is retained. Moisture convergence calculation The vertically integrated, instantaneous moisture flux convergence (MFC, kg s−1) over Lake Victoria was computed following MFC = −(1/g) ∮_c ∫_P^{P0} q u_n dp dc (equation (1)) along the red circle denoted in Supplementary Fig. 1. The specific humidity is indicated by q (kg kg−1), u_n is the wind velocity (m s−1) normal to the contour’s outer edge (outward defined positive), g the standard gravitational acceleration and dp the segmented pressure differences (Pa) between the surface pressure P_0 and the pressure P taken at a height of 7 km above sea level. The contour segments dc (m) are defined using the integer midpoint circle algorithm with 1.4° radius and centred at 1° S–33° E ( Supplementary Fig. 1 ). Given the total change in moisture convergence during extremes, ΔMFC = MFC_EX − MFC_CTL, and the change induced by atmospheric dynamics, ΔMFC_dyn, obtained by evaluating the MFC of the extreme periods with the specific humidity rescaled to its climatological average (equation (2)), it is possible to attribute the occurrence of extremes in the present-day climate to dynamic and thermodynamic contributions. In equation (2), subscript CTL corresponds to all days of the CTL simulation and EX only to the 24 h periods (9:00 to 9:00 UTC ) associated with night-time precipitation above the 99th percentile (00:00 to 9:00 UTC ).
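The mean-drying correction described in this Methods subsection, subtracting each bin's proportional share of the mean change so that the corrected bin changes integrate to zero, can be sketched with synthetic numbers:

```python
# Correct binned precipitation changes for the change in the mean:
# from each bin change dP_bin subtract P_bin * dP / P, so the corrected
# changes sum to zero and only the distributional shift remains.

def drying_corrected(p_bins, dp_bins):
    p_total = sum(p_bins)
    dp_total = sum(dp_bins)
    return [dp - p * dp_total / p_total for p, dp in zip(p_bins, dp_bins)]

p_bins = [1.0, 2.0, 3.0, 4.0]      # synthetic present-day precipitation per bin
dp_bins = [-0.2, -0.2, -0.1, 0.3]  # synthetic projected change per bin
corrected = drying_corrected(p_bins, dp_bins)
print(round(sum(corrected), 10))   # ~0.0 by construction
```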
As such, the averaged specific humidities during extremes and the entire time period were used to rescale the moisture content during extremes to climatological values. Equation (2) thus computes ΔMFC assuming no changes in atmospheric moisture content, that is, taking only circulation changes into account (dynamic contribution). In addition, we used this framework to identify the drivers of future changes in extreme precipitation: in this case we only considered the 24 h periods associated with extreme night-time precipitation in the HIS and FUT simulations, respectively. CORDEX ensemble analysis The public domain RCP8.5 ensemble established by the Coordinated Regional climate Downscaling Experiment (CORDEX) for Africa currently consists of 16 members, from which we selected the nine members that compute the two-way lake–atmosphere exchange interactively with a lake model (in casu FLake; Supplementary Table 1 ). Daily precipitation sums of these nine members are available from 1951 to 2100 at 0.44° ( ∼ 50 km) resolution and served as a basis to calculate return period and binned precipitation changes after nearest-neighbour remapping to the COSMO-CLM 2 grid. Data availability All materials that have contributed to the reported results are available upon request, including code and the COSMO-CLM 2 model output (26 TB). CORDEX-Africa simulations, ERA-Interim data and TRMM observations are available from their respective public data portals. Additional information How to cite this article: Thiery, W. et al. Hazardous thunderstorm intensification over Lake Victoria. Nat. Commun. 7:12786 doi: 10.1038/ncomms12786 (2016).
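A sketch of the attribution logic in this subsection: holding the moisture content at its climatological value isolates the dynamic (circulation) part of the moisture-convergence change, and the residual is the thermodynamic part. The rescaling form and all numbers below are assumptions for illustration only; the paper's equation (2) may differ in detail:

```python
# Decompose a change in moisture flux convergence (MFC) into a dynamic
# part (circulation change, moisture held at climatology) and a
# thermodynamic residual. All values are illustrative (units 10^10 kg/d).

def attribute_mfc(mfc_ctl, mfc_ex, q_ctl, q_ex):
    d_total = mfc_ex - mfc_ctl
    d_dyn = mfc_ex * (q_ctl / q_ex) - mfc_ctl  # humidity rescaled to climatology
    return d_dyn, d_total - d_dyn

dyn, thermo = attribute_mfc(26.0, 81.0, 0.0140, 0.0146)
print(round(dyn, 1), round(thermo, 1))  # 51.7 3.3 -> dynamics dominate
```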
Lake Victoria in East Africa will become a hotspot for hazardous thunderstorms due to climate change. This is shown by an international study published in Nature Communications on the 23rd of September. Stef Lhermitte (TU Delft) analysed the differences between storms during the day (which occur mainly over land) and during the night (which occur mainly over the lake). Lake Victoria is divided among Uganda, Kenya, and Tanzania. With a surface area of close to 70,000 km², it is the biggest lake in Africa. The lake is also a notoriously dangerous place for the 200,000 people who go fishing there at night. The International Red Cross estimates that between 3,000 and 5,000 fishermen per year lose their lives in violent storms on the lake.

Difference between day and night

That Lake Victoria can be so stormy at night is related to the circulation in the atmosphere above the enormous water surface, explains Lhermitte, who was working at KU Leuven (Belgium) at the time of the research. During the day, a breeze develops that flows from the cool water towards the warm land. At night, the opposite occurs: the land breeze flows away from the cooling land towards the warmer lake. As the lake is shaped like a circle, these land breezes from all directions converge above the lake. Add evaporation to this cocktail and you get a lot of storms, rain, wind, and waves.

Satellite observations

The scientists were able to provide scientific evidence for this pattern in collaboration with the American space agency NASA. Thanks to new NASA satellite products they were able to map the number of hazardous thunderstorms and their locations in East Africa, every 15 minutes for a period ranging from 2005 to 2013. During the day, most storms rage over the surrounding land, especially the typical afternoon thunderstorms that are caused by local upsurges of warm air. At night, these storms concentrate over Lake Victoria.
[Image: The new NASA satellite observations clearly show the day-and-night rhythm of the climate above and around Lake Victoria (day image on the left, night image on the right). The darker the image, the more storms were counted there between 2005 and 2013. Credit: Delft University of Technology]

Hotspot for night-time storms

To predict the impact of climate change on this process, the team, led by Wim Thiery (KU Leuven and ETH Zurich), also ran climate simulations using an advanced computer model: "If we start from a business-as-usual scenario, whereby the emission of greenhouse gases continues to increase, the extreme amounts of rainfall over Lake Victoria will increase by twice as much as the rainfall over the surrounding land. As a result, the lake will become a hotspot for night-time storms. Superstorms that only occur once every 15 years today will occur almost every year by the end of the century."

Warning system

The scientists plan to do further research to optimize existing warning systems for local fishermen. The results make it possible to better predict extreme storms over the lake and to reduce the vulnerability of the local fishermen. In the meantime, a prototype of a new warning system has been developed.
10.1038/ncomms12786
Earth
Today's ocean models can only simulate less than 5% of the currents at 1,000-meter depth
Fenzhen Su et al, Widespread global disparities between modelled and observed mid-depth ocean currents, Nature Communications (2023). DOI: 10.1038/s41467-023-37841-x Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-37841-x
https://phys.org/news/2023-05-today-ocean-simulate-currents-meter.html
Abstract The mid-depth ocean circulation is critically linked to actual changes in the long-term global climate system. However, in the past few decades, predictions based on ocean circulation models highlight the lack of data, knowledge, and long-term implications in climate change assessment. Here, using 842,421 observations produced by Argo floats from 2001-2020, and Lagrangian simulations, we show that only 3.8% of the mid-depth oceans, including part of the equatorial Pacific Ocean and the Antarctic Circumpolar Current, can be regarded as accurately modelled, while other regions exhibit significant underestimations in mean current velocity. Knowledge of ocean circulation is generally more complete in the low-latitude oceans but is especially poor in high latitude regions. Accordingly, we propose improvements in forecasting, model representation of stochasticity, and enhancement of observations of ocean currents. The study demonstrates that knowledge and model representations of global circulation are substantially compromised by inaccuracies of significant magnitude and direction, with important implications for modelled predictions of currents, temperature, carbon dioxide sequestration, and sea-level rise trends. Introduction The oceans, covering more than 70% of the earth’s surface and containing 95% of all water, critically modulate climate change through carbon dioxide sequestration and by absorbing excess heat 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 . Dynamically, the large-scale ocean circulation plays a key role in the Earth’s climate system by redistributing the heat of the ocean 12 , 13 . Stratification arises through the processes of vertical mixing, one of the products of which is the mid-depth ocean near 1000 m between variable waters near the surface and stable waters at greater depth 14 . Interactions between surface currents and thermohaline currents inject changes into the mid-depth ocean near 1000 m. 
In turn, mid-depth ocean circulation has a significant impact on the global ocean as a whole by altering the physical characteristics of ocean waters 15 , 16 , 17 , 18 that play a fundamental role in the longer-term evolution of Earth’s climate system. However, knowledge of mid-depth ocean circulation is hampered by the lack of direct observations, and novel methods are therefore needed to improve data coverage and modelling accuracy in order to resolve the mechanisms of mid-depth ocean circulation and related processes. In the past two decades, over 4000 floats from the Argo program provided more than two million profiles of water column properties from the upper 2000-m depth of the global ocean 10 , 19 , 20 . These enable the construction of satellite-tracked sea-surface profiles and cycles for direct estimation of mid-depth ocean velocities, and spatio-temporal circulation patterns from regional to global scales 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 . Argo velocities derived in this manner are Lagrangian, whereas ocean circulation models are Eulerian, resulting in systematic biases of current velocity when the coordinates are transformed 34 , 35 , 36 . Based on simulated Argo experiments 37 it has been suggested that Argo floats accelerate near strong mean flows due to the eddy‐mean flow interaction effect, although other potential biases and their implications for models remain highly uncertain. Using comprehensive and quantitative methods, we present here the detailed assessment and validation of global ocean circulation model skills near 1000 m depth. Results Sparse well-modelled mid-depth ocean Lagrangian velocities near 1,000 m depth for all the earth’s oceans were computed and multiple accuracy indicators were used to compare Argo float velocities with simulated velocities from global circulation models. The assessment is based on 842,421 Argo float observation displacements over 20 years, together with float simulation experiments.
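Each of the 842,421 displacement "observations" reduces to a drift distance between two surface fixes divided by the time spent at the parking depth. A minimal haversine-based sketch; the positions, cycle length, and function name are hypothetical, and real ANDRO processing additionally corrects for surface drift:

```python
# Mean park-depth drift speed from one Argo cycle: great-circle distance
# between descent and ascent positions divided by the parking time.
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def park_velocity_cm_s(lat1, lon1, lat2, lon2, park_days):
    """Haversine displacement speed (cm/s) over one parking phase."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    dist_m = 2.0 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return dist_m / (park_days * 86400.0) * 100.0

# A float drifting 0.1 degrees east along the equator in a 9-day cycle:
print(round(park_velocity_cm_s(0.0, 0.0, 0.0, 0.1, 9.0), 2))  # 1.43
```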
The goal is to map the areas of the oceans that are more reliably modelled and classify them in terms of how much is known about spatial variations in mid-ocean circulation. Global 1000-m ocean velocities were obtained from three eddy-permitting and daily-mean ocean general circulation models, including ECCO2 (Estimating the Circulation and Climate of the Ocean model, Phase II), OFES (the Ocean General Circulation Model for the Earth Simulator) and CMEMS GLORYS12 (Global Ocean Reanalysis and Simulations), and Argo velocities were acquired from the ANDRO dataset. Indicators of differences (see Supplementary Materials for details) comprise: DOD (Difference of Direction); DOV (Difference of Velocity); PDOV (Percentage of Velocity Difference); SD (Separation Distance); SS (Skill Score based on the cumulative simulated separation distance normalized by the cumulative observed displacement). We describe the distribution of each of these statistics before presenting the results of cluster analysis to reveal the linkages between indicators and overall spatial patterns of the clusters. Fig. 1 shows the global spatial distribution of the well-modelled areas, defined as areas where DOD is less than 30° and PDOV is less than 50%. The two most critical indicators are adopted, and the effect of parameter thresholds on the results can be seen in Supplementary Table 1 . Overall, only 1.3–3.8% of the global oceans are considered to have a robust or reliable prediction of the 1000-m depth circulation and hence, improved velocity field magnitude and direction data are needed for more than 96% of the global ocean. This deficiency therefore calls into question the reliability of predictions of the long-term implications of mid-depth ocean circulation changes. 
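As an illustration of how the well-modelled classification can be computed, the sketch below applies the paper's two thresholds (DOD below 30° and PDOV within 50%) to gridded indicator fields. The random 1°×1° grids are synthetic stand-ins, and the variable names and the use of |PDOV| for the percentage threshold are our assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical 1° x 1° global grids of the two key indicators:
# dod  : mean difference of direction, in degrees
# pdov : percentage of velocity difference, in percent (can be negative)
rng = np.random.default_rng(0)
dod = rng.uniform(0.0, 180.0, size=(180, 360))
pdov = rng.uniform(-200.0, 200.0, size=(180, 360))

# A cell counts as "well-modelled" when DOD < 30 degrees and |PDOV| < 50%.
well_modelled = (dod < 30.0) & (np.abs(pdov) < 50.0)

# Fraction of grid cells passing both thresholds, in percent.
fraction = well_modelled.mean() * 100.0
print(f"well-modelled fraction: {fraction:.2f}%")
```

With the study's real indicator grids in place of the random arrays (and an area weighting per cell), `fraction` would correspond to the 1.3–3.8% figures reported for the three models.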
Meanwhile, several regions are better modelled, including the Antarctic Circumpolar Current and the equatorial Pacific Ocean, which is the largest spatially continuous well-modelled area and hence permits more reliable estimates of ocean current dynamics in the 1000-m layer. Fig. 1: Global distribution of well-modelled areas for each model: 1.415% for ECCO2 (a), 1.325% for OFES (b), and 3.843% for CMEMS (c). The well-modelled areas are highlighted by blue dots. Full size image Spatial patterns of widespread global disparities The global distribution of DOD in Fig. 2 shows that DOD in at least 81.1% of the oceans is 45° or more (as shown by the most accurate model, CMEMS), meaning that accurate simulation of circulation direction at 1000-m is lacking in more than four-fifths of the global oceans. However, there are considerable areas with better representation, where DOD is below 45°, notably in the equatorial Pacific Ocean and the Indian Ocean but also parts of the Antarctic Circumpolar Current, the Agulhas Current, the Gulf Stream and the Brazilian Current (shown as the darkest cool colors in Fig. 2 ). Fig. 2: Global spatial distribution of DOD (Difference of direction) and DOV (Difference of velocity) for ECCO2 (a, b), OFES (c, d), CMEMS (e, f ). The ( a, c, e ) panels: mean DOD during 2001–2020 (shaded color). Warm (Cool) color indicates that observed velocity of float displacements exceeds (lags) simulated velocity. The ( b, d, f ) panels: averaged DOV during 2001–2020 (shaded color). Full size image The global spatial distribution of DOV for ECCO2 and OFES in Fig. 2 and Supplementary Fig. 2 shows that observed velocities are greater than simulated values near the equator (6°N-10°S) and especially across most of the western boundary current regions. Globally, DOV for ECCO2 exceeds 3 cm/s across 51.12 million km 2 . 
Regions with the most significant positive DOV anomalies include the Agulhas Current and Gulf Stream, where DOV reaches its highest values (16 cm/s) and PDOV reaches a maximum of 92.7%. In the Antarctic Circumpolar Current regions, positive and negative values of DOV are interleaved, but negative PDOV dominates the overall spatial pattern: simulated velocities are in general much greater than observed values, and exceed twice the observed velocities across 1.02 million km 2 of the global oceans (PDOV < −100%). As for CMEMS (Fig. 2-f ), similar underestimation of velocities is shown near the equator and parts of the western boundary current regions. Significant kinetic deviations exist in zonal and meridional velocity (Supplementary Fig. 10 ) and in the mean velocity fields (Supplementary Fig. 11 ). These systematic biases may be caused by a lack of eddy kinetic energy (Supplementary Fig. 12 ). Significant underestimation of eddy kinetic energy is observed in almost all the western boundary current regions. We also analyzed the SD and SS to provide an overall combined accuracy statistic. As shown in Supplementary Fig. 3 and Supplementary Fig. 4 , the spatial pattern of SD corresponds very well with observed velocity measurements (Supplementary Fig. 11-a ); high SD occurs in the Southern Ocean, in the western boundary current regions, and near the equator. The SS in the low-latitude oceans is generally greater than in other oceans in ECCO2 and OFES, although sporadic high SS signals are also seen in the middle to high-latitude oceans. Meanwhile, SS for CMEMS shows high signals in both the low-latitude oceans and the Antarctic Circumpolar Current regions. The results suggest that there is a relatively accurate prediction of circulation characteristics across large areas of the low-latitude oceans, viz. ~8°N to 8°S (Fig. 1 ). Overall, relatively high SS suggests adequate agreement between observed and simulated current directions (DOD < 45°, Fig. 
2 ) in most of the equatorial Pacific Ocean, the north Indian Ocean, and part of the middle Atlantic Ocean. Positive continuous spatial characteristics of SS (>0.5) are evident in both the equatorial Pacific Ocean and the northern Indian Ocean. In addition, there are positive signals (such as DOD and SS) indicating that the Antarctic Circumpolar Current regions are better modelled, but the evidence is relatively weak and even then only for the CMEMS simulation. In the mid-latitude oceans, the strong and persistent western boundary currents show characteristics different to those of other open oceans. In these regions, both simulated and observed velocities are high (Supplementary Fig. 11 ). DOD results are poor (typically > 80° in ECCO2 and OFES) in most of the mid-latitude oceans. Moreover, there are markedly positive DOV signals (> 3 cm/s) in the main pathways of the Agulhas Current, Gulf Stream, Kuroshio Current, Brazilian Current and East Australian Current. For the Gulf Stream (together with its extension), observed velocities exceed those simulated in all models, even up to 70% (Supplementary Fig. 2 ). In addition, SS is generally low (SS < 0.4) across the mid-latitude oceans. Of the high-latitude oceans, the Southern Ocean, most especially around the Antarctic Circumpolar Current (ACC) system, exhibits relatively good agreement between simulated and observed displacements (Fig. 1 ; Fig. 2 ). The core pathways of the ACC are reflected in the mean velocity of Argo floats, revealing strong jet currents, which are also evident in the south Indian Ocean, south Pacific Ocean and southwest Atlantic Ocean (Supplementary Fig. 11-a ). Several of the indicators exhibit clear, spatially continuous patterns along the current pathways, with lower DOD, higher SD and both positive and negative DOV signals. Exceptionally high simulation velocities (PDOV is < −100% in some cases) are a key feature of these regions. 
But SS values differ between models, showing lower signals in ECCO2 and OFES, and higher in CMEMS (generally above 0.5). Statistical characteristics of widespread global disparities Based on statistical K-means clustering analysis of the indicators (see Supplementary Table 2 ), the global ocean yields four clusters, each with a characteristic coherent profile (Fig. 3 ). A visual assessment of the clusters in Fig. 3-b leads to the following observations relating to accuracy and spatial distribution of the different indicators. Clusters are clearly distributed along latitudinal zones, although the boundary current regions, particularly evident in Cluster-4, do extend longitudinally. Spatial patterns are strongly evident, especially for Cluster-3 and Cluster-4 which are distributed in low latitudes and higher latitudes, while other cluster areas mainly occur at mid-latitudes. Fig. 3: Multivariate K-means clustering analysis results for 4 clusters. a The clustering map shows spatial distribution, and ( b ) the column chart summarizes normalized indicators of displacement disparity for each cluster. c The separate clustering maps show the spatial distribution of each cluster. The clustering indicators are DOD (Difference of direction), DOV (Difference of velocity), SD (Separation distance) and SS (Skill score). Full size image The largest cluster, Cluster-1, covers 40.4% of the oceans with significantly higher DOD, and is widely distributed in the mid-latitudes, including the extended boundary currents regions. Although distributed in similar latitudes, Cluster-2 exhibits poorer ocean current accuracy, with the highest values for DOD and lowest values for SS (average below 0.2), while DOV in this cluster is negative, indicating that velocity is markedly overestimated here. 
Cluster-3 occupies 14.2% of the oceans, including parts of the equatorial Pacific Ocean, the north Indian Ocean and the Antarctic Circumpolar Current, where DOD, DOV and SD all lie below global mean values. Ocean current accuracy is adequate in the regions where this cluster is located. Cluster-4 has the highest DOV and SD values along with higher velocity, and is located across almost all of the Western Boundary Currents and their extensions, as well as most of the Antarctic Circumpolar Current. Accordingly, the ocean current velocity in these regions is severely underestimated. Discussion Overall, our integrated assessment analyses show significant and extensive discrepancies between modelled and observed velocity fields across the oceans, mainly associated with energy underestimation and direction bias. On a global scale, those areas of the oceans that are more accurately modelled include the equatorial Pacific Ocean, the northern Indian Ocean, and part of the Southern Ocean. Regionally, the low-latitude oceans including the equatorial Pacific Ocean, the north Indian Ocean and the middle of the Atlantic Ocean exhibit better agreement in terms of both amplitude and direction of circulation dynamics compared with the middle and high-latitude oceans. Estimation of circulation needs to be improved in the mid- to high-latitudes, where underestimation of average current energy and marked deviations in direction are both evident. The Western Boundary Currents, the most prominent features of mid-latitude ocean circulation, are known for their complex current structures and topographic interaction processes and prove particularly challenging to estimate accurately. Of particular importance is the finding that circulation energy is underestimated across almost all of the global oceans. 
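To make the clustering step from the previous section concrete, here is a minimal NumPy sketch of the four-cluster K-means analysis on z-scored indicators. The paper does not name its clustering implementation, so this plain Lloyd's algorithm and the synthetic per-cell indicator values are illustrative assumptions only.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest center (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

# Synthetic per-grid-cell indicator table, one row per 1-degree cell:
# columns are DOD (deg), DOV (cm/s), SD (km), SS (score 0-1).
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0.0, 180.0, 5000),
    rng.normal(0.0, 5.0, 5000),
    rng.uniform(0.0, 500.0, 5000),
    rng.uniform(0.0, 1.0, 5000),
])

# Normalize each indicator to zero mean and unit standard deviation,
# then group the cells into four clusters, as in the paper.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
labels = kmeans(Xz, k=4)
```

Replacing the synthetic table with the real rasterized indicators would reproduce the kind of latitudinally banded cluster map shown in Fig. 3.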
This arises in part because high-frequency dynamics are poorly resolved in ocean circulation models and also due to inadequate solutions of sub-grid processes, such as the lack of eddy kinetic energy illustrated in Supplementary Fig. 12 . Moreover, parameterizations of sub-grid processes introduce a series of errors, a situation which remains one of the foremost challenges in ocean modelling 38 , 39 . Taking tidal processes as an example, previous studies have confirmed that tidal currents traversing rough bathymetry can induce turbulence and mixing that accelerate mid-depth ocean circulation 40 , 41 , 42 . Our analysis further demonstrates that circulation accuracy tends to be poor in oceans with slower mean currents, such as the north Pacific Ocean and the north Atlantic Ocean. There is evidence of intrinsic chaotic variability in these oceans that is not easily resolved by ocean circulation models 43 . Mesoscale features and unstable processes such as eddies and instability waves may also result in simulation errors, especially in regions of weak current amplitude 44 . As noted by Fox-Kemper, Adcroft et al. 38 , numerical and parameterization improvements will continue to define the state of the art in ocean circulation modelling. In the future, we expect ocean circulation models to represent observed quantities of ocean flows more faithfully through: (a) More intensive and qualified observations. There is a pressing need for more observations in the mid-depth ocean. A proposal to improve the sampling density of the Argo program, particularly in parts of the world ocean where climate impacts are especially strong, was raised by Riser, Freeland et al. 20 . Here we recommend enhancement of observations in the mid-depth ocean, such as by increasing float deployments and adding an inertial navigation component to float design, which could provide positions at greater frequency during the parking period. (b) More productive parameterization. 
The ocean is a non-steady, non-uniform dynamical system with high-frequency variations. Thus, productive parameterization is particularly challenging and much needed. To strengthen the prediction skill of mid-depth ocean currents, the parameterizations used in global modelling need to be improved, for example depth-dependent eddy diffusivity and viscosity 45 , 46 , internal mixing with spatio-temporal variability 47 , and ice-ocean basal friction 48 , 49 . (c) Finer resolution. With the rapid growth of high-performance computing techniques and cloud-based platforms, higher-resolution global oceanic circulation modelling systems may provide better estimates for certain regions 50 , 51 , 52 . (d) Last but not least, we would like to point out the recent breakthrough in modelling large-scale ocean currents through exact solutions to the governing equations of geophysical fluid dynamics 53 , 54 , 55 , 56 , 57 . This kind of pioneering mathematical analysis of geophysical currents may prompt further in-depth theoretical studies 58 . Possible sources of uncertainty in this study are as follows. Firstly, observation of absolute velocity based on the Argo float satellite-tracked system (ARGOS and GPS) is subject to a systematic displacement error of between 8 and 1500 m 33 , 59 . Unpredictable float movement at the ocean surface occurs frequently due to wind, wave and surface current processes. Shear displacement caused by vertical currents during descent and ascent is an additional source of error. The ANDRO dataset has been partially corrected for both types of error, and the remaining error could be further reduced, for example by using an Argo simulation system to derive accurate systematic errors 37 . A further caveat is that ocean circulation models generate mean conditions and are not suited to resolving fine-scale spatial and high-frequency temporal variability. The output of the ocean circulation models used in our work has similar deficiencies. 
Notwithstanding these uncertainties, our study offers important insights that improve awareness of the representational accuracy of ocean circulation in models and can guide enhanced regional observations. The study highlights discrepancies between modelled and observed ocean circulation in terms of spatial structure, intensity, and direction that must be accounted for in ocean modeling and prediction. In so doing, our analysis exposes those areas of the mid-depth ocean that are most poorly resolved in terms of circulation and that should, therefore, be prioritized for observation and simulation towards the goal of optimizing ocean circulation models. In conclusion, mean current velocities in the mid-depth oceans appear to be significantly underestimated because high-frequency dynamical processes, such as tidal mixing, are not well captured in simulations. Although the equatorial Pacific Ocean, north Indian Ocean and part of the Southern Ocean are better modelled, excessive direction biases (over 75°) are detected across more than 60% of the global oceans, especially in areas where the circulation system is weak. Given deficiencies in the observational record, such stochasticity needs to be incorporated into ocean models through parameterization of eddy diffusivities and mixing parameters. Clear patterns of spatial clustering are revealed, with several assessment indicators of well-modelled oceans changing rapidly, on average, with increasing latitude. The major contribution of this study lies in revealing the nature and scale of the disparity between our scientific knowledge of ocean circulation and the actual ocean environment. Our analysis can help guide recommendations for more intensive observation and modeling approaches in order to decrease the extensive and significant biases between models and observations. 
Methods Argo displacements dataset Calculation of 1000-m current velocities from Argo floats relies on measurement of displacement, which is the difference between the location of the last surface-fixed position before departure from the sea surface and the first fixed position after arrival at the sea surface, together with the time interval, typically 9–10 days 20 , 33 , 60 , 61 . For observations, in this study we use the ANDRO (Argo New Displacements Rannou and Ollitrault) displacements 59 . The ANDRO has over 843,000 displacements in individual cycles operating from September 1995 to July 2020. Maximum coverage density is 4.32 floats/100 km 2 . ANDRO covers 86.8% of the global ocean. Of these areas, 34.7% have a total parking time of 300 days, 60.3% have 200 days, and 84.8% have 100 days. Global ocean circulation models velocity dataset Global velocity fields used for Lagrangian tracking analysis are provided by the ECCO2 (Estimating the Circulation and Climate of the Ocean model, Phase II), OFES (the Ocean General Circulation Model for the Earth Simulator) and CMEMS GLORYS12 (Global Ocean Reanalysis and Simulations). ECCO2 produces a physically consistent estimate of global ocean circulation. The circulation model assimilates a huge number of in-situ and satellite observations based on the Massachusetts Institute of Technology general circulation model (MITgcm), using Green’s function optimization 62 . Here, we used the three-day current velocity field datasets from 2001 to 2020, with an eddy-permitting horizontal resolution (approximately 18 km), and 50 vertical levels within the depth range 5 to 5906 m. OFES can perform fifty-year integrations of eddy-resolving ocean simulations in the world ocean 63 . It is based on the Modular Ocean Model (MOM3) which was developed at the Geophysical Fluid Dynamics Laboratory/the National Oceanic and Atmospheric Administration (GFDL/NOAA). 
The output over 2001-2019 with 1-day averages and 1/10° horizontal resolution is used. OFES simulations were conducted on the Earth Simulator with the support of JAMSTEC (Japan Agency for Marine-Earth Science and Technology). The CMEMS GLORYS12 reanalysis provides a 3D description of the ocean circulation at the mesoscale 64 . It was designed and implemented using the current real-time global forecasting CMEMS (Copernicus Marine Environment Monitoring Service) system and is driven by the NEMO3.1 ocean/sea-ice general circulation model. Ocean observations were assimilated by a reduced-order Kalman filter, including along-track altimeter sea-level anomaly, satellite sea-surface temperature, sea-ice concentration, and in-situ temperature and salinity (T/S) vertical profiles. The reanalysis covers the 1993-2020 period, with a 1/12° horizontal resolution and 50 vertical levels. The daily mean velocity output was used in this work. Regional ocean circulation model velocity dataset Regional velocity fields used for the kinetics comparison test are provided by DopAnV3R3-ini2007 65 , 66 (Doppio Analysis Version 3 Release 3). DopAnV3R3-ini2007 is a ROMS-based (Regional Ocean Modeling System) data-assimilative reanalysis focusing on the Mid-Atlantic Bight and Gulf of Maine regions of the northwestern North Atlantic. The reanalysis uses four-dimensional variational (4D-Var) data assimilation (DA) of comprehensive observational data from in situ platforms, coastal radars, and satellites 66 . The velocity fields output from DopAnV3R3-ini2007 (available online ) are provided on a 7-km horizontal grid with 40 vertical levels from the coastal ocean to the deep sea and cover the period 2007-2021. 
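The displacement-to-velocity conversion underlying the ANDRO dataset (last surface fix before descent, first fix after ascent, divided by the parking interval) can be sketched as follows. The function name and the local flat-Earth approximation for converting degrees to metres are our own simplifications, not ANDRO's processing code.

```python
import numpy as np

R_EARTH = 6_371_000.0  # mean Earth radius, metres

def cycle_velocity(lat0, lon0, lat1, lon1, dt_days):
    """Mean parking-depth velocity (u, v) in cm/s for one Argo cycle,
    from the surface fix before descent (lat0, lon0) and the first fix
    after ascent (lat1, lon1), separated by dt_days of drift."""
    dt = dt_days * 86400.0  # seconds
    lat_mid = np.radians(0.5 * (lat0 + lat1))
    # Local flat-Earth displacement components, in metres.
    dx = np.radians(lon1 - lon0) * R_EARTH * np.cos(lat_mid)  # zonal
    dy = np.radians(lat1 - lat0) * R_EARTH                    # meridional
    return dx / dt * 100.0, dy / dt * 100.0  # cm/s

# Example: a float surfacing 0.1 degrees of longitude east of its dive
# position at 30N after a 10-day cycle drifts at roughly 1.1 cm/s.
u, v = cycle_velocity(30.0, 150.0, 30.0, 150.1, 10.0)
```

Errors in the surface fixes and surface drift before dive detection are exactly the displacement uncertainties (8 to 1500 m) discussed in the uncertainty section above.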
Kinetics comparison test To assess the potential under-representation of dynamics below the mesoscale in global ocean circulation models, the velocity fields output from DopAnV3R3-ini2007 (a ROMS-based reanalysis) are compared with the velocity fields output from the global ocean circulation models. The Lagrangian simulation analysis described in the next section is used here. The simulation traverses the Gulf Stream zone and covers 2014–2018, viz. the 5 years with the most Argo displacements; no diffusivity is set here. For each model, 1102 simulated floats (equal to the number of Argo displacements available) are released. The velocity comparison between simulated floats advected by the ROMS model and by the global models is shown in Supplementary Fig. 5 . Lagrangian simulation At the parking stage, Argo floats drift near the 1000 m depth. To reproduce the behavior of real floats during their parking stage, Lagrangian particle tracking analysis was applied. Lagrangian tracking analysis has been used in many studies to model seawater motion and material transport 67 . We used the freely available open-source Lagrangian particle trajectory tool OpenDrift 68 to compute simulated float trajectories. The spatial and temporal velocity fields were smoothed using bilinear interpolation. A second-order Runge-Kutta scheme was adopted as the advection method to derive numerical particle positions (we refer to the simulated Argo float as a “numerical particle”). The analysis tool is designed to operate in off-line mode, so computation of simulated float trajectories relies on the Eulerian velocity field output from the ocean circulation model. Varying diffusivities were added to the simulations of the different models; the random diffusivity term is described in the next section. In our experiment, numerical particles were advected with the velocity field output of the global ocean models. 
Each Argo cycle whose parking depth lies between 950 and 1050 dbar has a corresponding numerical simulation particle. To ensure a strict comparison with real Argo floats, the starting position, parking depth and parking duration of the observed trajectories were applied uniformly to all numerical particles. The time step of forward numerical particle trajectory prediction was set to 1 hour and prediction results were stored at 1-day intervals. This Lagrangian simulation allows the generation of float clusters, improving the statistical significance of the outputs. A number of virtual floats are released at the starting position of each Argo cycle. The centroid of the displacement probability density cloud is taken to be representative of the float cluster displacement. The cluster size (that is, the number of virtual floats per release) is determined by a spatial coverage test as shown in Supplementary Fig. 8 . We selected the velocity field of OFES because it has the largest diffusivity among the models, thereby ensuring a robust value. Given the large number of simulations performed, we selected a cluster size of 100 to balance statistical accuracy and computational cost. Stochastic diffusivity test As emphasized in a series of other studies, the diffusivity used in Lagrangian analysis is a significant issue 67 , 69 , 70 . In general, a stochastic term is used in Lagrangian analysis to model stochastic fluctuations, including subgrid-scale diffusion and unresolved physics such as eddies, waves, turbulence or mixing processes. A map of diffusivity may be estimated from float observations 71 , 72 , 73 . However, different diffusivities have been set up for the ocean general circulation models 63 , 64 , so here horizontal isotropic diffusivities account for the dynamic processes that are not fully resolved by the global ocean circulation models 37 . The random walk in the particle tracking is implemented as a Wiener noise process. 
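A minimal sketch of the advection scheme described above: one second-order (midpoint) Runge-Kutta step with an optional Wiener random-walk term for the horizontal diffusivity. This illustration works in local Cartesian coordinates for simplicity; it is not OpenDrift's actual implementation, which advects particles in longitude/latitude.

```python
import numpy as np

def rk2_step(pos, velocity, dt, kh=0.0, rng=None):
    """Advance a particle one step with the midpoint (RK2) scheme.
    `velocity(pos)` returns the local (u, v) in m/s; `pos` is (x, y) in
    metres; kh is a horizontal isotropic diffusivity in m^2/s."""
    k1 = np.asarray(velocity(pos))
    k2 = np.asarray(velocity(pos + 0.5 * dt * k1))  # midpoint velocity
    new_pos = pos + dt * k2
    if kh > 0.0 and rng is not None:
        # Wiener-process random walk: per-component std = sqrt(2 * kh * dt)
        new_pos = new_pos + rng.normal(0.0, np.sqrt(2.0 * kh * dt), size=2)
    return new_pos

# Example: uniform 0.05 m/s eastward flow, hourly steps over a 10-day
# parking period; deterministic drift is 0.05 * 864000 s = 43.2 km east.
flow = lambda p: (0.05, 0.0)
pos = np.zeros(2)
for _ in range(24 * 10):
    pos = rk2_step(pos, flow, dt=3600.0)
```

Passing `kh=1100.0` (the ECCO2 value found below) and a `numpy` generator would scatter a cluster of such particles into the displacement probability cloud whose centroid is compared against the observed float.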
To quantify suitable diffusivities, a stochastic diffusivity test was conducted by Lagrangian simulation in the Gulf Stream zone only. As previous studies have noted, validation against high-resolution model simulations is essential to ensure the applicability of the stochastic term 67 , 73 . We conducted the test by comparing the Lagrangian simulation results between the global models and ROMS. No diffusivity was set for the ROMS simulation. Supplementary Fig. 6 visualizes the changes in kinetics, indicated by absolute velocity, for each model as the diffusivity varies. For the simulations of ECCO2 and OFES, the kinetic distributions gradually approach that of ROMS as the diffusivity increases, with the frequency tending to decrease for small velocities ( \( < \) 10 cm s −1 ) and increase for large velocities ( \( > \) 10 cm s −1 ). The applicable value appears to lie between about 1000 and 1500 m 2 s −1 ; further tests, detailed below, provide more precise values. In contrast, the kinetic distribution of the CMEMS GLORYS12 simulations moves away from that of ROMS as the diffusivity departs from zero. Accordingly, its stochastic diffusivity is set to 10 −5 m 2 s −1 , which is effectively zero 74 . A non-zero diffusivity is nevertheless sensible, since it allows floats to deviate from deterministic trajectories through sub-grid scale diffusion. To further test the diffusivity values for ECCO2 and OFES, correlations of probability density functions of velocities were calculated, as shown in Supplementary Fig. 7 . Linear correlations between each simulation at a given diffusivity and the ROMS simulation are calculated. According to the maximum correlation, the diffusivity for ECCO2 is 1100 m 2 s −1 and the value for OFES is 1400 m 2 s −1 . These diffusivities agree reasonably well with the global average value estimated by Cole et al. 75 . Moreover, the Kolmogorov-Smirnov test results show that the simulated kinetic distribution is consistent with that of the observed kinetics. 
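The PDF-correlation selection of diffusivity can be mimicked as below: speed histograms for candidate diffusivities are correlated against a reference (ROMS-driven) distribution, and the best-correlated candidate wins. The function name and the Rayleigh-distributed synthetic speeds are illustrative assumptions, not the paper's data.

```python
import numpy as np

def pick_diffusivity(ref_speeds, sim_speeds_by_kh, bins=50):
    """Return the diffusivity (m^2/s) whose simulated speed histogram has
    the highest linear correlation with the reference distribution."""
    hi = max(ref_speeds.max(), max(s.max() for s in sim_speeds_by_kh.values()))
    edges = np.linspace(0.0, hi, bins + 1)
    ref_pdf, _ = np.histogram(ref_speeds, bins=edges, density=True)
    best_kh, best_r = None, -np.inf
    for kh, speeds in sorted(sim_speeds_by_kh.items()):
        pdf, _ = np.histogram(speeds, bins=edges, density=True)
        r = np.corrcoef(ref_pdf, pdf)[0, 1]
        if r > best_r:
            best_kh, best_r = kh, r
    return best_kh, best_r

# Synthetic check: the kh = 1100 m^2/s run is drawn from the same speed
# distribution as the reference, so it should be selected.
rng = np.random.default_rng(2)
ref = rng.rayleigh(10.0, 5000)
candidates = {
    500: rng.rayleigh(6.0, 5000),
    1100: rng.rayleigh(10.0, 5000),
    1500: rng.rayleigh(14.0, 5000),
}
best_kh, best_r = pick_diffusivity(ref, candidates)
```

A two-sample Kolmogorov-Smirnov test (e.g. `scipy.stats.ks_2samp`) on the winning candidate would mirror the consistency check mentioned at the end of this section.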
Quantitative difference framework To assess the modelling accuracy of global ocean circulation, a quantitative difference framework was used. The framework was designed for displacement comparison between observed and simulated floats. Three kinds of indicators quantify circulation differences using float movement characteristics. The first indicator is DOD (Difference of Direction), which represents the angle between two vectors derived from observed and simulated displacement. The geometric meaning of DOD is similar to the cosine similarity used in trajectory similarity studies 76 . The DOD is calculated as follows: $${DOD}={{{{{\rm{atan}}}}}}\, \left({y}_{{v}_{{{obs}}}},\, {x}_{{v}_{{{obs}}}}\right)-{{{{{\rm{atan}}}}}}\, \left({y}_{{v}_{{{sim}}}},\, {x}_{{v}_{{{sim}}}}\right)$$ (1) where \({v}_{{{{{{\rm{obs}}}}}}}\) and \({v}_{{{{{{\rm{sim}}}}}}}\) are velocity vectors (in x, y space) exported from observed and simulated float displacement, respectively. Secondly, velocity related indicators include DOV and PDOV. DOV (Difference of Velocity) is the magnitude of velocity difference between observed and simulated displacements: $${{DOV}}=\left|{{v}}_{{obs}}\right|-\left|{{v}}_{{sim}}\right|$$ (2) PDOV (Percentage of Velocity Difference) represents the percentage-ratio of velocity difference compared to absolute observed velocity: $${{PDOV}}=\left(\frac{|{v}_{{{obs}}}|-|{v}_{{{sim}}}|}{|{v}_{{{obs}}}|}\right)\times 100\%$$ (3) The third indicator included two universal displacement measures to quantify the movement consistency of the floats. SD (Separation Distance) is the distance between the end points of the simulated and observed float displacements, demonstrating movement differences directly. 
SS (Skill Score) is based on the cumulative simulated separation distance normalized by the cumulative observed displacement length 76 : $${{\mbox{SS}}}=\left\{\begin{array}{c}1-\frac{{{\mbox{c}}}}{{{\mbox{n}}}},\quad \left({{\mbox{c}}}\le {{\mbox{n}}}\right)\\ 0,\quad\quad\quad \left({{\mbox{c}}}\, > \, {{\mbox{n}}}\right)\end{array}\right.,$$ (4) where n is a tolerance threshold and c is the normalized cumulative Lagrangian separation distance. An estimate of c is obtained by dividing the cumulative Lagrangian separation distance (SD) by the cumulative length of the observed displacement (L): $$c=\mathop{\sum }\limits_{i=1}^{N}{{{{{{\rm{SD}}}}}}}_{i}\bigg/\mathop{\sum }\limits_{i=1}^{N}{L}_{i},$$ (5) where i = 1, 2,…, N , and N is the total number of days. As for the tolerance threshold ( n ), we selected n = 1.8, considering the expectations and requirements of the simulated model. SS has been used in several other studies to assess numerical ocean circulation model accuracy 77 , 78 , 79 . In general, SS varies between 0 and 1, whereby higher SS values indicate improved prediction skill. Calculations of difference are based on the match-up displacement pair dataset. Using this dataset, all indicators are rasterized to a resolution of 1° \(\times\) 1°. The python-based library Cartopy was used to set map projections and map the evaluation results. The multivariate K-means clustering method was adopted to explicitly extract regions with similar biases; detailed statistics are given in a later section (Several compound parameters statistics). Observation data investigation The significance of difference assessments between observation and simulation relies on the number of Argo float observations in each statistical unit (1° \(\times\) 1°). The spatial coverage of Argo observations at different observation counts is shown in Supplementary Fig. 1 . 
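Taken together, the indicators defined in Eqs. (1)-(5) can be implemented compactly. The sketch below follows the formulas above; the function names and the direction-wrapping convention are our own choices.

```python
import numpy as np

def dod(v_obs, v_sim):
    """Difference of direction (Eq. 1), in degrees, wrapped to [-180, 180)."""
    d = np.degrees(np.arctan2(v_obs[1], v_obs[0])
                   - np.arctan2(v_sim[1], v_sim[0]))
    return (d + 180.0) % 360.0 - 180.0

def dov(v_obs, v_sim):
    """Difference of velocity magnitudes |v_obs| - |v_sim| (Eq. 2)."""
    return np.hypot(*v_obs) - np.hypot(*v_sim)

def pdov(v_obs, v_sim):
    """Velocity difference as a percentage of the observed speed (Eq. 3)."""
    return dov(v_obs, v_sim) / np.hypot(*v_obs) * 100.0

def skill_score(sep_dists, obs_lengths, n=1.8):
    """SS = 1 - c/n for c <= n, else 0 (Eq. 4), where
    c = sum(SD_i) / sum(L_i) over the N tracked days (Eq. 5)."""
    c = np.sum(sep_dists) / np.sum(obs_lengths)
    return 1.0 - c / n if c <= n else 0.0

# Observed flow of 10 cm/s due east versus a simulated 6 cm/s toward the
# northeast gives a -45 degree direction difference and a positive DOV
# (the observed float is faster than its simulated counterpart).
obs, sim = (10.0, 0.0), (6.0, 6.0)
angle = dod(obs, sim)    # -45 degrees
excess = dov(obs, sim)   # about 1.51 cm/s
ss = skill_score([5.0, 10.0, 15.0], [10.0, 20.0, 30.0])  # c = 0.5
```

Evaluated per match-up displacement pair and averaged into 1° cells, these quantities reproduce the rasterized indicator fields that feed the maps and the cluster analysis.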
Several compound parameters statistics To investigate the influence of the evaluation thresholds on the analysis, we computed the well-modelled statistics across compound evaluation thresholds. When the DOD and PDOV thresholds lie between 30–90° and 10–70%, respectively, the areas of the ocean classified as well-modelled comprise 0.27-59.85% of the total (Supplementary Table 1 ). Multivariate K-means clustering of normalized indicators (DOD (°), DOV (cm s-1), SD (km), SS (score)) was used to explore the linkages between indicators and their spatial distribution. Indicators were normalized to have a mean of zero and standard deviation of one. We chose 4 clusters because increasing the number to 5 produced two clusters with profiles and broad distributions similar to those of Cluster-1, as shown in Supplementary Table 2 , which also summarizes the overall profile and %Area for each cluster. Data availability All observation and model data that support the findings of this study are available as follows. The ANDRO (Argo New Displacements Rannou and Ollitrault) displacements are available online ( ) on the marine sciences data platform SEANOE (SEA scieNtific Open data Edition). The ECCO2 output is available online ( ). The OFES output is available online ( ). The CMEMS GLORYS12 reanalysis is delivered through the Copernicus Marine Environment Monitoring Service (CMEMS) available at . The DopAnV3R3-ini2007 velocity field output is available at . Code availability All analyses in this manuscript are reproducible; instructions and scripts can be found in the repository Middepth_currents_validation 80 ( ).
Ocean motion plays a key role in the Earth's energy and climate systems. In recent decades, ocean science has made great strides in providing general estimates of large-scale ocean motion. However, there are still many dynamic mechanisms that are not fully understood or resolved. Prof. Su Fenzhen's team at the Institute of Geographic Sciences and Natural Resources Research of the Chinese Academy of Sciences and their collaborators found that humans know less than 5% of the ocean currents at depths of 1,000 meters below the sea surface, with important implications for modeled predictions of climate change and carbon sequestration. Their findings were published in Nature Communications. The researchers used a displacement dataset of 842,421 observations produced from Argo floats from 2001 to 2020. Lagrangian velocities were computed near 1,000-meter depth, and several accuracy indicators were used to compare Argo float velocities with simulated values from global circulation models. Results showed that only 3.8% of the mid-depth oceans can be considered accurately modeled. "An important finding is that circulation energy in almost all of the world's oceans is underestimated. This is probably due to the poor resolution of high-frequency dynamics in ocean circulation models and the inadequacy of current solutions to sub-grid processes," said Prof. Su. "In the future, we expect to develop ocean circulation models that could more faithfully represent observed ocean currents through more intensive and qualified observations, more productive parameterization, finer model resolution, and in-depth theoretical analysis," he said. The study highlights the nature and extent of the mismatch between scientific knowledge and the actual ocean environment. It can help guide recommendations for more intensive observations and more accurate predictions to reduce the large and significant biases between models and observations.
10.1038/s41467-023-37841-x
Medicine
A bacteria likely to reduce the cardiovascular risks of one in two people
Supplementation with Akkermansia muciniphila in overweight and obese human volunteers: a proof-of-concept exploratory study, Nature Medicine, DOI: 10.1038/s41591-019-0495-2 , nature.com/articles/s41591-019-0495-2 Journal information: Nature Medicine
http://dx.doi.org/10.1038/s41591-019-0495-2
https://medicalxpress.com/news/2019-07-bacteria-cardiovascular-people.html
Abstract Metabolic syndrome is characterized by a constellation of comorbidities that predispose individuals to an increased risk of developing cardiovascular pathologies as well as type 2 diabetes mellitus 1 . The gut microbiota is a new key contributor involved in the onset of obesity-related disorders 2 . In humans, studies have provided evidence for a negative correlation between Akkermansia muciniphila abundance and overweight, obesity, untreated type 2 diabetes mellitus or hypertension 3 , 4 , 5 , 6 , 7 , 8 . Since the administration of A. muciniphila had never been investigated in humans, we conducted a randomized, double-blind, placebo-controlled pilot study in overweight/obese insulin-resistant volunteers; 40 were enrolled and 32 completed the trial. The primary end points were safety, tolerability and metabolic parameters (that is, insulin resistance, circulating lipids, visceral adiposity and body mass). Secondary outcomes were gut barrier function (that is, plasma lipopolysaccharides) and gut microbiota composition. In this single-center study, we demonstrated that daily oral supplementation with 10^10 A. muciniphila bacteria, either live or pasteurized, for three months was safe and well tolerated. Compared to placebo, pasteurized A. muciniphila improved insulin sensitivity (+28.62 ± 7.02%, P = 0.002), and reduced insulinemia (−34.08 ± 7.12%, P = 0.006) and plasma total cholesterol (−8.68 ± 2.38%, P = 0.02). Pasteurized A. muciniphila supplementation slightly decreased body weight (−2.27 ± 0.92 kg, P = 0.091) compared to the placebo group, and fat mass (−1.37 ± 0.82 kg, P = 0.092) and hip circumference (−2.63 ± 1.14 cm, P = 0.091) compared to baseline. After three months of supplementation, A. muciniphila reduced the levels of the relevant blood markers for liver dysfunction and inflammation while the overall gut microbiome structure was unaffected. In conclusion, this proof-of-concept study (clinical trial no.
NCT02637115 ) shows that the intervention was safe and well tolerated and that supplementation with A. muciniphila improves several metabolic parameters. Main To overcome the worldwide evolution of cardiometabolic diseases, research has increasingly focused its attention on interventions that target the gut microbiota 2 . Among commensal bacteria residing in the intestine, Akkermansia muciniphila has attracted growing interest for its health-promoting effects 9 . In rodents, treatment with A. muciniphila reduces obesity and related disorders, such as glucose intolerance, insulin resistance, steatosis and gut permeability 10 , 11 , 12 . Recently, in rodents, we serendipitously discovered that pasteurization of A. muciniphila enhances its beneficial properties on adiposity, insulin resistance and glucose tolerance 11 . However, translational evaluation of A. muciniphila for human investigation was hampered by the need for animal-derived compounds in the growth medium used to culture this bacterium. We circumvented this major issue by developing a synthetic medium compatible with human administration 11 . The main objectives of this exploratory study were (1) to evaluate the feasibility, safety and tolerance of A. muciniphila supplementation, and (2) to explore for the first time the metabolic effects of A. muciniphila supplementation in humans. The study was designed as an exploratory and proof-of-concept study for a first supplementation in humans. The primary outcomes were safety, tolerability (that is, hepatic and renal function, inflammation) and metabolic parameters (that is, insulin resistance, circulating lipids, visceral adiposity and body mass index (BMI)). The secondary outcomes were gut barrier function (that is, plasma lipopolysaccharides (LPS)/metabolic endotoxemia), gut microbiota composition and metabolites. 
In 2017, the first reported preliminary human data from this study, obtained using 5 volunteers per group, suggested that treatment with either placebo, two doses of live A. muciniphila (low dose of 10^9 bacteria per day or high dose of 10^10 bacteria per day) or pasteurized A. muciniphila (10^10 bacteria per day) was safe in individuals with excess body weight; no changes in safety parameters or reported adverse events were observed after 15 d of daily administration 11 . In the present study, we extended this randomized, double-blind, placebo-controlled proof-of-concept and feasibility study to 3 months of daily oral administration of A. muciniphila, either live or pasteurized, comparing the effects of the two forms at the highest dose tested, that is, 10^10 bacteria per day, in individuals exhibiting excess body weight (overweight or obese), insulin resistance and metabolic syndrome. Individuals were enrolled and underwent randomization to receive either a placebo, live A. muciniphila (10^10 bacteria per day) or pasteurized A. muciniphila (10^10 bacteria per day) as a supplement for 3 months, with the specific advice to keep to their normal dietary intake and physical activity during the study period (see flow chart in Extended Data Fig. 1 ). Although participants were randomized, we found that before starting supplementation (that is, at T0), participants receiving pasteurized cells exhibited significantly higher levels of insulin and lower insulin sensitivity than those in the placebo group (Supplementary Table 1 ). In the interest of safety, an early visit was scheduled 15 d after the start of supplementation. We found that both safety and tolerability were similar in the two groups receiving the different forms of A. muciniphila compared to the placebo group (Supplementary Tables 2 and 3 ), except for a higher white blood cell count (WBC) in the placebo and treated groups (Supplementary Table 2 ).
We further followed safety and tolerability parameters until three months after the start of supplementation and did not observe any adverse events (Supplementary Tables 4 and 5 ). In addition, compliance was higher than 99% in all groups (Supplementary Table 5 ). After 3 months, the placebo group exhibited a significant increase in fasting plasma insulin ( P < 0.05, T3 versus T0; Fig. 1a ), whereas participants receiving either form of A. muciniphila showed plasma insulin levels reduced by approximately 30% relative to the placebo group (Fig. 1a ). This effect was significant between the pasteurized A. muciniphila and placebo groups (Fig. 1a ). Fasting glycemia was not affected (Fig. 1b ); however, participants were not highly hyperglycemic at baseline (Supplementary Table 1 ). Fig. 1: Changes in parameters related to glucose metabolism and WBC. a , Insulinemia. b , Glycemia. c , Insulin resistance score. d , Insulin sensitivity. e , DPP4 activity. f , White blood cell count. Differential values (mean difference and mean difference from placebo) are expressed as the mean ± s.e.m., either as raw data or as percentages. The bars represent the mean change from baseline value per group, with their s.e.m. Mann–Whitney U -tests or unpaired t -tests were performed to compare the differential values of both treated groups versus the placebo group (intergroup changes), according to the distribution. The respective P values are indicated in the table below each plot; when the test is significant, the bars are marked with an asterisk. The lines represent the raw values before and after three months of supplementation. The distribution of values within each group for each timing is illustrated by a box-and-whisker plot. In the box plots, the line in the middle of the box is plotted at the median, and the inferior and superior limits of the box correspond to the 25th and the 75th percentiles, respectively.
The whiskers correspond to the minimum and maximum values. Matched-pairs Wilcoxon signed-rank tests or paired t -tests were performed to verify changes from baseline (intragroup changes), according to the distribution. When the difference is significant, a capped line is marked above the group concerned with the corresponding P value. Changes between 0 and 3 months across the 3 groups were analyzed with Kruskal–Wallis or one-way ANOVA tests according to the distribution; group-wise comparisons were performed using Bonferroni and Tukey’s corrections for multiple testing, respectively. When the difference is significant, a line is marked above the concerned groups with the corresponding P value. Placebo group, n = 11; pasteurized bacteria group, n = 12; live bacteria group, n = 9 for all parameters, except for WBC: placebo, n = 11; pasteurized bacteria group, n = 11; live bacteria group, n = 8. All tests were two-tailed. * P < 0.05. Full size image We also measured insulin sensitivity and resistance (homeostatic model assessment (HOMA) method) and found that insulin sensitivity was significantly reduced at T3 in the placebo group (Fig. 1c,d ). Both forms of A. muciniphila improved this parameter. Indeed, pasteurized A. muciniphila markedly and significantly improved the insulin sensitivity index by about 30% compared to the placebo group (Fig. 1d ) and live A. muciniphila significantly improved the insulin resistance score (Fig. 1c ). Hemoglobin A1c (HbA1c) was not modified by supplementation with A. muciniphila (Supplementary Table 4 ); however, this may be explained by the fact that participants did not have diabetes and had normal HbA1c at baseline (Supplementary Table 1 ). Besides its impact on incretins and glucose metabolism, the activity of the enzyme dipeptidyl peptidase 4 (DPP4) is thought to be involved in modulating inflammation. 
Indeed, several studies have shown a lower inflammatory tone when DPP4 inhibitors were used, thereby suggesting that this enzyme may contribute to improving glucose metabolism and lowering cardiometabolic risk by mechanisms other than modulating incretin levels 13 , 14 , 15 . In this study, we found that pasteurized A. muciniphila significantly lowered DPP4 activity at the end of the 3-month period compared to baseline (Fig. 1e ). This parameter remained stable in both the placebo and live A. muciniphila groups. Consistent with the hypothesis that this enzyme may contribute to improving glucose metabolism and lowering cardiometabolic risk by mechanisms other than modulating incretin levels, we did not find any significant changes in plasma glucagon-like peptide-1 (GLP-1) levels (Extended Data Fig. 2 ). WBC counts are elevated in obesity 16 and numerous very large cohort studies and meta-analyses have clearly linked elevated WBC counts with glucose intolerance or the risk of developing type 2 diabetes 17 . More recently, WBC counts were suggested as predictors for incident type 2 diabetes mellitus (T2DM) in obese individuals 18 , 19 , 20 . Therefore, in accordance with these observations, we measured WBC counts in the study groups. Interestingly, we found that WBC remained significantly increased compared to baseline and week 2 in the placebo group (Supplementary Table 2 and Fig. 1f ), whereas pasteurized A. muciniphila supplementation completely abolished this effect, resulting in significantly lower WBC counts in the pasteurized A. muciniphila group compared to the placebo group (Fig. 1f ). The magnitude of the differences between T0 and T3 or versus the placebo group (that is, 866 cells µl^−1 ) is clinically meaningful, since a difference of between 300 and 1,000 cells µl^−1 is considered clinically relevant 17 , 18 , 19 , 20 . Although C-reactive protein was not significantly changed (Supplementary Table 4 ), we measured other markers associated with cardiometabolic risk.
We found lower soluble CD40 ligand levels in the pasteurized versus the placebo group, but this effect did not reach significance ( P = 0.059; Extended Data Fig. 2a ). The chemokine growth-regulated oncogene/CXCL1 was decreased in the pasteurized A. muciniphila group at T3 versus T0 and versus the placebo group ( P = 0.055), whereas monocyte chemoattractant protein 1 (MCP1) decreased by 21% versus placebo but did not reach significance (Extended Data Fig. 2b,c ). Recent studies showed that A. muciniphila gavage reduces plasma cholesterol in rodents 11 , 21 and can also prevent the development of atherosclerosis 22 . We found that administration of pasteurized A. muciniphila significantly decreased total cholesterol by 8.68% compared to placebo (Fig. 2a ), whereas low-density lipoprotein (LDL) cholesterol was 7.53% lower and triglycerides were 15.71% lower, although these changes did not reach significance (Fig. 2b,c ). Interestingly, the magnitude of the effects on lipids observed was equivalent to that induced by dietary supplementation with phytosterols according to a recent meta-analysis 23 . Fig. 2: Changes in parameters related to lipid metabolism. a , Total cholesterol. b , LDL cholesterol. c , Triglycerides. The differential values (mean difference and mean difference from placebo) are expressed as the mean ± s.e.m., either as raw data or as percentages. The bars represent the mean change from baseline value per group, with their s.e.m. Mann–Whitney U -tests or unpaired t -tests were performed to compare the differential values of both treated groups versus the placebo group (intergroup changes), according to the distribution. The respective P values are indicated in the table below each plot; when the test is significant, the bars are marked with an asterisk. The lines represent the raw values before and three months after receiving treatment. The distribution of values within each group for each timing is illustrated by a box-and-whisker plot.
In the box plots, the line in the middle of the box is plotted at the median, and the inferior and superior limits of the box correspond to the 25th and the 75th percentiles, respectively. The whiskers correspond to the minimum and maximum values. Matched-pairs Wilcoxon signed-rank tests or paired t -tests were performed to verify changes from baseline (intragroup changes), according to the distribution; when the difference is significant, a capped line is marked above the group concerned with the corresponding P value. Changes between 0 and 3 months across the 3 groups were analyzed with a Kruskal–Wallis or one-way ANOVA test according to the distribution; group-wise comparisons were performed using Bonferroni and Tukey’s corrections for multiple testing, respectively. When the difference is significant, a line is marked above the groups concerned with the corresponding P value. Placebo group, n = 11; pasteurized bacteria group, n = 12; live bacteria group, n = 9 for all parameters. All tests were two-tailed. * P < 0.05. Full size image Numerous large cohort studies have linked raised activity of hepatic enzymes such as γ-glutamyltransferase (GGT), aspartate aminotransferase (AST) and alanine transaminase (ALT) to adverse changes in glucose and lipid metabolism, to the extent that those enzymes are considered inflammatory markers and risk factors for the development of insulin resistance and incident T2DM 24 , 25 , 26 , 27 . In rodents, several studies 12 , 28 , 29 , 30 have reported that supplementation with A. muciniphila reduces GGT, AST and ALT levels as well as hepatic steatosis. Strikingly, pasteurized A. muciniphila significantly reduced both GGT and AST levels after 3 months compared to baseline (Fig. 3b,c ), but not ALT (Fig. 3a ). Particularly, GGT levels were markedly and significantly decreased by about 24% in the pasteurized A.
muciniphila group compared to the T3 levels observed in the placebo group ( P = 0.009). None of these parameters were affected by supplementation with live A. muciniphila (Fig. 3a–c ). Fig. 3: Changes in hepatic and general enzymes. a , ALT activity. b , AST activity. c , γ-Glutamyltransferase activity. d , LPS activity. e , LDH activity. f , Creatine kinase activity. Differential values (mean difference and mean difference from placebo) are expressed as the mean ± s.e.m., either as raw data or as percentages. The bars represent the mean change from baseline value per group, with their s.e.m. Mann–Whitney U -tests were performed to compare the differential values of both treated groups versus the placebo group (intergroup changes), according to the distribution. The respective P values are indicated in the table below each plot; when the test is significant, the bars are marked with an asterisk. The lines represent the raw values before and after three months of supplementation. The distribution of values within each group for each timing is illustrated by a box-and-whisker plot. In the box plots, the line in the middle of the box is plotted at the median, and the inferior and superior limits of the box correspond to the 25th and the 75th percentiles, respectively. The whiskers correspond to the minimum and maximum values. Matched-pairs Wilcoxon signed-rank tests were performed to verify changes from baseline (intragroup changes), according to the distribution. When the difference is significant, a capped line is marked above the group concerned with the corresponding P value. Kruskal–Wallis analyses were used to compare changes between 0 and 3 months across the 3 groups according to the distribution. All group-wise comparisons were performed using Bonferroni’s correction for multiple testing. When the difference is significant, a line is marked above the groups concerned with the corresponding P value. 
Placebo group, n = 11; pasteurized bacteria group, n = 12; live bacteria group, n = 9 for all parameters except for creatine kinase: placebo group, n = 11; pasteurized bacteria group, n = 11; live bacteria group, n = 8. All tests were two-tailed. * P < 0.05. Full size image To further explore the potential mechanisms underlying the reduction of GGT and AST, we focused on plasma LPS. Indeed, numerous data obtained in humans suggest that translocation of endotoxins contributes to liver injury 31 , 32 , 33 as well as insulin resistance 32 , 34 . Moreover, we and others have shown that A. muciniphila reinforces gut barrier function and eventually reduces plasma LPS 10 , 11 , 22 , 29 . Therefore, we measured plasma LPS before and after A. muciniphila supplementation. Pasteurized A. muciniphila significantly decreased LPS compared to baseline, but also compared to the placebo group at T3 (Fig. 3d ). Thus, we speculated that such significant findings could be involved in the favorable metabolic changes observed, such as improved glucose metabolism and hepatic inflammatory markers and decreased WBC. It is worth noting that pasteurized A. muciniphila supplementation decreased serum lactate dehydrogenase (LDH) and creatine kinase levels at T3 versus T0; these two enzymes are considered valid markers of whole-body tissue damage and muscle-specific injury, respectively (Fig. 3e,f ). Since the gut microbiota has been linked with metabolism and cardiometabolic risk factors 2 , 35 , 36 , and A. muciniphila is linked with improved metabolic parameters 36 , 37 , we measured the levels of A. muciniphila at baseline and after supplementation (Extended Data Fig. 3a ). First, we found that the abundance of A. muciniphila was similar between groups at baseline, whereas supplementation significantly increased the quantity of A. muciniphila recovered in the feces, by 1.7 and 2.6 log in the pasteurized and live A. muciniphila groups, respectively (Extended Data Fig. 3a ).
Interestingly, baseline characterization of the fecal microbiome performed on the 3 groups showed that there was no significant difference between groups at baseline (permutational multivariate analysis of variance (MANOVA), R^2 = 0.066, P = 0.51; Extended Data Fig. 3b ). Moreover, at the end of the intervention, the difference in gut microbiome composition between the 3 groups was slightly higher than at baseline while still non-significant (permutational MANOVA, R^2 = 0.075, P = 0.18; Extended Data Fig. 3b ). We evaluated the alteration in microbiota composition from baseline to end point (pairing per individual) and found that none of the treatments induced a significant community-wide compositional change, although treatment with live bacteria had a slightly higher impact (partial distance‐based redundancy analysis (dbRDA), adjusted R^2 = 0.03, P = 0.095) than pasteurized (partial dbRDA, adjusted R^2 = 0.02, P = 0.14) and placebo (partial dbRDA, adjusted R^2 = 0.01, P = 0.66). Therefore, these results demonstrate that supplementation with either pasteurized or live A. muciniphila did not affect the overall structure of the gut microbiome. This finding is in line with previous data obtained in rodents, which showed that the gut microbiome of mice supplemented with live A. muciniphila was not significantly modified 10 . We also observed that the administration of pasteurized A. muciniphila slightly decreased body weight by approximately 2.27 kg ( P = 0.09), fat mass by approximately 1.37 kg ( P = 0.09) and hip circumference by 2.63 cm ( P = 0.09) (Fig. 4a–c ) compared to the placebo group. Waist circumference was decreased by approximately 1.56 cm, but this change did not reach statistical significance (Fig. 4d ).
These differences are all of clinical relevance in the context of metabolic disorders, and we cannot rule out that the improvement of the different metabolic parameters is associated with the impact of supplementation on body weight, fat mass and hip circumference. Fig. 4: Changes in anthropometric parameters. a , Body weight. b , Fat mass. c , Hip circumference. d , Waist circumference. Differential values (mean difference and mean difference from placebo) are expressed as the mean ± s.e.m., either as raw data or as percentages. The bars represent the mean change from baseline value per group, with their s.e.m. Mann–Whitney U -tests were performed to compare the differential values of both treated groups versus the placebo group (intergroup changes) according to the distribution. The respective P values are indicated in the table below each plot. The lines represent the raw values before and after three months of supplementation. The distribution of values within each group for each timing is illustrated by a box-and-whisker plot. In the box plots, the line in the middle of the box is plotted at the median, and the inferior and superior limits of the box correspond to the 25th and the 75th percentiles, respectively. The whiskers correspond to the minimum and maximum values. Matched-pairs Wilcoxon signed-rank tests were performed to verify changes from baseline (intragroup changes) according to the distribution and, when drawn, the capped line above the group concerned shows the corresponding P value. Kruskal–Wallis analysis was used to compare changes between 0 and 3 months across the 3 groups according to the distribution. All group-wise comparisons were performed using Bonferroni’s correction for multiple testing. Placebo group, n = 11; pasteurized bacteria group, n = 12; live bacteria group, n = 9 for all parameters except for hip and waist: placebo group, n = 10; pasteurized bacteria group, n = 12; live bacteria group, n = 9. All tests were two-tailed.
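The summary statistics quoted throughout the figure legends (the mean change from baseline per group with its s.e.m., and the mean difference from placebo) reduce to a few lines of arithmetic. A minimal sketch with invented example values; only the formulas, not the data, come from the text:

```python
import math

def mean_sem(values):
    """Mean and standard error of the mean (sample variance, n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    return mean, math.sqrt(var / n)

def change_from_baseline(t0, t3):
    """Per-participant change (T3 - T0), summarized as group mean and s.e.m."""
    deltas = [after - before for before, after in zip(t0, t3)]
    return mean_sem(deltas)

# Invented paired values (e.g. insulinemia, arbitrary units)
placebo_t0, placebo_t3 = [10.0, 12.0, 11.0, 13.0], [13.0, 14.0, 13.0, 15.0]
treated_t0, treated_t3 = [14.0, 15.0, 13.0, 16.0], [10.0, 11.0, 10.0, 12.0]

d_placebo, sem_placebo = change_from_baseline(placebo_t0, placebo_t3)
d_treated, sem_treated = change_from_baseline(treated_t0, treated_t3)
diff_from_placebo = d_treated - d_placebo  # "mean difference from placebo"
```

The pairing per participant is the important detail: the legends' intragroup tests (Wilcoxon signed-rank, paired t) operate on these per-participant deltas, while the intergroup tests compare the deltas of a treated group against those of the placebo group.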
Full size image Our study has several limitations. Although most of the primary outcomes were reached, we did not find significant changes in visceral adiposity and BMI. However, we did not use specific and accurate methods such as dual-energy X-ray absorptiometry to precisely estimate the quantity of visceral versus subcutaneous fat. Also, this pilot and exploratory study enrolled a small number of individuals; thus, the study was not powered to deliver definitive conclusions on the end points related to metabolic parameters. In addition, physical activity levels and precise caloric intake were not determined using dedicated measures. However, all the groups were investigated blindly; therefore, we may argue that any confounding factors were probably equally distributed between the different groups. Finally, we observed a comparable apparent worsening of the phenotype of the placebo group over time, as noted in other studies 38 , 39 , 40 . In conclusion, this proof-of-concept prospective study shows the feasibility of culturing and administering A. muciniphila to humans. Our data unequivocally show that administration of a daily dose as high as 10^10 cells of A. muciniphila is safe in the longer term (that is, 3 months). This study provides a promising start for the development of future clinical interventions with an appropriate design to confirm and extend our findings, which show the safety and impact of oral supplementation with A. muciniphila in overweight or obese insulin-resistant individuals. Methods Participants and study design This study was designed as a randomized, placebo-controlled, parallel-group pilot study. Between December 2015 and December 2017, 32 overweight/obese individuals (BMI > 25 kg m^−2 ) aged between 18 and 70 years volunteered to participate and were enrolled in the study.
Eligible participants had been diagnosed with metabolic syndrome according to the National Cholesterol Education Program Adult Treatment Panel III definition, that is, at least three of the five following criteria: fasting glycemia > 100 mg dl^−1 ; blood pressure ≥ 130/85 mmHg or antihypertensive treatment; fasting triglyceridemia ≥ 150 mg dl^−1 ; high-density lipoprotein (HDL) cholesterol < 40 mg dl^−1 for men, < 50 mg dl^−1 for women; and/or waist circumference > 102 cm for men, > 88 cm for women, and whose insulin sensitivity was < 75% (refs. 41 , 42 ), evaluated using HOMA modeling of insulin sensitivity (HOMA Calculator, University of Oxford). Written informed consent was obtained from each participant and the study protocol was approved by the Commission d’Ethique Biomédicale Hospitalo-facultaire of the Université catholique de Louvain. The study was registered at ClinicalTrials.gov as trial no. NCT02637115. Participants were recruited at the Cliniques universitaires Saint-Luc in Brussels. A total of 160 participants aged 18–70 years were screened. Forty-five overweight or obese individuals with insulin resistance and metabolic syndrome were eligible for inclusion. In this group, five declined to participate. Therefore, 40 individuals were enrolled and received either a placebo, live A. muciniphila (10^10 bacteria per day) or pasteurized A. muciniphila (10^10 bacteria per day) as a supplement for 3 months, with the specific advice to keep to their normal dietary intake and physical activity during the study period (see flow chart in Extended Data Fig. 1 ). To prevent any viability or shelf-life issues, the A. muciniphila bacteria were delivered to participants frozen in glycerol. The placebo contained the same amount of glycerol. The viable count of the A. muciniphila bacteria delivered to participants did not change during the entire intervention (data not shown).
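The ATP III inclusion rule quoted above (at least three of the five criteria) translates directly into code. A hedged sketch: the dictionary keys are invented for the example, while the thresholds are the ones listed in the text:

```python
def meets_metabolic_syndrome(p):
    """True if at least 3 of the 5 NCEP ATP III criteria are met.
    `p` is a dict with invented field names; thresholds follow the study text."""
    male = p["sex"] == "M"
    criteria = [
        p["fasting_glycemia_mg_dl"] > 100,
        p["systolic_bp"] >= 130 or p["diastolic_bp"] >= 85 or p["antihypertensive_treatment"],
        p["triglycerides_mg_dl"] >= 150,
        p["hdl_mg_dl"] < (40 if male else 50),          # sex-specific HDL threshold
        p["waist_cm"] > (102 if male else 88),          # sex-specific waist threshold
    ]
    return sum(criteria) >= 3

# Invented participants: the first meets exactly 3 criteria, the second none
p1 = dict(sex="M", fasting_glycemia_mg_dl=105, systolic_bp=135, diastolic_bp=80,
          antihypertensive_treatment=False, triglycerides_mg_dl=160,
          hdl_mg_dl=45, waist_cm=100)
p2 = dict(sex="F", fasting_glycemia_mg_dl=95, systolic_bp=120, diastolic_bp=80,
          antihypertensive_treatment=False, triglycerides_mg_dl=120,
          hdl_mg_dl=55, waist_cm=80)
```

Note that the study additionally required insulin sensitivity below 75% by HOMA modeling, which this sketch does not encode.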
Of the 40 participants, 7 had to be excluded before study completion: 1 in the placebo group; 1 in the pasteurized bacteria group; and 5 in the live bacteria group. Three early terminations were due to personal reasons (that is, mainly the difficulty of attending the nine scheduled visits at the hospital) and four were due to untimely use of antibiotics during the study. One additional participant in the placebo group was excluded from the analysis because of a protocol violation. This resulted in a total of 32 participants: a placebo group of 11 participants; a pasteurized bacteria group of 12 participants; and a live bacteria group of 9 participants. These 32 participants completed the 3-month supplementation. Participants were allocated to one of the treatment arms following a randomized block design with a block size of eight. The Microsoft Excel randomization function was used to generate the allocation sequence. Participants and physicians were both blinded to treatment allocation. Apart from the placebo group (administered an equivalent volume of sterile PBS containing glycerol), participants were assigned to ingest either 10^10 cells of live A. muciniphila or 10^10 cells of pasteurized A. muciniphila in PBS containing glycerol daily for 3 months. Packages containing either glycerol (placebo) or glycerol and A. muciniphila (pasteurized and live bacteria groups) were given to participants every 2 weeks during follow-up visits, with the instruction to take one dose every morning on an empty stomach. Participants were instructed to keep the packages in the freezer compartment of a home refrigerator until a dose was needed. A temperature sensor was also provided to all participants to monitor the temperature during transport and home storage (at −20 °C).
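The allocation scheme described above (randomized blocks of eight across three arms; the study itself used Excel's randomization function) can be sketched as a permuted-block generator. This is an illustrative reimplementation, not the study's actual procedure; since eight slots cannot divide evenly among three arms, the sketch rotates the two leftover slots across blocks:

```python
import random

def permuted_blocks(arms, block_size, n_blocks, seed=0):
    """Permuted-block randomization: each block is a shuffled list of arm labels,
    filled as evenly as block_size allows, with leftover slots cycling across blocks."""
    rng = random.Random(seed)
    sequence = []
    offset = 0  # rotates which arms receive the leftover slots
    for _ in range(n_blocks):
        base, rem = divmod(block_size, len(arms))
        block = [a for a in arms for _ in range(base)]                # even share
        block += [arms[(offset + i) % len(arms)] for i in range(rem)]  # leftovers
        offset += rem
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# 5 blocks of 8 -> an allocation list for 40 participants
seq = permuted_blocks(["placebo", "live", "pasteurized"], block_size=8, n_blocks=5)
```

Blocking keeps the arms near-balanced at every interim point of recruitment, which matters in a small trial where simple randomization could easily produce lopsided groups.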
Anaerobic fermentation, concentration and packaging of the bacteria and placebo were performed according to the hazard analysis and critical control points quality system using food-grade medium as described previously 11 . Pasteurization consisted of heat treatment of fresh A. muciniphila at 70 °C for 30 min. Exclusion criteria were: presence of acute or chronic progressive or chronic unstabilized diseases; alcohol consumption (>2 glasses per day); previous bariatric surgery; any surgery in the 3 months before the study or planned for the 6 months after enrolling; pregnancy or pregnancy planned in the 6 months after enrolling; regular physical activity (>30 min of sports activities 3 times a week); consumption of dietary supplements (omega-3 fatty acids, probiotics, prebiotics, plant stanols/sterols) in the month before the study; inflammatory bowel disease or irritable bowel syndrome; diabetic gastrointestinal autonomic neuropathy (such as gastroparesis or reduced gastrointestinal motility); consumption of more than 30 g of dietary fiber per day; consumption of vegetarian or unusual diets; lactose intolerance or milk protein allergy; gluten intolerance; current treatment with medications influencing the parameters of interest (glucose-lowering drugs such as metformin, DPP4 inhibitors, GLP-1 receptor agonists, acarbose, sulfonylureas, glinides, thiazolidinediones, sodium-glucose cotransporter-2 inhibitors, insulin, lactulose, consumption of antibiotics in the 2 months before or during the study, glucocorticoids, immunosuppressive agents, statins, fibrates, orlistat, cholestyramine or ezetimibe); and baseline HbA1c > 7.5%. At baseline and at the end of the intervention, anthropometric measurements were assessed, including body weight (kg) and BMI (kg m^−2 ). Waist and hip circumference (cm) were measured using a flexible tape. Fat mass (kg) was assessed using electric bioimpedance analysis (Body Composition Analyzer, type BC-418 MA; TANITA).
Blood samples were collected at baseline and at the end of the intervention, after an overnight fast (8 h minimum). Based on the analytes of interest, different tubes were used: sodium fluoride-coated tubes for fasting glycemia and insulinemia; lithium-heparin-coated tubes for enzymatic activities; and LPS-free heparin sulfate-coated tubes for LPS measurement (BD Vacutainer glass sodium heparin tubes, catalog no. 368480). One set of tubes was sent directly to the hospital laboratory for the following blood analyses: fasting glycemia; insulinemia; HbA1c (%); total cholesterol; LDL cholesterol (calculated); HDL cholesterol; triglycerides; GGT; ALT; AST; LDH; creatine kinase; and WBC. The other tubes were brought to the research laboratory and kept on ice. Plasma was immediately isolated from whole blood by centrifugation at 4,200 g for 10 min at 4 °C and stored at −80 °C for further analyses. For safety purposes, participants were asked to come back to the study hospital 2 weeks after the beginning of the intervention for blood sampling and clinical examination, allowing comparison of clinical parameters with baseline values. Blood sample analysis included C-reactive protein, urea, creatinine, glomerular filtration rate, AST, ALT, GGT, LDH, creatine kinase, various coagulation parameters and hematologic profiling. Forty participants were included in this analysis at 2 weeks. The same measurements were performed at 3 months for the 32 participants who completed the intervention. Compliance and presence of undesired side effects were monitored every 2 weeks during follow-up visits, when participants were asked to fill in a questionnaire. We listed the adverse events most likely to occur during the study in the questionnaire. We also invited participants to point out any other adverse event that either newly emerged or worsened. The list of side effects included nausea, flatulence, bloating, cramps, borborygmus and gastric reflux.
If adverse event(s) occurred, participants had to specify the number of days during which the effect(s) occurred. Each adverse event was expressed as the percentage of days of occurrence over the total number of days of the intervention. Compliance was also assessed according to participants’ daily records and the number of returned packages. Compliance was calculated as the percentage of days on which packages were actually ingested over the total number of days of the intervention. Participants were instructed to maintain their usual diets, levels of physical activity, current treatments and lifestyles throughout the intervention period. Quality control measures were also applied during the protocol. Each participant received a 2-week supply of bacteria (14 bags, plus 1 spare bag in case of difficulties in attending the hospital that would otherwise interrupt supplementation). Thus, participants came to the clinic every 2 weeks to receive a new 2-week supply of bacteria or placebo (15 bags in total). At the hospital, bags were stored at −80 °C before being delivered to participants. To detect any potential temperature deviation during transport from the laboratory to home and during storage in the refrigerator at home, we provided each participant with a temperature-monitoring device (TempTale4; Sensitech) used throughout the period of supplementation. In addition, we randomly tested the viability of A. muciniphila (for the live bacteria group) by culturing the contents of bags maintained at −80 °C as well as bags maintained at −20 °C, thereby validating the viability of cells at −20 °C. Biochemical analyses Insulin sensitivity and resistance were both analyzed using HOMA. This test consists of taking three blood samples at 5 min intervals for each individual. 
Insulinemia and glycemia were determined for each sample and the mean values were then entered in the HOMA2 calculator (v.2.3.3, available from ) to estimate insulin sensitivity (%) and insulin resistance 43 , 44 . Insulinemia was evaluated by immunoanalysis. Glycemia was assessed by an enzymatic test (hexokinase) with ultraviolet detection (Cobas 8000; Roche Diagnostics). HbA1c (%) was determined by high-performance liquid chromatography (G8 HPLC Analyzer; Tosoh). C-reactive protein was assessed by immunoturbidimetry (Cobas 8000). Total cholesterol, HDL cholesterol, triglycerides and GGT were measured by an enzymatic colorimetric method (Cobas 8000). LDL cholesterol concentrations were estimated using the Friedewald formula. AST and ALT were assessed by enzymatic assay (International Federation of Clinical Chemistry and Laboratory Medicine) without activation by pyridoxal phosphate (Cobas 8000). A kinetic enzymatic test was performed to evaluate urea; creatinine was assessed by a kinetic colorimetric test (Jaffe method) (Cobas 8000). The glomerular filtration rate was estimated according to the CKD-EPI equation. The parameters related to muscle function (creatine kinase and LDH) were assessed by ultraviolet test (Cobas 8000). All these tests were performed at the hospital laboratory. Blood LPS endotoxin activity was measured with the Endosafe nexgen-MCS (Charles River Laboratories) based on the limulus amebocyte lysate kinetic chromogenic method, which measures color intensity directly related to the endotoxin concentration in a sample. Plasma was diluted 1/50 to 1/100 with endotoxin-free buffer (Charles River Laboratories) to minimize interference in the reaction and heated for 15 min at 70 °C. Each sample was diluted with endotoxin-free limulus amebocyte lysate reagent water (Charles River Laboratories) and treated in duplicate. Two spikes for each sample were included in the determination. All samples were validated for recovery and coefficient of variation. 
The lower limit of detection was 0.005 EU ml −1 . Growth-related oncogene, sCD14L and MCP1 were assessed in each blood sample in duplicate using a multiplex assay (MILLIPLEX MAP Human Cytokine/Chemokine Magnetic Bead Panel; Merck Millipore) and measured using Luminex technology (BioPlex; Bio-Rad Laboratories) according to the manufacturer’s instructions. Active plasma GLP-1 levels were determined by sandwich ELISA (Merck Millipore). DPP4 activity was assessed by quantifying the production of p-nitroanilide (pNA) from glycine-proline-pNA (Sigma-Aldrich) using a standard curve of free pNA. For this, plasma samples were incubated for 30 min with glycine-proline-pNA at 37 °C and enzymatic activity was measured by kinetic analysis (380 nm) (SpectraMax M2; Molecular Devices). Fecal microbiome analysis A. muciniphila was quantified with quantitative PCR as described in Everard et al. 10 . Each assay was performed in duplicate in the same run. The cycle threshold of each sample was then compared with a standard curve (performed in triplicate) made by diluting genomic DNA (fivefold serial dilution) (DSMZ). The taxonomic composition of fecal microbiota was determined by DNA extraction of fecal samples stored frozen (−80 °C) and library preparation for dual-index 16S ribosomal RNA gene sequencing as described in Vandeputte et al. 45 . Demultiplexing of the sequencing data was performed using LotuS v.1.565, followed by quality control and sequence variant matrix building using the DADA2 (ref. 46 ) pipeline v.1.6.0 with taxonomic annotation by RDP classifier 47 v.2.12 using default parameters. Statistical analyses of microbiota composition were performed in R using the packages vegan (version 2.5-3) 48 and CoDaSeq (version 0.99-3) 49 . As recommended for microbiota composition data analysis, the abundance matrix was centered log-ratio-transformed (CoDaSeq:codaseq.clr) using the minimum proportional abundance detected for each taxon for the imputation of zeros. 
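The centered log-ratio (CLR) step just described can be sketched in Python with NumPy. This is an illustrative stand-in for CoDaSeq's codaseq.clr, and zeros are handled here with a simple pseudocount rather than the study's minimum-proportional-abundance imputation:

```python
import numpy as np

def clr_transform(counts: np.ndarray, pseudocount: float = 0.5) -> np.ndarray:
    """Centered log-ratio transform of a samples x taxa abundance matrix.

    Each row is log-transformed and centered on its geometric mean, so
    Euclidean distances on the result correspond to Aitchison distances.
    Zeros are replaced by a pseudocount (a simplification of the
    minimum-proportional-abundance imputation used in the study).
    """
    x = counts.astype(float) + pseudocount
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Invented 2-sample x 3-genus abundance matrix
abund = np.array([[10, 90, 0], [40, 40, 20]])
clr = clr_transform(abund)
# Rows of a CLR-transformed matrix sum to zero by construction
```

Downstream ordinations (dbRDA, principal coordinates analysis) can then use plain Euclidean distance on this matrix.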
Samples with >10,000 reads ( n = 63) and genera with relative abundance >0.001 ( n = 99) were included in the data analysis. Differences in microbiota profiles between treatment arms at baseline and at end point were evaluated by permutational MANOVA. Microbiota alteration from baseline to end point was evaluated per treatment arm, with pairing by participant, by performing a distance-based redundancy analysis (partial dbRDA, centered log-ratio-transformed matrix, Euclidean distance) using time point as an explanatory variable while partialling out intraindividual similarity. Fecal microbiota dissimilarity between samples was represented by genus-level principal coordinates analysis with Aitchison distance (Euclidean distance on the centered log-ratio-transformed matrix) using the phyloseq (version 1.26.0), vegan (version 2.5-3) and ggplot2 (version 3.1.0) R packages. Confidence ellipses for each of the six sample groups (corresponding to the three different treatment arms at baseline or at end point) were drawn at the 0.80 confidence level assuming a Student’s t -distribution. The intervention effect is symbolized by colored arrows, with direction and length corresponding to the shift in group centroid from baseline to end point for each treatment arm. The arrow lengths were multiplied by five for visual clarity and the three arrows were re-centered to the centroid of all baseline samples (three arms confounded). Statistical analysis The normal distribution of continuous variables, expressed as raw data or as the difference between the two main time points (T0 and T3 months), was tested using the Shapiro–Wilk test. The appearance of box plots and quantile–quantile plots was also taken into account. All the following statistical tests were chosen in accordance with the normality tests. 
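The normality screening above (Shapiro–Wilk, supported by graphical checks) can be sketched with SciPy; the change scores below are invented, not study data:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(42)
# Invented per-participant T3 - T0 weight changes (kg) for one group
weight_change = rng.normal(loc=-1.0, scale=2.0, size=30)

# Shapiro-Wilk: the null hypothesis is that the data are normally distributed
stat, p = shapiro(weight_change)

# If normality is rejected (p < 0.05), fall back to rank-based tests
# (Mann-Whitney U, Wilcoxon signed-rank, Kruskal-Wallis)
use_parametric = p >= 0.05
```

In practice the decision would also weigh the box plots and quantile–quantile plots mentioned in the text, not the p value alone.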
For all parameters, and within each group, the intervention effect was calculated by subtracting the value obtained at T0 from the value obtained at T3 months for each participant. We termed the resulting differential value the ‘mean difference’. The ‘mean difference from placebo’ was then calculated by subtracting the mean difference calculated for the placebo group from the mean difference calculated for the active group. The ‘mean difference from placebo’ was expressed as raw data and as a percentage. Unpaired t -tests or non-parametric Mann–Whitney U -tests were used to assess the significance of differences between the mean differences of the two treated groups and the mean difference of the placebo group. According to the distribution, either paired t -tests or non-parametric, two-tailed, matched-pairs Wilcoxon signed-rank tests were performed to identify differences between T0 and T3 within each group. One-way analysis of variance (ANOVA) or Kruskal–Wallis tests were used to compare baseline parameters and the differential values across the three groups, according to the distribution; P values were adjusted using Bonferroni’s correction. For baseline characteristics, the mean and s.d. were used to present the raw data of normally distributed variables, while the median and interquartile range were used to report non-normally distributed variables. The data in the safety tables were expressed as the mean and s.d. Data presented in the figures were expressed as the mean and s.e.m. Statistical analyses were conducted using SPSS v.23.0 (IBM Corporation). All tests were two-tailed and significance was set at P < 0.05. Graphics were drawn with Prism software v.7.0 (GraphPad Software). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of this study are available upon request. 
All figures include individual data points, giving direct access to the raw data. The 16S sequencing datasets generated during the current study are available from the European Genome-Phenome Archive ( ) under accession no. EGAS00001003585 .
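The ‘mean difference from placebo’ defined in the statistical analysis reduces to simple arithmetic; a minimal sketch with invented values (not trial data):

```python
def mean_difference(values_t0, values_t3):
    """Within-group intervention effect: mean of per-participant (T3 - T0)."""
    deltas = [t3 - t0 for t0, t3 in zip(values_t0, values_t3)]
    return sum(deltas) / len(deltas)

def mean_difference_from_placebo(active_t0, active_t3, placebo_t0, placebo_t3):
    """Active-group mean difference minus placebo-group mean difference."""
    return mean_difference(active_t0, active_t3) - mean_difference(placebo_t0, placebo_t3)

# Invented body weights (kg) for two participants per group
effect = mean_difference_from_placebo(
    active_t0=[100.0, 95.0], active_t3=[98.0, 93.0],
    placebo_t0=[102.0, 97.0], placebo_t3=[102.5, 97.5],
)
```

Here the active group loses 2 kg on average while the placebo group gains 0.5 kg, so the mean difference from placebo is −2.5 kg.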
In 2007, Patrice Cani (FNRS-WELBIO researcher) and his team at the Louvain Drug Research Institute of the University of Louvain (UCLouvain), in close collaboration with Willem de Vos, professor at Wageningen University, discovered the beneficial effects of an intestinal bacterium, Akkermansia muciniphila, which can moderate the development of obesity and type 2 diabetes in mice. In 2017, the team discovered (again in mice) that a pasteurized form of Akkermansia provides even greater protection than the living bacterium against various cardiovascular disease risk factors such as insulin resistance, hypercholesterolemia and the storage of fat in adipose tissue. Following these discoveries, the UCLouvain team, in collaboration with the Cliniques Universitaires Saint-Luc, developed a clinical study to administer the bacteria to humans. For this, it was necessary to develop the capacity to produce the bacterium in large quantities and to ensure that the tests would pose no risk to the participants. The UCLouvain researchers administered Akkermansia to overweight or obese volunteers, all displaying insulin resistance (type 2 pre-diabetes) and metabolic syndrome, in other words, having several elevated risk factors for cardiovascular disease. The volunteers were randomly divided into three groups (placebo, live bacteria and pasteurized bacteria) and were asked not to change their dietary habits or their physical activity. Akkermansia was provided as a nutritional supplement. The primary goal of this UCLouvain study was to demonstrate the feasibility of daily ingestion of Akkermansia for 3 months, without risk. Clara Depommier and Amandine Everard, UCLouvain researchers, observed excellent compliance (the supplements were easy to ingest) and tolerance (there were no side effects) in the groups taking live or pasteurized bacteria. The conclusions are clear: the tests in humans confirm what had already been observed in mice. 
Ingestion of the (pasteurized) bacterium prevented the deterioration of the health status of the subjects (pre-diabetes, cardiovascular risks). Even better, the researchers observed a decrease in liver inflammation markers, a slight decrease in the subjects' body weight (2.3 kg on average) as well as lowered cholesterol levels. In contrast, the metabolic parameters (insulin resistance, hypercholesterolemia) of the placebo subjects continued to deteriorate over time. Who does it benefit? According to the WHO, cardiovascular disease accounts for roughly one in three deaths worldwide. In Western countries, one in two people is overweight and has increased cardiovascular risk. This UCLouvain research could help limit these risks and therefore potentially benefit half of the population, if properly used. In conclusion, this pilot study demonstrates the feasibility of administering (pasteurized) Akkermansia bacteria to humans in the form of a food supplement and reports encouraging results on the effectiveness of Akkermansia-based dietary supplements in reducing cardio-metabolic risk factors. These results pave the way for a large-scale study, to confirm and elaborate on these first results, and also support the commercialization of the bacteria as a food supplement by 2021. The study is published in Nature Medicine.
10.1038/s41591-019-0495-2
Medicine
Autoimmune diseases in ALS patients linked to genetic mutation
Madelyn E. McCauley et al. C9orf72 in myeloid cells suppresses STING-induced inflammation, Nature (2020). DOI: 10.1038/s41586-020-2625-x Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2625-x
https://medicalxpress.com/news/2020-08-autoimmune-diseases-als-patients-linked.html
Abstract Amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) are neurodegenerative disorders that overlap in their clinical presentation, pathology and genetic origin. Autoimmune disorders are also overrepresented in both ALS and FTD, but this remains an unexplained epidemiologic observation 1 , 2 , 3 . Expansions of a hexanucleotide repeat (GGGGCC) in the C9orf72 gene are the most common cause of familial ALS and FTD (C9-ALS/FTD), and lead to both repeat-containing RNA and dipeptide accumulation, coupled with decreased C9orf72 protein expression in brain and peripheral blood cells 4 , 5 , 6 . Here we show in mice that loss of C9orf72 from myeloid cells alone is sufficient to recapitulate the age-dependent lymphoid hypertrophy and autoinflammation seen in animals with a complete knockout of C9orf72 . Dendritic cells isolated from C9orf72 −/− mice show marked early activation of the type I interferon response, and C9orf72 −/− myeloid cells are selectively hyperresponsive to activators of the stimulator of interferon genes (STING) protein—a key regulator of the innate immune response to cytosolic DNA. Degradation of STING through the autolysosomal pathway is diminished in C9orf72 −/− myeloid cells, and blocking STING suppresses hyperactive type I interferon responses in C9orf72 −/− immune cells as well as splenomegaly and inflammation in C9orf72 −/− mice. Moreover, mice lacking one or both copies of C9orf72 are more susceptible to experimental autoimmune encephalitis, mirroring the susceptibility to autoimmune diseases seen in people with C9-ALS/FTD. Finally, blood-derived macrophages, whole blood and brain tissue from patients with C9-ALS/FTD all show an elevated type I interferon signature compared with samples from people with sporadic ALS/FTD; this increased interferon response can be suppressed with a STING inhibitor. 
Collectively, our results suggest that patients with C9-ALS/FTD have an altered immunophenotype because their reduced levels of C9orf72 cannot suppress the inflammation mediated by the induction of type I interferons by STING. Main C9orf72 -knockout mice demonstrate hyperplasia of lymphoid organs and age-related systemic inflammation; however, depending on their environment, they range from having no tissue injury and a normal lifespan 7 , to autoantibody production with renal injury 8 and fatal spontaneous autoimmune disease 9 . In the immune system, C9orf72 is most highly expressed in myeloid cells, especially dendritic cells (DCs) 7 . DCs are antigen-presenting cells of the innate immune system that regulate the adaptive immune response, playing an important part in autoimmunity and cancer immunity 10 , 11 . To assess the effect of loss of C9orf72 on DCs, we analysed the activation state of splenic DCs from C9orf72 −/− mice at 8 weeks, when minimal inflammation and lymphoid hyperplasia are present, and at 8 months, when the mice have pronounced markers of systemic inflammation (Extended Data Fig. 1 ). Although the development of DC subtypes was normal in C9orf72 −/− mice (Extended Data Fig. 1a ), we observed increased expression of the costimulatory molecule CD86 on CD11b DCs, becoming more prominent with age (Fig. 1a, b ). DCs are crucial for regulating T cell homeostasis, activation and tolerance, and T cells were reported to be activated in aged C9orf72 −/− mice 8 . We found that T cell development in the thymus was normal in C9orf72 −/− mice (Extended Data Fig. 1f ), but that numbers of memory and effector memory CD4 and CD8 T cells were increased even at 8 weeks, and became more pronounced with age (Fig. 1c, d ). Fig. 1: DC development and T-cell activation in young and aged C9orf72 − /− mice. a , b , Mean fluorescent intensity (MFI) of CD86 in CD11c + splenic DCs from young ( a ; n = 5) and aged ( b ; n = 6) mice. C9, C9orf72 . 
c , d , Flow cytometric analysis of naive (CD62L + ), memory (CD62L + CD44 + ) and effector memory (CD44 + ) CD4 T cells and CD8 T cells from 8-week-old ( c ; n = 5) and 8-month-old ( d ; n = 6) mice. e , Left, gross images of splenomegaly in C9orf72 - Cx3cr1 Cre mice at 5 months. Right, spleen weights (in milligrams) normalized to body weight (in grams) at 5 months ( n = 7). f , C9orf72-Cx3cr1 Cre mice have decreased numbers of naive CD4 and CD8 T cells, but increased numbers of CD4 memory and CD8 effector memory T cells ( n = 7). g , GSEA of significantly upregulated pathways in CD8 T cells and CD4 T cells of C9orf72-Cx3cr1 Cre mice versus wild-type mice at 3 months (false discovery rate (FDR) less than 0.05) ( n = 4). h , i , Transcripts per million (TPM) values from RNA-seq, showing non-cell-autonomous elevation of ISGs in CD8 T cells (wild-type, n = 4; C9 −/− , n = 3; C9orf72 fl/fl ; Cx3CR1 Cre , n = 3) and CD4 T cells (wild-type, n = 4; C9 −/− , n = 3; C9orf72 fl/fl ; Cx3CR1 Cre , n = 4) in C9orf72-Cx3cr1 Cre mice. a – d , f , One-way analysis of variance (ANOVA). e , Unpaired, two-tailed Student’s t -test. h , i , Two-way ANOVA. a – f , h , i , Data shown as means ± s.e.m. Each dot represents one mouse. Source data Full size image Given that C9orf72 is also expressed at low levels in lymphocytes, we crossed mice containing a C9orf72 conditional null allele ( C9orf72 fl/fl ) 12 to two different myeloid-specific Cre driver lines, Cx3cr1 Cre and LysM Cre , to determine whether the phenotype was cell autonomous to myeloid cells. The loss of C9orf72 selectively in myeloid populations (primarily monocytes, tissue macrophages and DCs 13 ) in C9orf72 fl/fl ; Cx3cr1 Cre mice completely recapitulated the splenomegaly, altered expression of costimulatory molecules in DCs, and T-cell activation seen in C9orf72 complete-knockout mice (Fig. 1e, f and Extended Data Fig. 2c, d ). Similar findings were observed with C9orf72 fl/fl ; LysM Cre mice (Extended Data Fig. 
3 ), although to a milder degree, potentially because the LysM Cre driver is expressed in fewer DCs (less than 10%) than the Cx3cr1 Cre driver (more than 90%) 13 . To probe the non-cell-autonomous phenotypes of adaptive immune cells further, we sorted splenocyte populations from control, C9orf72 −/− and C9orf72 fl/fl ; Cx3cr1 Cre mice. Pathway analysis of CD4 and CD8 T cells from C9orf72 fl/fl ; Cx3cr1 Cre mice showed selective upregulation of type I interferon signalling, suggesting that these cells were responding to type I interferons produced by myeloid cells (Fig. 1g–i and Extended Data Fig. 4 ). To better characterize the factors that drive the activation of adaptive immune cells by C9orf72 -deficient DCs, we performed RNA sequencing (RNA-seq) on isolated splenic classical DCs from young wild-type and C9orf72 −/− mice. Principal component analysis (PCA) showed separation of the two genotypes, with C9orf72 −/− DCs having increased expression of a variety of inflammatory cytokines, including interleukin (IL)-6, IL-10 and IL-12β (Extended Data Fig. 5a,b ). Heat map analysis with hierarchical clustering confirmed a strong upregulation of canonical type I interferon response genes in C9orf72 −/− DCs from three of four mice, with no apparent difference in NFκB signalling (Fig. 2a, b ). Fig. 2: Loss of C9orf72 in myeloid cells drives increased production of type I interferons through STING. a , b , Heat maps showing the expression (RNA-seq; maximum to minimum, normalized) of IFN-response genes ( a ) and NF-κB-response genes ( b ) in C9 −/− versus wild-type splenic classical DCs. c – e , Quantitative reverse transcription and polymerase chain reaction (qRT–PCR) analysis of Ifnb1 ( c ), Mx1 ( d ) and Cxcl10 ( e ) in BMDMs from wild-type and C9 −/− mice following stimulation with cGAMP (5 μg ml −1 ) (technical replicates representative of four biological replicates). 
f , Western blot analysis of BMDMs stimulated with cGAMP (10 μg ml −1 ) for 0 h, 6 h, 9 h, 24 h, or 24 h with bafilomycin (Baf), showing delayed STING degradation in C9 −/− versus wild-type cells (images representative of four experiments). Asterisks indicate molecular-weight marker (Supplementary Fig. 1 ). g , Left, western blot analysis of phosphorylated STING (pSTING, phosphorylated at serine 365) at 0 h versus overnight (24 h) treatment with cGAMP (10 μg ml −1 ), in wild-type versus C9 −/− BMDMs. Right, quantification of pSTING from n = 3 biological replicate experiments. h , Western blot analysis of BMDMs stimulated with cGAMP (10 μg ml −1 ) for 0 h, 6 h, 24 h or 24 h with bafilomycin, showing LC3-II expression (top blot, arrow) in wild-type versus C9 −/− BMDMs, with tubulin loading control (bottom blot) (representative of four biological replicates). i , qRT–PCR analysis of Ifnb1 expression in wild-type and C9 −/− BMDMs, with and without the STING antagonist H151 (1 μM), following stimulation with cGAMP for 6 h (technical replicates representative of three biological replicates). j , Left, spleen weight (in milligrams) normalized to body weight (in grams) at 8–10 weeks ( n = 7); right, gross representative images. k , qRT–PCR analysis of Trem2 and Il10 in total splenocytes of wild-type ( n = 6), STING-deficient goldenticket (Gt −/− ; n = 3), C9 −/− ( n = 5) and C9 −/− :Gt −/− ( n = 6) mice. l , RNA-seq analysis of ISG expression from splenic CD11b cells in 3-month-old wild-type ( n = 4), C9 −/− ( n = 3), and C9 −/− /Gt −/− ( n = 4) mice. Two-way ANOVA. c – e , k , Unpaired two-tailed Student’s t -test. i , j , One-way ANOVA. e – g , k , Data shown as means ± s.d. i , m , Data shown as means ± s.e.m. g , j – l , Each dot represents one mouse. For gel source data, see Supplementary Fig. 1 . 
Source data Full size image To identify the potential drivers of the hyperactive type I interferon response, we cultured wild-type and C9orf72 −/− bone-marrow-derived macrophages (BMDMs) and stimulated them with several agonists of Toll-like receptors (TLRs) and cytosolic receptors. Activation of TLR3, TLR4 and TLR7 signalling elicited similar interferon (IFN)-β production between wild-type and C9orf72 −/− BMDMs (Extended Data Fig. 5c–e ). However, stimulation with cyclic GMP–AMP (cGAMP)—an agonist of the cGAMP synthase (cGAS)–STING pathway, which senses cytosolic double-stranded DNA 14 —led to hyperactive expression of IFNβ and interferon-stimulated genes (ISGs) in C9orf72 −/− BMDMs compared with wild-type BMDMs (Fig. 2c–e ). STING signalling is regulated by trafficking to the lysosome, and blocking autophagy leads to sustained activation of the type I interferon response and inflammation 15 , 16 , 17 . Given that C9orf72 is involved in endosomal trafficking, autophagy and lysosomal function 18 , we hypothesized that STING degradation is disrupted in C9orf72 −/− cells. Indeed, we observed that the degradation of STING was delayed after stimulation with cGAMP, and that STING phosphorylated at serine 365 persisted in C9orf72 −/− BMDMs (Fig. 2f, g ), indicating that more STING was present to promote interferon activation via interferon regulatory factor 3 (IRF3), and that it was marked for lysosomal degradation. Basal levels of LC3-II (a standard marker of autophagosomes) in C9orf72 −/− BMDMs were increased compared with controls, and we observed that the recently reported induction of LC3-II lipidation by cGAMP activation of STING 19 was further increased in C9orf72 −/− BMDMs (Fig. 2h ). This difference was normalized by treatment with the autophagy inhibitor bafilomycin A1, suggesting that it was driven in part by decreased lysosomal degradation of LC3-II in C9orf72-deficient cells (Fig. 2h ). 
These data support a model in which diminished lysosomal degradation of STING through autophagy leads to increased levels of STING in late endosomes, serving as a platform for persistent interferon induction 19 . In support of this idea, the hyperactive IFNβ induction by C9orf72 −/− BMDMs in response to cGAMP was completely blocked by a STING antagonist 20 (Fig. 2i ). To further determine to what degree the elevated type I interferon signature in C9orf72 −/− mice was driven by STING in vivo, we crossed C9orf72 −/− mice with STING-deficient goldenticket mice (STING gt/gt ). We observed a rescue of splenomegaly and the myeloid activation marker TREM2 in C9orf72 −/− ;STING gt/gt mice compared with C9orf72 −/− mice (Fig. 2j, k ). To further characterize the rescue, we carried out RNA-seq using splenocytes from wild-type, C9orf72 −/− and C9orf72 −/− ;STING gt/gt mice. We observed that the elevated type I interferon response in isolated splenic CD11b myeloid cells, B cells, and CD4 and CD8 T cells was completely rescued by the deletion of STING (Fig. 2l and Extended Data Fig. 6 ). By contrast, myeloid and T cell activation markers were not fully rescued in the C9orf72 −/− ;STING gt/gt mice (Extended Data Fig. 7 ). The incomplete rescue of systemic inflammatory phenotypes in C9orf72 −/− ;STING gt/gt mice may occur because STING deletion itself promotes hyperactive TLR signalling and inflammation 21 , or because C9orf72 can also regulate non-STING-related pathways 22 . Regardless, these findings support the idea that increased STING activity in C9orf72 −/− myeloid cells drives the chronically elevated type I interferon production observed in these cells, and is a key part of the systemic autoinflammation in C9orf72 −/− mice. We observed strong expression of IL-12β in C9orf72 −/− DCs (Extended Data Fig. 5b ), which—together with IFNα/β—can promote the polarization of T cells towards a T helper 1 (T H 1) phenotype 23 . 
In support of this, we found increased IL-2 and IFNγ expression in stimulated splenic T cells from C9orf72 −/− mice (Extended Data Fig. 1h ), and RNA-seq of isolated splenic T cells showed increased expression of markers of activation and T H 1 polarization (Extended Data Fig. 8 ). There was also a nonsignificant trend towards elevated IL-17 expression, which drives the production of T H 17 cells (Extended Data Fig. 8 ). This supports the idea that the stimulation of adaptive immune cells by chronic STING/type I interferon activation in C9orf72 −/− mice leads to a propensity to develop autoimmune disease, similar to the effects of chronic activation of STING in mice lacking TREX1 (ref. 24 ). We therefore investigated whether C9orf72 −/− mice in our colony—which demonstrate autoinflammation without spontaneous autoimmune disease—are more susceptible to experimental autoimmune encephalitis (EAE), which has a strong T H 1 and T H 17 component 25 . We observed a graded gene-dose-dependent increase in clinical severity and spinal-cord inflammation in mice lacking either one or both copies of C9orf72 (Fig. 3a, b and Extended Data Fig. 9a–c ), together with increased infiltrating IFNγ-producing T H 1-polarized T cells (Extended Data Fig. 9d–i ). The increased EAE susceptibility in heterozygous mice is important, as C9orf72 expression is only partially lost in tissues from repeat-expansion carriers; it also indicates that an environmental stressor is required to unmask the phenotype, as heterozygous DCs did not show baseline differences in immune-cell activation markers. This supports a model in which chronic activation of STING in DCs promotes the activation of T H 1-polarized T cells, which infiltrate the nervous system during EAE and drive the enhanced inflammatory response. This finding is of interest given the report that repeat-expansion mutations in C9orf72 are overrepresented in patients with the rare combination of ALS and multiple sclerosis, for which EAE is used as a model 26 . 
Notably, the chronic activation of STING seen in myeloid cells and the activation of adaptive immune cells seen in C9orf72 −/− mice is distinct from the acute activation of STING in EAE, which mitigates severity by directly promoting T-cell regulatory responses and suppressing T H 1 responses 27 . Fig. 3: C9orf72 −/− mice are more susceptible to EAE and have increased antitumour immunity compared with wild-type mice. a , b , EAE clinical score ( a ) and weight ( b ) over the course of EAE for wild-type ( n = 13), C9 +/− ( n = 13) and C9 −/− ( n = 10) mice. c , Representative images of lungs from wild-type, C9 +/− and C9 −/− mice 14 days after inoculation with B16 melanoma. d , e , C9 −/− mice have a decreased number of tumour colonies ( d ; wild-type, n = 8; C9 +/− , n = 8; C9 −/− , n = 9) and decreased melanin content after inoculation with B16 melanoma cells for 14 days ( e ; wild-type, n = 6; C9 +/− , n = 8; C9 −/− , n = 9); A 570 and A 405 , absorbance at 570 nm and 405 nm respectively. One-way ANOVA. a , b , d , e , Data shown as means ± s.e.m. d , e , Each dot represents one mouse. Source data Full size image Interestingly, studies have also suggested an altered frequency of various cancers in patients with ALS 28 . DCs have a central role in maintaining the tone of the immune system, with a propensity towards autoimmunity also being associated with enhanced antitumour immunity 10 , 11 . We therefore examined a model of antitumour immunity in C9orf72 −/− mice, measuring the tumour burden after intravenous injection of mouse B16 melanoma cells. We observed a C9orf72 dose dependent decrease in B16 melanoma tumour burden in the lungs, with mice lacking either one or both copies of C9orf72 being more resistant than wild-type mice (Fig. 3c–e ); this response was accompanied by an increase in the number of activated T cells in the lungs and spleen after B16 inoculation (Extended Data Fig. 10 ). 
These data are consistent with a prior study showing that STING-mediated sensing of tumour DNA is critical for priming antitumour CD8 effector T cells 29 . Our data similarly suggest that enhanced STING/type I interferon responses to tumour-derived DNA in C9orf72 −/− DCs more effectively drive cytotoxic antitumour T cells. In summary, these findings indicate that the altered immunophenotype that is observed in C9orf72 −/− mice (or following loss of even one copy of C9orf72 ) leads to enhanced susceptibility to autoimmune disease and increased antitumour immunity. To assess whether a similar altered immunophenotype exists in myeloid cells from C9orf72 repeat-expansion carriers, we examined blood monocyte-derived macrophages (MDMs) from normal controls, patients with sporadic ALS and patients with C9-ALS using RNA-seq. Gene set enrichment analysis (GSEA) showed marked upregulation of pathways related to IFNα/β signalling in C9-ALS MDMs versus sporadic ALS MDMs, overlapping nearly identically with the upregulated pathways seen in immune cells from C9orf72 −/− mice (Fig. 4a–c ; Fig. 1g ). We looked to confirm these findings in a larger validation set, examining whole-blood RNA-seq data from patients with either sporadic ALS ( n = 259) or C9-ALS ( n = 20). We confirmed the previously reported lower expression of C9orf72 in C9-ALS carriers (Fig. 4d ) ( P = 3.37 × 10 −7 , Mann–Whitney U -test), and again observed a significant upregulation of type I interferon signalling on GSEA in patients with C9-ALS versus sporadic ALS (Fig. 4e ) (FDR of less than 0.05). Fig. 4: Patients with C9orf72 repeat expansion in ALS have an enhanced type I interferon signature in peripheral myeloid cells that can be suppressed by a STING antagonist. a , GSEA of RNA-seq for MDMs from patients with C9-ALS ( n = 5) versus sporadic ALS ( n = 5) (FDR less than 0.05). 
b , Reads per kilobase of transcript per million mapped reads (RPKMs) for the indicated type I interferon-stimulated genes in MDMs from C9-ALS, sporadic ALS and normal controls ( n = 5). One-way ANOVA. c , qRT–PCR validation of the interferon-stimulated genes in b ( n = 3). One-way ANOVA. d , Whole-blood RNA-seq from patients with C9-ALS ( n = 20), showing decreased C9orf72 expression compared with patients with sporadic ALS ( n = 259). Sporadic ALS: n = 259; minimum = −0.722; maximum = 0.697; whiskers = [−0.562, 0.625]; box = [−0.147, 0.174]; median = 0.0214. C9-ALS: n = 20; minimum = −0.779; maximum = 0.355; whiskers = [−0.779, 0.211]; box = [−0.551, −0.215]; median = −0.425. e , Upregulated pathways from RNA-seq of whole blood from patients with C9-ALS versus sporadic ALS (FDR less than 0.05). f , Upregulated inflammatory pathways for cerebellar tissue from patients with C9-ALS ( n = 8) versus sporadic ALS ( n = 10) (FDR less than 0.05). g , qRT–PCR analysis of IFNβ production by microglia isolated from wild-type ( n = 4) and C9 −/− ( n = 4) mouse brains after stimulation with cGAMP. Unpaired one-tailed Student's t -test. h , qRT–PCR analysis of the ISG mRNAs for MX1 and STAT1 in PBMCs from patients with sporadic ALS and C9-ALS, with and without treatment with the STING inhibitor H151 (1 μM) ( n = 3). Paired, one-tailed Student's t -test. i , Table showing transcripts per kilobase million (TPMs) for the indicated interferon-stimulated genes from RNA-seq of MDMs from two C9orf72 patients, with and without treatment with H151 (for 6 h). Blue, percentage decrease; red, percentage increase. b , c , g , h , Data shown as means ± s.e.m. Each dot represents one sample. To assess whether this genotype-driven inflammatory signature was present in nervous tissue, we analysed transcriptome data from a previously published RNA-seq dataset from patients with ALS 30 .
We examined cerebellar tissue to avoid the variability of inflammation seen in actively degenerating regions, such as the frontal cortex. We again observed decreased levels of C9orf72 expression and upregulation of the type I interferon response (by GSEA) in the cerebellum of patients with C9-ALS versus sporadic ALS (Fig. 4f ). To determine whether this elevated type I interferon signature could be from microglial cells, we examined IFNβ production by isolated mouse microglia after cGAMP stimulation, observing a hyperactive response to STING activation comparable to that seen in peripheral macrophages from C9orf72 −/− mice (Fig. 4g ). Finally, to determine whether the elevated type I interferon signalling observed in C9-ALS myeloid cells is driven by STING, we treated patient peripheral blood mononuclear cells (PBMCs) or MDMs with a STING inhibitor. We observed that the STING inhibitor H151 did not alter basal ISG expression in PBMCs from patients with sporadic ALS, but did consistently suppress ISG expression in C9-ALS PBMCs (Fig. 4h ). We observed a similar suppression of ISGs across their broader signature by carrying out RNA-seq of H151-treated MDMs from patients with C9-ALS (Fig. 4i ). These findings strongly support the idea that patients with C9-ALS/FTD have a genetically determined immunophenotype, characterized by hyperactive type I interferon signalling, that can be detected in either blood or brain tissues across multiple independent datasets, and is driven at least partly by STING activation. In summary, C9orf72 in myeloid cells—including DCs—is essential for maintaining immune homeostasis, with loss of C9orf72 promoting hyperactive production of type I interferons and adaptive immune activation, including enhanced autoimmunity and antitumour immunity. The hyperactive type I interferon response in C9orf72 −/− DCs is mitigated by blocking STING, which was also recently implicated in inflammation in models of Parkinson’s disease 31 . 
Of note, DC-specific knockout of Tbk1 —another ALS/FTD gene involved in the cGAS–STING pathway—produces a similar immunophenotype, with mice developing systemic interferon-driven inflammation, an increased propensity for autoimmune disease and cancer resistance, supporting a convergence of C9orf72 and TBK1 signalling pathways in ALS/FTD 32 . However, the paradoxical nature of elevated type I interferon signalling in TBK1-deficient DCs suggests that TBK1 does not act solely as a downstream effector of STING 32 . One of our key findings is that the increased type I interferon signature is present in myeloid cells, whole blood and brain tissue from patients with C9-ALS, serving as a potential biomarker of C9orf72 function in these individuals. We hypothesize that decreased C9orf72 expression in people with C9-ALS/FTD promotes this enhanced interferon production, altering their response to environmental factors such as trauma or infection, and may influence the subsequent development of ALS, FTD and autoimmunity. Moreover, the loss-of-function effects on microglia and peripheral DCs may exacerbate the toxic gain-of-function manifestations of C9orf72 expansion in neurons or other cell types. Methods No statistical methods were used to predetermine sample size. The experiments were not randomized. Mice All mice were housed in pathogen-free facilities under 12-h light–dark cycles with access to food and water ad libitum. Temperatures were set to 74 ± 2 ° F with humidity of 30–70%. C9orf72 −/− mice were as described 7 and wild-type mice were purchased from Jackson Laboratories. C9orf72 fl/fl mice were provided by the Pasterkamp laboratory 12 and were crossed with Cx3cr1 Cre (B6J.B6N(Cg)- Cx3cr1 tm1.1(cre)Jung /J) and Lyz2 Cre (B6.129P2- Lyz2 tm1(cre)Ifo /J) animals, purchased from Jackson Laboratories, to obtain Cx3cr1 Cre ; C9orf72 fl/fl and Lyz2 Cre ; C9orf72 fl/fl mice. 
STING-deficient goldenticket (STING gt/gt ) mice were obtained from Jackson Laboratories (C57BL/6J-Tmem173gt/J). C9orf72 −/− mice were crossed with STING gt/gt mice to obtain C9orf72 −/− ; STING gt/gt mice. All mice have the nuclear background of C57BL/6J mice. Mice were sex- and age-matched. For all experiments, mice were grouped according to genotype. For EAE and B16 melanoma models, genotypes were randomly separated into experimental groups. Researchers were blinded when scoring EAE and counting the tumour burden in the B16 melanoma model; otherwise, the investigators were not blinded to allocation during experiments and outcome assessment. Husbandry and behavioural tests were conducted in accordance with the protocols described by the National Institutes of Health (NIH)’s Guide for the Care and Use of Animals and were approved by the Cedars-Sinai Institutional Animal Care and Use Committees (IACUC number 8161). Flow cytometry and FACS Antibodies used for flow cytometry analysis are shown in Supplementary Table 1 (1:200 dilution). Single-cell suspensions were depleted of red blood cells (RBCs) using RBC lysis buffer (Sigma). Cells were incubated in Fc block (anti-CD16/anti-CD32 antibodies) before staining to prevent nonspecific antibody binding. An LSRFortessa cell analyser (BD Bioscience) was used for flow cytometry and data were analysed with BDFacs Diva and FlowJo. DCs (CD11c + PDCA1 + CD8 + CD11b + ), B cells (CD19 + ), CD4 T cells (CD3 + CD4 + ), CD8 T cells (CD3 + CD8 + ) and CD11b cells were purified from splenocytes by fluorescence-activated cell sorting (FACS) using a FACSAria II cell sorter (BD Biosciences). Gating strategies can be found in the Supplementary Information . 
Antibodies Antibodies are as follows: phycoerythrin (PE)-conjugated anti-CD4 (BioLegend catalogue number 100408; BD Biosciences 563106); anti-CD8α (BioLegend 100722); anti-CD44 (BD Biosciences 103011); anti-I-Ab (MHC II) (BioLegend 116416); anti-CD40 (BD Biosciences 561846); anti-CD80 (BD Biosciences 565820); anti-CD86 (BioLegend 105008); anti-CD62L (BD Biosciences 5605070); anti-CD11c (BioLegend 117310); anti-CD11b (BioLegend 101263, 101206); anti-PDCA1 (BioLegend 127008); anti-CD3 (BioLegend 100311); anti-tumour necrosis factor (TNF) (BioLegend 506306); anti-IFNγ (BioLegend 505805); anti-IL-17 (BioLegend 506917); anti-Foxp3 (eBioscience 17-5773-80B); anti-CD19 (BioLegend 115508); low-endotoxin, azide-free (LEAF) purified anti-mouse CD3 (BioLegend 100314); LEAF purified anti-mouse CD28 (BioLegend 102112); propidium iodide solution (BioLegend 79997); Fc block 2.4G2 cell supernatant (American Type Culture Collection (ATCC) HB-197; Research Resource Identification Protocol (RRID) AB_2103740). T-cell activation and cytokine production Single-cell suspensions of total splenocytes were depleted of RBCs (RBC lysis buffer, Sigma). CD4 T cells were isolated using an EasySep mouse isolation kit (StemCell Technologies). For stimulated wells, 96-well plates were precoated with 2 ng μl −1 of anti-CD3 antibodies overnight at 4 °C. Wells were washed three times with phosphate-buffered saline (PBS) and 1 × 10 6 CD4 T cells were plated in 100 μl per well along with 2 ng μl −1 of anti-CD28 antibody. Supernatants were collected after 72 h and enzyme-linked immunosorbent assays (ELISAs) were run for IFNγ, IL-2, IL-4 and IL-17 (BioLegend). Regulatory T cells and cytokine staining Regulatory T cells Cells were collected and washed with 1 ml of staining buffer (0.045 g sodium azide plus 5% fetal bovine serum (FBS) in PBS) and resuspended in Fc block for 20 min, followed by incubation with surface staining antibodies at 4 °C.
Intracellular staining of Foxp3 was carried out using a Foxp3/transcription factor staining buffer set (eBioscience catalogue number 00-5523). In brief, fixation/permeabilization buffer was added and cells were incubated for 60 min at room temperature in the dark. Cells were washed with 1× permeabilization buffer and intracellular antibodies were added overnight. The following day, cells were fixed and analysed by flow cytometry with an LSRFortessa cell analyser. TNF staining Total splenocytes were stimulated with lipopolysaccharide (LPS) overnight. GolgiPlug (a protein-transport inhibitor) was added to the wells for 3 h. Cells were collected and resuspended in Fc block for 10 min, and then incubated with surface staining antibodies at 4 °C for 15 min. Cells were washed and resuspended in cytofix/cytoperm (BD Biosciences 554714) overnight. The next day, cells were resuspended in rat serum for 15 min at room temperature and intracellular antibodies were added for another 15 min. Cells were washed, fixed and analysed via flow cytometry with an LSRFortessa cell analyser. Generation and stimulation of BMDMs Bone marrow cells were isolated from femurs and tibias of mice, and cultured for 7 days in RPMI medium including 10% FBS and 1% penicillin–streptomycin–glutamine with 50 ng ml −1 human macrophage colony-stimulating factor (hM-CSF; Peprotech). Cells were plated at 350,000 per 700 μl and stimulated with LPS (100 ng ml −1 ), poly I:C (10 ng ml −1 ), CpG (1 μg ml −1 ) or cGAMP (5 μg ml −1 unless otherwise stated). The STING antagonist H151 (Cayman Chemicals) was added 30 min before stimulation. PBMC and MDM differentiation PBMCs were isolated from human blood samples collected in BD Vacutainer CPT tubes and centrifuged at a relative centrifugal field (RCF) of 1,600 for 20 min. Plasma and PBMCs were collected and centrifuged (300 g for 10 min) to isolate the PBMCs. 
PBMCs were plated overnight and the STING antagonist H151 (Cayman Chemicals) was added for 6 h before RNA was collected. For MDMs, CD14 + cells were isolated from PBMCs using magnetic CD14 beads (Miltenyi Biotec) according to the manufacturer's instructions, then cultured in Iscove's modified Dulbecco's medium (IMDM) plus 10% FBS plus human macrophage colony-stimulating factor (hM-CSF; 50 ng ml −1 ) for 7 days before RNA was collected. For STING-inhibition experiments, 1 μM H151 (Cayman Chemicals) was added on day 7 for 6 h before RNA was collected. Isolation and stimulation of microglia Microglia isolation Wild-type and C9 −/− mice were first perfused with PBS/heparin; then, their brains were isolated and dissociated using a neural tissue dissociation kit (NTDK; Miltenyi Biotec) and GentleMACS dissociator (‘NTDK brain' setting). The lysate was collected and passed through a 70-μm strainer to obtain a single-cell suspension. Next, myelin was removed from the cells by incubating with magnetic myelin beads (myelin removal beads II, human, mouse and rat; Miltenyi Biotec) using an autoMACS separator. Cells were then incubated with CD11b magnetic beads (CD11b (microglia) microbeads, human and mouse; Miltenyi Biotec) and sorted using an autoMACS. CD11b cells were counted, plated (at 200,000 cells per well) and cultured in microglia complete media containing Dulbecco's modified Eagle medium (DMEM)/F-12, GlutaMAX with HEPES (Invitrogen), 10% FBS, 100 μg ml −1 penicillin/streptomycin, 0.25 μg ml −1 fungizone, 10 ng ml −1 recombinant mouse M-CSF (R&D Systems), 10 ng ml −1 recombinant mouse granulocyte macrophage colony-stimulating factor (GM-CSF; R&D Systems) and 50 ng ml −1 TGF-β1 (Miltenyi Biotec) for 6 days before stimulation. Microglia stimulation Six days after culturing, microglia were stimulated with 10 μg ml −1 cGAMP for 8 h. After stimulation, cells were lysed in lysis buffer and RNA was isolated using a Qiagen RNeasy Micro kit.
RNA quality was determined using a NanoDrop and 200 ng of RNA was used to make complementary DNA. Isolation of splenic immune cells Splenic immune cells were isolated by mechanical disruption of spleens in PBS with 0.4% EDTA and 0.5% FBS. The cell suspension was spun down at 1,600 rpm for 4 min and resuspended in RBC lysis buffer (Sigma-Aldrich) for 2.5 min. Cells were then washed, resuspended in Fc block for 10 min, and incubated with anti-CD11c and anti-plasmacytoid dendritic cell antigen 1 (PDCA1) antibodies (for DCs), anti-CD3 and anti-CD4/CD8 antibodies (for T cells), anti-CD11b antibodies (for myeloid cells) and anti-CD19 antibodies (for B cells) at 4 °C for 15 min, then washed and purified by flow cytometry using a FACSAria II flow cytometer (BD Biosciences). RNA isolation, qRT–PCR and western blotting RNA was isolated using a PureLink RNA Mini kit (Invitrogen). RNA was reverse-transcribed to cDNA with oligo(dT)s using the Promega reverse transcriptase system, and analysed using a SYBR Green master mix (Applied Biosystems). Mouse primers were as follows. For Ifnb1 , forward 5′-AGCTCCAAGAAAGGACGAACAT-3′ and reverse 5′-GCCCTGTAGGTGAGGTTGATCT-3′; for Mx1 , forward 5′-AAACCTGATCCGACTTCACTTCC-3′ and reverse 5′-TGATCGTCTTCAAGGTTTCCTTGT-3′; for Cxcl10 , forward 5′-CCAAGTGCTGCCGTCATTTTC-3′ and reverse 5′-GGCTCGCAGGGATGATTTCAA-3′; expression was normalized to mouse 18S rRNA (forward 5′-GATGGTAGTCGCCGTGCC-3′ and reverse 5′-GCCTGCTGCCTTCCTTGG-3′). For Trem2 , forward 5′-GACCTCTCCACCAGTTTCTCC-3′ and reverse 5′-TACATGACACCCTCAAGGACTG-3′; for IL-10 , forward 5′-CCAGAGCCACATGCTCCTAGA-3′ and reverse 5′-GGTCCTTTGTTTGAAAGAAAGTCTTC-3′; expression was normalized to mouse actin (forward 5′-AGGTATCCTGACCCTGAAG-3′ and reverse 5′-GCTCATTGTAGAAGGTGTGG-3′). Human primers were as follows.
For MX1 , forward 5′-GGTGGTCCCCAGTAATGTGG-3′ and reverse 5′-CGTCAAGATTCCGATGGTCCT-3′; for STAT1 , forward 5′-TGTATGCCATCCTCGAGAGC-3′ and reverse 5′-AGACATCCTGCCACCTTGTG-3′; for IFI44L , forward 5′-TTGTGTGACACTATGGGGCTA-3′ and reverse 5′-GAATGCTCAGGTGTAATTGGTTT-3′; for ISG15 , forward 5′-GAGGCAGCGAACTCATCTTT-3′ and reverse 5′-AGCATCTTCACCGTCAGGTC-3′; for IFI27 , forward 5′-GTGGCCAAAGTGGTCAGG-3′ and reverse 5′-CCAATCACAACTGTAGCAATCC-3′; expression was normalized to RPL13A (forward 5′-CCTGGAGGAGAAGAGGAAAGAGA-3′ and reverse 5′-TTGAGGACCTCTGTGTATTTGTCAA-3′) or B2M (forward 5′-TGCTGTCTCCATGTTTGATGTATCT-3′ and reverse 5′-TCTCTGCTCCCCACCTCTAAGT-3′). Data shown are technical replicates, with each experiment being repeated in the laboratory with biological replicates three to four times. For western blots, equal numbers of cells were lysed in 1× NuPAGE sample buffer (2.5% β-mercaptoethanol, BME), transferred to nitrocellulose membranes, and probed with primary antibodies for LC3 (Novus Biologicals catalogue number NB100-2220, 1:1,000), STING (Cell Signaling Technology 13647, 1:1,000), phospho-STING Ser 365 (Cell Signaling Technology 72971, 1:500), tubulin (Sigma-Aldrich T6074, 1:1,000) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH; Sigma-Aldrich G8795, 1:5,000). The proteins were detected using a LI-COR system, blocking buffer and secondary antibodies (1:15,000). RNA-seq of mouse splenic DCs Splenic DC RNA quality was assessed using a Bioanalyzer (Agilent) and quantified via Qubit fluorometric quantification (Thermo Fisher). RNA-seq libraries were generated using 200 ng total RNA as input for the TruSeq Stranded mRNA library prep kit (Illumina) according to the manufacturer's protocol, and samples were indexed using TruSeq RNA single indexes (Illumina). Library preps were analysed using a Bioanalyzer (Agilent) and quantified via Qubit fluorometric quantification (Thermo Fisher).
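The qRT–PCR normalization to housekeeping genes described above (18S rRNA or actin for mouse; RPL13A or B2M for human) is conventionally quantified with the 2^(–ΔΔCt) method. The sketch below is an illustration of that standard calculation only; the function name and Ct values are hypothetical and are not data from this study.

```python
# Hedged sketch of the standard 2^-(delta delta Ct) relative-quantification
# calculation used when a target gene is normalized to a housekeeping gene.
# All Ct values below are hypothetical.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene in a sample versus a control sample,
    each normalized to a reference (housekeeping) gene."""
    delta_ct_sample = ct_target - ct_reference        # normalize the sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize the control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Target amplifies 2 cycles earlier (after normalization) in the treated
# sample than in the control, i.e. ~4-fold upregulation.
fold = relative_expression(ct_target=22.0, ct_reference=18.0,
                           ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(fold)  # 4.0
```

Lower Ct means earlier amplification and therefore higher starting template, which is why the exponent is negated.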
Quantified libraries were normalized, pooled, and sequenced on a NextSeq 500 sequencer (Illumina) using the single-end 75-nucleotide setting. Raw sequencing reads were demultiplexed, and FASTQ files were aligned to the mouse genome (mm10 genome assembly) using Tophat v2.1.1 and Bowtie v2.3.2. BAM files were indexed with Samtools v1.6 and annotated using Partek software v7.17.0918 to generate RPKM values for each gene. RNA-seq library preparation and GSEA RNA quality was assessed using a Bioanalyzer (Agilent) and quantified via Qubit fluorometric quantification (Thermo Fisher). RNA-seq libraries were generated using 200 ng total RNA as input for the TruSeq stranded mRNA library prep kit (Illumina) according to the manufacturer's protocol, and samples were indexed using TruSeq RNA single indexes (Illumina). Library preps were analysed using a Bioanalyzer (Agilent) and quantified via Qubit fluorometric quantification (Thermo Fisher). Quantified libraries were normalized, pooled, and sequenced on a NextSeq 500 sequencer (Illumina) using the single-end 75-nucleotide setting. Raw sequencing reads were demultiplexed, and FASTQ files were used to generate estimated transcript counts against the mouse transcriptome (mm10) via Salmon v0.8.2. TPM values summed to the gene level were generated using the R Bioconductor package DESeq2. Median TPM values were calculated within each cell type. Genes with a median TPM value of less than 0.5 within each cell type were discarded. The unions of the remaining genes from each cell type were combined, and the signal-to-noise metric (the difference in mean expression divided by the sum of standard deviations, (μA – μB)/(σA + σB)) was calculated for each of these remaining 12,174 genes when comparing between each genotype group within each cell type. These gene values were ranked from highest to lowest, indicating which genes were most upregulated and downregulated with the least variation.
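The median-TPM filtering and signal-to-noise ranking described above can be sketched as follows. This is a simplified illustration of the metric (μA – μB)/(σA + σB), not the authors' analysis code; the function name and the toy expression matrices are hypothetical, and the filter is reduced to a single two-group comparison.

```python
from statistics import mean, median, stdev

def signal_to_noise_rank(group_a, group_b, min_median=0.5):
    """Rank genes by the signal-to-noise metric (muA - muB)/(sdA + sdB).
    group_a, group_b: per-gene lists of expression values (e.g. TPM) for the
    two genotype groups. Genes whose median falls below the threshold in
    both groups are discarded, mirroring the filtering step above."""
    ranked = []
    for i, (a, b) in enumerate(zip(group_a, group_b)):
        if median(a) < min_median and median(b) < min_median:
            continue  # discarded: low expression in both groups
        s2n = (mean(a) - mean(b)) / (stdev(a) + stdev(b))
        ranked.append((i, s2n))
    # Highest first: most upregulated in group A with the least variation.
    ranked.sort(key=lambda t: t[1], reverse=True)
    return ranked

# Toy data: gene 0 is up in group A, gene 1 is down, gene 2 is filtered out.
a = [[10, 12, 11], [1, 2, 1], [0.1, 0.2, 0.1]]
b = [[2, 3, 2], [8, 9, 10], [0.1, 0.1, 0.2]]
ranking = signal_to_noise_rank(a, b)  # gene 0 first, gene 1 last
```

The resulting ranked list is what GSEA consumes as a preranked input.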
These sorted lists of genes were used as inputs for GSEA, using 1,000 permutations of the gene sets. A false discovery rate P value of less than 0.05 was accepted as significant. RNA-seq of MDMs RNA quality was assessed using a Bioanalyzer (Agilent) and quantified via Qubit fluorometric quantification (Thermo Fisher). RNA-seq libraries were generated using an average of roughly 150 ng total RNA as input for the TruSeq stranded mRNA library prep kit (Illumina) according to the manufacturer's protocol, and samples were indexed using TruSeq RNA single indexes (Illumina). Library preps were analysed using a Bioanalyzer (Agilent) and quantified via Qubit fluorometric quantification (Thermo Fisher). Quantified libraries were normalized, pooled and sequenced on a NextSeq 500 sequencer (Illumina) using the single-end 75-nucleotide setting. Raw sequencing reads were demultiplexed, and FASTQ files were aligned to the human genome (hg38 assembly) via Tophat v2.1.1 and Bowtie v2.3.2. BAM files were indexed with Samtools v1.6 and annotated using Partek software v7.17.0918 to generate RPKM values for each gene. RPKM values were log 2 -transformed after adding a pseudocount of 1. Genes with a median log 2 -transformed RPKM value of less than 0.5 were discarded. Signal-to-noise ratios of the remaining genes were calculated for each group comparison, and the sorted list of these genes was used as the input for GSEA using 1,000 permutations of the gene sets. A false discovery rate P value of less than 0.05 was accepted as significant. RNA-seq analysis of human whole-blood data RNA-seq libraries were generated using total RNA as input for the TruSeq mRNA library prep kit (Illumina) according to the manufacturer's protocol, and samples were indexed using TruSeq RNA single indexes (Illumina). Samples were pooled and sequenced on a NovaSeq 6000 sequencer (Illumina) using the paired-end 151-nucleotide setting.
Owing to read 2 sequencing errors in a subset of the samples, all samples were aligned as single-ended datasets using read 1 only. RNA-seq reads were aligned to hg19 using STAR (v.2.5.3). Expression was quantified using RSEM (v1.3.0) with the following flags: –fragment-length-max 1000, –no-bam-output, –estimate-rspd. Given the heterogeneity in expression signatures from whole-blood expression data, expression data were first normalized using PEER v1.3 (probabilistic estimation of expression residuals, PubMed identifier (PMID) 22343431), setting 35 hidden determinants ( K = 35) and allowing up to 1,000 iterations. In addition, the following known covariates were included: sex, collection subsite, age at onset, years to diagnosis, ALS functional rating scale, and sample ancestry estimated by PCA, with categorical variables binarized before inclusion. After filtering against non-expressed genes (TPM values of less than 1 for all individuals), differential expression analysis was performed using a trans-quantitative trait locus (QTL) approach using the R/qtl package (version 1.44-9) on PEER residuals, using the C9orf72 locus repeat expansion genotype against the entire transcriptome (see step 13 in PMID 22343431 for additional details). GSEA analysis was performed against the Gene Ontology (GO) gene sets category in MSigDB, and enrichments were visualized using the Enrichment Map plug-in (v3.2.0, PMID 21085593) in Cytoscape (v.3.7.1). RNA-seq analysis of human cerebellum For the human GSEA, the raw count table was obtained from ref. 30 (GEO accession GSE67196) for control, sporadic and C9orf72 repeat carrier frontal cortex and cerebellum samples. RPKM values were derived for these samples using the edgeR package in R Bioconductor. Total raw counts in each sample were used as the library size, and the transcript lengths for each Human Genome Organisation (HUGO) Gene Nomenclature Committee (HGNC) identifier were obtained from Ensembl BioMart.
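The RPKM derivation described above, with per-sample total raw counts as the library size and BioMart transcript lengths, follows the standard formula RPKM = reads / (library size in millions) / (transcript length in kb). A minimal sketch of that formula is below; it is not the edgeR implementation, and the function name and toy numbers are illustrative.

```python
# Hedged sketch of the standard RPKM formula (reads per kilobase of
# transcript per million mapped reads), as used in the derivation above.

def rpkm(counts, gene_lengths_bp, library_size=None):
    """RPKM for one sample.
    counts: raw read counts per gene.
    gene_lengths_bp: transcript length of each gene in base pairs.
    library_size: total mapped reads; defaults to sum(counts), matching the
    use of total raw counts as the library size described above."""
    if library_size is None:
        library_size = sum(counts)
    return [c / (library_size / 1e6) / (length / 1e3)
            for c, length in zip(counts, gene_lengths_bp)]

# Toy sample: 3 genes, library of 1,000,000 mapped reads.
vals = rpkm(counts=[500, 100, 0], gene_lengths_bp=[1000, 2000, 500],
            library_size=1_000_000)
print(vals)  # [500.0, 50.0, 0.0]
```

Dividing by both library size and transcript length makes values comparable across samples and across genes of different lengths, which is what the downstream median filtering relies on.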
Median RPKM values were calculated for each gene, and genes with a median value below 1 were discarded, leaving 11,912 genes for use in GSEA. The signal-to-noise metric (difference in mean expression divided by the sum of standard deviations, (μA – μB)/(σA + σB)) was calculated for each gene when comparing frontal cortices or cerebella between sporadic or C9-ALS cases with control groups. These gene values were ranked from highest to lowest, indicating which genes are most upregulated and downregulated with the least variation. GSEA was performed on these ranked gene lists with 1,000 permutations of the gene sets, and a false discovery rate P value of less than 0.05 was accepted as significant. GSEA RPKM or fragments per kilobase of transcript per million mapped reads (FPKM) values were log 2 -transformed after adding a pseudocount of 1. Genes with a median log 2 -transformed RPKM/FPKM value of less than 0.5 were discarded. Signal-to-noise ratios of the remaining genes were calculated for each group comparison, and the sorted list of these genes was used as the input for GSEA. Induction and assessment of EAE Disease induction was as described by Hooke Laboratories, and lots 1004 and 1006 were used here. In brief, female mice aged 10–13 weeks were immunized subcutaneously with myelin oligodendrocyte glycoprotein (MOG) 35–55 peptide mixed with complete Freund's adjuvant (CFA) on day 0. Pertussis toxin was administered at 120 ng per dose intraperitoneally on days 0 and 1.
Mice were examined daily starting at day 7 and scored for disease severity on the following scale: 0, no clinical score; 0.5, tip of the tail paralysis; 1, total tail paralysis; 1.5, limp tail and hind leg inhibition; 2, limp tail and weakness of hind legs; 2.5, limp tail and dragging of hind legs; 3, limp tail and complete paralysis of hind legs; 3.5, limp tail, paralysis of hind legs and unable to right themselves; 4, limp tail, complete hind leg paralysis and partial front leg paralysis; 4.5, no movement around cage; 5, euthanasia. After onset of disease, mice were singly housed and food and water were provided on the cage floor. For timecourse experiments, mice were killed at onset (day 11) and peak (day 15) and spleens and brain were collected. Myelin was removed from the brain (Miltenyi Biotec catalogue number 130-096-733). Single-cell suspensions were stained intracellularly for IFNγ and IL-17. Cells and tumour models The mouse melanoma cell line B16F10 was obtained from ATCC (CRL-6475) and was maintained in DMEM supplemented with 10% FBS, penicillin (100 U ml −1 ) and streptomycin (100 μg ml −1 ). For lung metastatic models, 2 × 10 5 B16 cells were injected via the tail vein, and lung tumour colonies were enumerated visually, aided by a magnifying glass and a lamp, or quantified by extracted melanin, determined by spectrophotometry. Tumour measurements/volumes did not exceed what is permitted by Cedars-Sinai IACUC. To extract melanin, the entire left lobe was placed into a homogenizer and was digested in 300 μl of PBS. The sample was then centrifuged for 10 min at maximum speed and the supernatant was removed. The samples were then processed in 1 ml of lysis buffer (containing 1 M Tris-HCl, 10% SDS, 0.5 M EDTA, 10 μg ml −1 proteinase K and water) in a shaker at 56 °C until completely dissolved. This process can take up to 48 h and was aided by the addition of extra proteinase K and processing through an 18-gauge needle.
Once the melanin was dissolved, samples were centrifuged for 10 min at maximum speed and the supernatant was discarded so that a black pellet remained at the bottom. Next, 200 μl of 2 N NaOH was added and the sample was placed in a shaker at 95 °C overnight or until completely dissolved. Once the melanin was dissolved, 100 μl of chloroform:methanol (2:1) was added and the sample was mixed well before being centrifuged for 10 min at maximum speed. Then 100 μl of the top layer was read on a SpectraMax spectrophotometer (optical density 405–570). Resulting values were analysed directly. For T-cell-activation studies, mice were killed 14 days after intravenous injection, and lungs and spleens were collected. The lung was selected and the tissue was manually digested in a lysis buffer containing Hanks' balanced salt solution (HBSS), collagenase and DNaseI with two 10-min incubations at 37 °C. Samples were centrifuged for 5 min at 3,000 rpm, and RBCs were lysed using 1× RBC lysis buffer (eBioscience). The cells were then washed and stained with an Fc blocker to prevent any nonspecific binding. Single cells from digested lung were stained with anti-CD3, anti-CD4, anti-CD8 and anti-CD44 antibodies, and were washed and fixed. Spleens were collected as described in ‘Isolation of splenic immune cells', stained with anti-CD3, anti-CD4, anti-CD8 and anti-CD44 antibodies, washed, fixed and analysed via flow cytometry with an LSRFortessa. Age-matched T cells from naive mice were used to compare the activation state of T cells with and without B16 inoculation. Statistical analyses Statistical tests used are indicated in figure legends. All data are shown as means ± s.e.m. or ± s.d., and all analyses were conducted using Prism software (GraphPad).
Human research participants All human research (collection of blood from individuals with ALS or from normal control individuals) was performed with informed consent, according to protocols approved by the institutional review boards of Cedars-Sinai Medical Center (IRB Pro00039304) or Columbia University Medical Center (IRB-AAAQ7026). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The RNA sequencing data reported here have been deposited in Gene Expression Omnibus (GEO) under accession number GSE151936. Data from GSE151936 were used for Figs. 1h, i , 2k , 4a, b, i and Extended Data Figs. 4a, b , 5a, b , 6a–g , 7c and 8a–f . Human whole-blood RNA sequencing data reported here can be accessed from the database of Genotypes and Phenotypes (dbGaP) using accession number phs002055.v1.p1. Data from dbGaP with accession number phs002055.v1.p1 were used for Fig. 4d, e . Human cerebellum RNA-seq data can be found in ref. 30 . Source data are provided with this paper.
A study published today in the journal Nature could help explain why certain people who develop amyotrophic lateral sclerosis (ALS), a deadly neurological disorder also known as Lou Gehrig's disease, are prone to autoimmune diseases. ALS, which has no known cure, causes progressive degeneration of nerve cells in the spinal cord and brain. About 5,000 people are diagnosed with the disease each year. The new research, led by Cedars-Sinai investigators, focuses on a mutation that decreases expression of a gene called C9orf72, the most common known cause of inherited ALS. Investigators found that this mutation, found in about 10% of patients with ALS, causes the stimulator of interferon genes (STING) protein, a critical sensor of viral infections in the immune system, to become hyperactive. This hyperactivity led to increased production of interferons. Interferons are key for fighting viral infections, but constant, uncontrolled production of interferons can lead to systemic inflammation and development of autoimmune diseases. "These findings support that patients with C9orf72 mutations have a fundamentally different set point of their immune system, with increased propensity to autoimmune diseases, and probably altered responses to viruses and other pathogens in the environment," said Robert Baloh, MD, Ph.D., professor of Neurology and director of the Cedars-Sinai Center for Neural Science and Medicine. Baloh is the corresponding author of the study, available online today and in the Sept. 3 print edition of the journal. The C9orf72 mutation, which is believed to have originated in Northern Europe about 1,500 years ago and then spread as a result of Viking travels and wars, is also associated with frontotemporal lobar degeneration, a type of dementia that can accompany ALS. For the study, investigators examined the brain tissue of laboratory mice with the C9orf72 mutation and also blood and brain tissue from ALS patients who carried the gene.
Results included: Immune cells isolated from the laboratory mice showed early spontaneous activation. In addition, a certain type of immune cell, called myeloid cells, had increased production of interferons in response to activation of the STING protein. Tissues from ALS patients with the genetic mutation and frontotemporal lobar degeneration showed an elevated immune response compared with samples from patients with a different type of ALS. Taken together, those findings suggested that ALS patients with the genetic mutation and frontotemporal lobar degeneration have an altered immune system because their reduced levels of C9orf72 cannot suppress the inflammation caused by the hyperactive STING protein, the investigators said. "These results give us critical insights into the interplay of the immune system and neurodegenerative disease," said Nancy Sicotte, MD, chair of the Department of Neurology at Cedars-Sinai and the Women's Guild Distinguished Chair in Neurology. "They have implications not only for ALS and frontotemporal lobar degeneration, but for other autoimmune and degenerative disorders affecting the nervous system." Interestingly, the investigators also found that mice with the C9orf72 mutation were more resistant to certain cancers, possibly as a byproduct of a hyperactive immune system. A similar decreased incidence of cancers has also been reported in ALS patients, but the reason remains a mystery. "This study ties together research about a really important immune pathway and the genetics of ALS, linking neurodegeneration, autoimmune diseases and cancer," Baloh said. Baloh's Neurodegenerative Diseases Laboratory is now studying how this gene mutation and heightened autoimmune response is linked to neurodegeneration. He said that understanding these connections may help investigators lay the groundwork for developing therapies for ALS.
10.1038/s41586-020-2625-x
Biology
Larvicidal flavonoids inhibit key enzyme in yellow fever mosquitoes
Kazue Inaba et al, Molecular action of larvicidal flavonoids on ecdysteroidogenic glutathione S-transferase Noppera-bo in Aedes aegypti, BMC Biology (2022). DOI: 10.1186/s12915-022-01233-2 Journal information: BMC Biology
http://dx.doi.org/10.1186/s12915-022-01233-2
https://phys.org/news/2022-02-larvicidal-flavonoids-inhibit-key-enzyme.html
Abstract Background Mosquito control is a crucial global issue for protecting the human community from mosquito-borne diseases. There is an urgent need for the development of selective and safe reagents for mosquito control. Flavonoids, a group of chemical substances with variable phenolic structures, such as daidzein, have been suggested as potential mosquito larvicides with less risk to the environment. However, the mode of mosquito larvicidal action of flavonoids has not been elucidated. Results Here, we report that several flavonoids, including daidzein, inhibit the activity of glutathione S -transferase Noppera-bo (Nobo), an enzyme used for the biosynthesis of the insect steroid hormone ecdysone, in the yellow fever mosquito Aedes aegypti . The crystal structure of the Nobo protein of Ae. aegypti (AeNobo) complexed with the flavonoids and its molecular dynamics simulation revealed that Glu113 forms a hydrogen bond with the flavonoid inhibitors. Consistent with this observation, substitution of Glu113 with Ala drastically reduced the inhibitory activity of the flavonoids against AeNobo. Among the identified flavonoid-type inhibitors, desmethylglycitein (4′,6,7-trihydroxyisoflavone) exhibited the highest inhibitory activity in vitro. Moreover, the inhibitory activities of the flavonoids correlated with the larvicidal activity, as desmethylglycitein suppressed Ae. aegypti larval development more efficiently than daidzein. Conclusion Our study demonstrates the mode of action of flavonoids on the Ae. aegypti Nobo protein at the atomic, enzymatic, and organismal levels. Background Mosquitos act as vectors of many infectious diseases caused by a huge number of pathogens and parasites, epitomized by the spread of malaria [ 1 ]. Despite decades of intensive research, the effective and sustainable management of mosquito vector populations remains a difficult challenge [ 2 , 3 ]. 
Among vector mosquito species, the mosquitos of the genus Aedes , which includes the yellow fever mosquito Ae. aegypti , are competent vectors of several human infectious viruses, such as the dengue virus, yellow fever virus, and Zika virus. As the Aedes mosquitos are widely distributed, they are recognized as an important factor in the global burden of infectious diseases. Many insecticides have been developed and applied for the control of Aedes vectors [ 2 , 4 ]. However, the emergence of resistance in wild Ae. aegypti populations has reduced the efficiency of insecticides [ 4 , 5 , 6 , 7 , 8 ]. For example, although pyrethroids and organophosphates are the most widely used and effective insecticides against Ae. aegypti , resistance to these insecticides has been reported [ 4 ]. In the case of pyrethroid resistance, mutations in a voltage-gated sodium channel gene have been shown to induce a type of resistance known as knockdown resistance, which has been globally observed in Ae. aegypti populations [ 4 ]. Therefore, a new insecticide, whose chemical structure and target differ from those of the currently used insecticides, is highly desirable. Flavonoids, which are secondary metabolites from plants and other organisms that can affect many aspects of insect development and physiology [ 9 ], can exert larvicidal activity against Ae. aegypti [ 10 ]. For example, flavonoid extracts or purified flavonoids from several plants exhibit larvicidal activity against Ae. aegypti and other vector mosquitos [ 11 , 12 , 13 , 14 , 15 , 16 ]. Culture broths of a species of the actinomycete Streptomyces show toxicity to Ae. aegypti larvae, and this was revealed to be due to several flavonoids, including genistein and daidzein [ 17 ]. It is generally expected that flavonoids are relatively safe, showing less risk to the environment with minimal impacts on animal and human health, and are thought to be beneficial to human health [ 18 , 19 , 20 ]. 
Therefore, flavonoids could be preferable lead compounds for developing an environment-friendly insecticide to control Ae. aegypti [ 21 , 22 ] . However, the underlying mechanism of action of these flavonoids at the molecular, cellular, and organismal levels remains largely unknown. Although some flavonoids are known to inhibit acetylcholine esterase activity, no correlation is found between larvicidal activity and acetylcholine esterase inhibition [ 14 ]. It is important to understand the modes of action of the flavonoids on larvicidal activity against Ae. aegypti for the development of safe and biorational flavonoidal insecticides for future resistance management. Here, we report that some flavonoids act as the inhibitors of the Ae. aegypti Noppera-bo (AeNobo) protein, which belongs to the glutathione S -transferase (GST) epsilon subfamily. Nobo plays a specialized role in the biosynthesis of the principal insect steroid hormones, ecdysteroids [ 23 , 24 , 25 ]. Similar to other ecdysteroidogenic enzymes [ 26 ], genetic studies have demonstrated that Nobo is required for molting and metamorphosis (i.e., ecdysteroid-dependent developmental processes) in the fruit fly Drosophila melanogaster and the silkworm Bombyx mori [ 23 , 24 , 25 ]. As ecdysteroids are particularly important for the life cycle of insects, it is expected that a chemical inhibitor of ecdysteroidogenic enzymes, including Nobo, would be an insect growth regulator (IGR) that impacts insect development without affecting organisms other than insects [ 27 , 28 ]. Our group has developed a high-throughput screening system to identify chemical compounds that inhibit in vitro enzymatic activity of recombinant purified Nobo proteins [ 29 ]. Using this system, we previously isolated multiple inhibitors of D. 
melanogaster Nobo (DmNobo) and succeeded in an integrated structure biological analysis to partly reveal the mode of action of these inhibitors, including the vertebrate female sex hormone 17β-estradiol [ 30 , 31 ]. In this study, we expanded our strategy to the AeNobo recombinant protein to identify potential AeNobo inhibitors and demonstrate their mode of action. Results Identification of several flavonoids as AeNobo inhibitors According to a Basic Local Alignment Search Tool database search, an Ae. aegypti gene closest to the D. melanogaster nobo gene is LOC5569853 , annotated by the Ae. aegypti genome database (Locus tag: AaeL_AAEL007955). The GenBank data Accession number XM_001658698.3 predicts that LOC5569853 encodes a protein having 271 amino acid (aa) residues, while another GenBank database EAT40301.1 predicts that LOC5569853 encodes a 220-aa protein. We realized that the 271-aa protein has a long extension at the N-terminus as compared to the 220-aa protein and DmNobo (Additional file 1 : Figure S1 A ), and the 220-aa protein (instead of the 271-aa protein) is substantially similar to DmNobo in terms of aa length. Subsequently, we examined whether a short LOC5569853 cDNA encoding the 220-aa protein can compensate for DmNobo loss-of-function mutation during development. The forced expression of DmNobo driven by phantom-GAL4 fly strain rescues developmental lethality of DmNobo knock-out homozygous mutant animals [ 23 ]. We found that phantom-GAL4 -driven expression of the short cDNA allowed DmNobo knock-out homozygous mutant animals to complete their development and grow to the adult stage (Additional file 2 : Table S1). This result confirmed that LOC5569853 is an Ae. aegypti ortholog of nobo , and the 220-aa protein is functionally equivalent to DmNobo. Hereafter, we designate LOC5569853 as AeNobo. 
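The relationship between the two predicted LOC5569853 products described above (a 271-aa model carrying an N-terminal extension versus a 220-aa model matching DmNobo in length) can be illustrated with a toy suffix check. The sequences below are short invented stand-ins, not the real GenBank entries.

```python
# Toy sketch: if the short gene model is simply the long model minus an
# N-terminal extension, the short sequence must be a C-terminal suffix of
# the long one. Sequences here are hypothetical placeholders.
def n_terminal_extension(long_seq: str, short_seq: str):
    """Return the extra N-terminal residues if short_seq is a C-terminal
    suffix of long_seq, otherwise None."""
    if long_seq.endswith(short_seq):
        return long_seq[: len(long_seq) - len(short_seq)]
    return None

long_model = "MKRSTAAGSPMSKLVLYG"   # stand-in for the 271-aa prediction
short_model = "MSKLVLYG"            # stand-in for the 220-aa prediction
print(n_terminal_extension(long_model, short_model))  # -> MKRSTAAGSP
```

With the real sequences, this kind of comparison would expose the extension that distinguishes the XM_001658698.3 and EAT40301.1 predictions.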
Next, we examined whether AeNobo enzymatic activity was inhibited by 17β-estradiol, which we previously identified as a DmNobo inhibitor (Additional file 1 : Figure S2 A ). We prepared a purified AeNobo recombinant protein (220-aa length) using an Escherichia coli protein expression system. The GST enzymatic activity was examined using the fluorogenic artificial substrate 3,4-dinitrobenzamidedichlorofluorescein (3,4-DNADCF) [ 29 ]. In this assay system, GSTs catalyze GSH conjugation to the weakly fluorescent molecule, 3,4-DNADCF, giving rise to a highly fluorescent product, 4-GS-3-NADCF. Using this system, we found that the specific enzymatic activity of AeNobo to conjugate glutathione (GSH) and 3,4-DNADCF was 2.04 ± 0.04 μmol·min −1 ·mg −1 in vitro, suggesting that AeNobo exhibits GST enzymatic activity. However, we discovered that the half-maximal inhibitory concentration (IC 50 ) of 17β-estradiol against AeNobo was 21.3 μM, which was approximately 10-fold higher than that against DmNobo (1.2–2.3 μM) (Additional file 1 : Figure S2 B ) [ 29 , 30 ]. These data motivated us to identify other chemical compounds that inhibit AeNobo enzymatic activity, having IC 50 values equivalent to or lower than that of 17β-estradiol against DmNobo. To identify AeNobo inhibitors, we performed high-throughput screening for the inhibitors of GSH conjugation activity of AeNobo using 3,4-DNADCF. Among 9600 chemical compounds obtained from the Drug Discovery Initiative of the University of Tokyo [ 29 ], we identified 2′-hydroxyflavanone with an IC 50 value of 4.76 μM (Fig. 1 A). Based on this result, we focused on flavonoids because flavonoids are well known to affect several aspects of the insect life cycle [ 9 ]. To further determine which kinds of flavonoid compounds inhibit AeNobo enzymatic activity, we examined 13 additional flavonoids (excluding 2′-hydroxyflavanone), chosen because they were readily available at relatively low cost in Japan, in the in vitro enzymatic activity assay. 
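As a rough illustration of how an IC 50 like the 4.76 μM value above is read off a dose-response series, here is a minimal sketch. It assumes a simple one-site inhibition model and log-linear interpolation between bracketing points; the paper's exact curve-fitting procedure is not specified here, so this is illustrative only.

```python
import math

def relative_activity(conc_um: float, ic50_um: float) -> float:
    """Fraction of uninhibited activity under an assumed one-site model."""
    return 1.0 / (1.0 + conc_um / ic50_um)

def estimate_ic50(concs, activities):
    """Log-linear interpolation of the concentration giving 50% activity."""
    for i in range(len(concs) - 1):
        a1, a2 = activities[i], activities[i + 1]
        if a1 >= 0.5 >= a2:  # bracketing pair around half-maximal activity
            t = (a1 - 0.5) / (a1 - a2)
            log_c = math.log10(concs[i]) + t * (math.log10(concs[i + 1]) - math.log10(concs[i]))
            return 10 ** log_c
    return None  # curve never crosses 50% in the tested range

# Synthetic points generated from an assumed true IC50 of 4.76 uM
# (the value reported for 2'-hydroxyflavanone).
concs = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
acts = [relative_activity(c, 4.76) for c in concs]
print(round(estimate_ic50(concs, acts), 2))  # close to the assumed 4.76
```

A real analysis would fit all replicate points by nonlinear least squares rather than interpolating two of them.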
The 13 selected flavonoids included flavanones, flavones, isoflavones, flavonols, isoflavans, and anthocyanidins (Table 1 , Additional file 1 : Figure S3). We found that the IC 50 values of 9 compounds, namely luteolin, biochanin A, daidzein, fisetin, kaempferol, myricetin, quercetin, cyanidin chloride, and petunidin, were lower than 10 μM (Fig. 1 B, Table 1 ). These results suggest that some, but not all, flavonoid chemicals can be classified as AeNobo inhibitors. Fig. 1 Identification and characterization of daidzein and luteolin as flavonoids that inhibit the AeNobo enzymatic activity and interact with H-sites of AeNobo. A Schematic of the library screen to identify chemical compounds that inhibit AeNobo in vitro with IC 50 values of less than 10 μM. One of the identified compounds was 2′-hydroxyflavanone. B Schematic of a screen to identify flavonoid compounds that inhibit AeNobo in vitro with IC 50 values of less than 10 μM. The IC 50 values of the nine tested compounds, including daidzein and luteolin, were less than 10 μM. C , D Chemical structures of daidzein ( C ) and luteolin ( D ). E , F Inhibition of the GSH conjugation activities of AeNobo with an artificial fluorescent substrate, 3,4-DNADCF, in the presence of daidzein ( E ) and luteolin ( F ). Each relative activity is defined as the ratio of activity compared between the respective proteins without the flavonoids. All the data points in duplicate assays are indicated. G , H Amino acid residues interacting with daidzein ( G ) and luteolin ( H ). Carbon atoms of daidzein and luteolin are colored orange and light violet, respectively. Oxygen, nitrogen, and sulfur atoms are colored red, blue, and yellow, respectively. A water molecule interacting with each ligand is represented with a yellow sphere. Amino acid residues located within a 4.0-Å radius of the nearest atom of the flavonoids are shown. 
Additionally, amino acid residues that form hydrogen bonds within a 3.3-Å radius of the nearest atom of the flavonoids are also shown. Hydrogen bonds are illustrated by dashed yellow lines. The two views are related by a 180° rotation around the bold black line axis Full size image Table 1 Inhibitory activity of 2′-hydroxyflavanone and the other 13 flavonoids against AeNobo. Fifteen flavonoids, including the subclasses of flavanone, flavone, isoflavone, flavonol, isoflavan, and anthocyanidin, are illustrated in Additional file 1 : Figure S3. “No inhibition” means that the IC 50 value of a compound is greater than 25 μM. Among the examined chemicals, biochanin A is the only estrogenic chemical that inhibits the in vitro enzymatic activity of AeNobo. s.d. standard deviation, - not determined Full size table Non-flavonoidal estrogenic compounds do not inhibit AeNobo Many flavonoids, including daidzein and kaempferol [ 32 ], are known to act as agonists of the estrogen receptor [ 33 , 34 , 35 ]. Therefore, we wondered whether non-flavonoidal estrogenic compounds can also inhibit AeNobo enzymatic activity in vitro. Consequently, we examined 18 non-flavonoidal estrogenic compounds, including well-known endocrine disruptors such as bisphenol A and diethylstilbestrol, for the in vitro enzymatic assay using 3,4-DNADCF. Our results revealed that all the non-flavonoidal estrogenic compounds examined in this study failed to inhibit AeNobo in vitro (IC 50 > 25 μM) (Additional file 2 : Table S2), suggesting that estrogenic activity is not a prerequisite for a compound to be classified as an AeNobo inhibitor. Binding mode of flavonoids to AeNobo To reveal the molecular mechanism through which these flavonoids inhibit AeNobo enzymatic activity, we conducted an X-ray crystallographic analysis of AeNobo. 
Gel filtration chromatography revealed that AeNobo forms a homodimer in solution (Additional file 1 : Figure S4 A ), suggesting that its dimeric structure is a biological unit similar to DmNobo and other canonical GSTs [ 30 , 31 , 36 ]. Subsequently, we determined the crystal structure of the AeNobo protein in the presence of GSH at 1.51 Å resolution by the molecular replacement method using the structure of DmNobo (PDB ID: 6KEM) [ 37 ] as an initial model (Additional file 1 : Figure S4 B ). We found that the asymmetric unit of AeNobo is composed of four chains: A, B, C, and D. Each chain forms a biological dimer with a symmetry-related subunit by a crystallographic twofold axis (Additional file 1 : Figure S4 C and S4 D ). AeNobo adopts a canonical GST fold (Additional file 1 : Figure S4 D ), which has a well-conserved GSH-binding site (G-site) and a hydrophobic substrate-binding pocket (H-site) adjacent to the G-site (Additional file 1 : Figure S5). Next, crystal structures of AeNobo-GSH complexed with daidzein (AeNobo-GSH-daidzein) and luteolin (AeNobo-GSH-luteolin) were determined at 1.95 Å and 1.50 Å resolution, respectively (Additional file 2 : Table S3). Daidzein and luteolin inhibited AeNobo enzymatic activity with IC 50 values of 3.87 μM and 3.99 μM, respectively (Fig. 1 C, D); these compounds are known to exhibit larvicidal activity toward Ae. aegypti [ 17 , 38 ]. The subunit structures of the AeNobo complexes with inhibitors were essentially the same as those of the substrate-free form (Additional file 2 : Table S4). Electron densities of the inhibitors were observed in all four chains (Additional file 1 : Figure S5). Interactions between AeNobo and GSH in the G-site were essentially the same in the presence or absence of daidzein (Additional file 1 : Figure S6 A ) or luteolin (Additional file 1 : Figure S6 B ), suggesting that these flavonoids do not interfere with the interaction between AeNobo and GSH. 
Although daidzein and luteolin both bind to the AeNobo H-site, the binding orientations of daidzein and luteolin in the H-site are opposite to each other. The A-ring of daidzein is nested deep inside the H-site, but the A-ring of luteolin is located at the entrance of the H-site (Fig. 1 E, F). Nevertheless, AeNobo uses the same aa residues to interact with daidzein and luteolin directly or indirectly via water molecules. For example, these inhibitors interact with Leu-38, His-43, and Glu-113. Additionally, they exhibit hydrophobic interactions with Ile-11, Pro-13, Leu-38, Met-117, Ile-121, and Leu-210 (Fig. 1 E, F). The interaction between Glu-113 of AeNobo and flavonoid inhibitors is essential for inhibition Among the aa residues that interact with daidzein and luteolin, we focused on Glu-113. The Oε of Glu-113 forms a hydrogen bond with the hydroxyl group at C7 of daidzein and two hydroxyl groups at C3′ and C4′ of luteolin (Figs. 1 G, H, and 2 A, B). The hydrogen bonds were observed in all 4 chains (Additional file 1 : Figure S7 A , B ). Glu-113 of AeNobo corresponds to Asp-113 of DmNobo [ 30 ], as supported by the superposition of the two structures (Additional file 1 : Figure S1 B ). The interaction between Asp-113 of DmNobo and a hydroxyl group of 17β-estradiol is essential for the inhibition, as 17β-estradiol does not inhibit the enzymatic activity of the point mutated DmNobo protein in which Asp-113 is substituted with Ala [ 30 ]. As Glu has a carboxyl group in the side chain similar to Asp, we speculated that Glu-113 of AeNobo also has a significant impact on the inhibitory activities of daidzein and luteolin. Fig. 2 Glu-113 is essential for the binding of AeNobo to daidzein and luteolin. A , B The hydrogen bonds between Glu-113 and the hydroxyl residues of C7 of the A-ring of daidzein (DAI; A ) and of C3′ and C4′ of the B-ring of luteolin ( B ) are highlighted from Fig. 1 G and H , respectively. 
Carbon atoms of daidzein and luteolin are colored orange and light violet, respectively. Oxygen, nitrogen, and sulfur atoms are colored red, blue, and yellow, respectively. C , D Inhibition of the GSH conjugation activities of wild-type AeNobo (WT, blue dots and black solid curves) and the mutated AeNobo substituting Glu-113 with Ala (E113A, red dots) using 3,4-DNADCF in the presence of DAI ( C ) and luteolin ( D ). Each relative activity is defined as the ratio of activity compared between the respective proteins without the flavonoids. All the data points in duplicate assays are indicated. E , F In silico evaluation of the contribution of Glu-113 to the interaction between AeNobo and DAI. MD simulations of the AeNobo-WT or AeNobo-E113A complex with GSH and DAI in a SPC-water model were conducted at 300 K for 1000 ns. These simulations were performed in triplicate. E MD models at several representative time points of AeNobo-WT and AeNobo-E113A with DAI. The lower models are rotated 90° from the upper models. F RMSD of DAI heavy atoms in the MD simulations. Green: WT; blue: E113A mutation model. G Distance between the carboxylate C atom of Glu-113 of AeNobo-WT or Cβ of Ala-113 of AeNobo-E113A and the O7 atom of DAI at each frame Full size image The importance of the Glu-113–flavonoid hydrogen bond for flavonoid binding was biochemically examined with a recombinant mutated AeNobo protein carrying an E113A aa substitution (AeNobo-E113A). While AeNobo-E113A retains GST activity with specific enzymatic activity for the conjugation of GSH and 3,4-DNADCF of 1.41 ± 0.08 μmol·min −1 ·mg −1 in vitro, the enzymatic activity of AeNobo-E113A was not inhibited by any flavonoid, including daidzein and luteolin at a concentration of 25 μM (Table 1 and Fig. 2 C, D). 
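The figure legends above apply two distance criteria: residues with any atom within 4.0 Å of the ligand are shown as contacts, and hydrogen-bonding residues lie within 3.3 Å. A minimal sketch of that kind of cutoff search follows; the coordinates and residue labels are invented placeholders, not taken from the AeNobo structures.

```python
import math

HBOND_CUTOFF = 3.3    # Angstrom, hydrogen-bond criterion from the legends
CONTACT_CUTOFF = 4.0  # Angstrom, general contact criterion

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contacts(ligand_atoms, residue_atoms, cutoff):
    """Residues with any atom within `cutoff` of any ligand atom."""
    hits = set()
    for res, atom in residue_atoms:
        if any(dist(atom, la) < cutoff for la in ligand_atoms):
            hits.add(res)
    return sorted(hits)

ligand = [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0)]    # two toy ligand atoms
protein = [("Glu113", (0.0, 3.0, 0.0)),        # within H-bond range
           ("Phe39",  (1.4, 0.0, 3.8)),        # contact distance only
           ("Arg41",  (10.0, 0.0, 0.0))]       # too far for either list
print(contacts(ligand, protein, HBOND_CUTOFF))    # ['Glu113']
print(contacts(ligand, protein, CONTACT_CUTOFF))  # ['Glu113', 'Phe39']
```

Applied to the deposited coordinates, the 4.0-Å list would recover the hydrophobic contacts (e.g. Ile-11, Pro-13, Met-117) and the 3.3-Å list the Glu-113 hydrogen-bond partners named above.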
Moreover, molecular dynamics (MD) simulations showed that daidzein easily dissociated from the AeNobo-E113A with an average root mean square deviation (RMSD) of 6.13 Å, while daidzein remained stable in the H-site of wild-type AeNobo (AeNobo-WT) with an average RMSD of 1.13 Å (Fig. 2 E, F, G, Additional file 1 : Figure S8, Additional file 3 : Movie 1, Additional file 4 : Movie 2). These data suggest that the hydrogen bonding between Glu-113 and daidzein is essential for the inhibitory activity of the flavonoids against AeNobo. The hydroxyl groups of flavonoids that form the hydrogen bond with AeNobo are essential for inhibiting AeNobo To further evaluate the significance of the hydrogen bond between Glu-113 of AeNobo and the flavonoids, we took another approach to utilize several chemical derivatives of daidzein and luteolin. Each chemical derivative lacked one or more hydroxyl group(s) on its flavonoidal carbon structure as compared to daidzein and luteolin (Fig. 3 A, B). Fig. 3 Structure-activity relationship of flavonoid derivatives exhibiting inhibitory activities against AeNobo. A , B Chemical structures of daidzein derivatives ( A ) and luteolin derivatives ( B ), which possess isoflavone and flavone nuclei, respectively. Red dashed circles indicate the hydroxyl residues that form hydrogen bonds with Glu-113 of AeNobo. C , D Inhibition of the GSH conjugation activities of AeNobo using 3,4-DNADCF in the presence of daidzein derivatives ( C ) and of luteolin derivatives ( D ). Each relative activity is defined as the ratio of activity compared between the respective proteins without the flavonoids. All the data points in duplicate assays are indicated Full size image We found that 3-(4-hydroxyphenyl)-4 H -chromen-4-one, which lacks a hydroxyl group at C7 compared to daidzein (Fig. 3 A, Table 2 ), did not inhibit the enzymatic activity of AeNobo (Fig. 3 C, Table 2 ). In contrast, 7-hydroxyisoflavone, which lacks a hydroxyl group at C4′ compared to daidzein (Fig. 
3 A, Table 2 ), retained inhibitory activity against AeNobo (Fig. 3 C, Table 2 ). These data suggest that the hydroxyl group at C7 of daidzein is critical for the inhibition. Table 2 Inhibitory activity of daidzein derivatives against AeNobo. Daidzein derivatives used in this study are illustrated in Fig. 3 A. Besides IC 50 values, this table shows the presence of the hydroxyl residues (-OH) in the carbon positions of isoflavone nuclei. For example, luteolin possesses the hydroxyl residues at C3′, C4′, C5, and C7 position of the isoflavone nuclei. “No inhibition” means that an IC 50 value of a compound is larger than 25 μM. s.d. standard deviation Full size table In the case of luteolin derivatives, apigenin, which lacks a hydroxyl group at C3′ (Fig. 3 B, Additional file 2 : Table S5), exhibited a substantial decrease in its inhibitory activity having an IC 50 value >25 μM (Fig. 3 D, Additional file 2 : Table S5). Moreover, chrysin, which lacks two hydroxyl groups at C3′ and C4′ (Fig. 3 B, Additional file 2 : Table S5), did not exhibit any inhibitory activity (Fig. 3 D, Additional file 2 : Table S5). In contrast, 7,3′4′-trihydroxyflavone and 5,3′4′-trihydroxyflavone, both of which are luteolin derivatives but lack hydroxyl groups at C5 and C7, respectively (Fig. 3 B, Additional file 2 : Table S5), retained inhibitory activity against AeNobo to a level comparable to that of luteolin (Fig. 3 D, Additional file 2 : Table S5). These data suggest that the hydroxyl groups at C3′ and C4′ of luteolin are critical for the inhibition. Furthermore, the hydroxyl group at C7 of daidzein and the two hydroxyl groups of C3′ and C4′ of luteolin form hydrogen bonds with Glu-113 of AeNobo. Therefore, these results also support our hypothesis that the interaction between Glu-113 and the flavonoids is crucial for the inhibition. 
Desmethylglycitein is an efficient flavonoidal inhibitor of AeNobo As described above, our crystal structure analysis on luteolin revealed that Glu-113 interacts with two hydroxyl groups. In contrast, daidzein possesses one hydroxyl group that forms a hydrogen bond with Glu-113. Therefore, we hypothesized that daidzein derivatives that possess an additional hydroxyl group on the A-ring would show stronger interactions with Glu-113 and thus exhibit more efficient inhibitory activities against AeNobo than daidzein. To test this hypothesis, we utilized genistein (Fig. 3 A) and desmethylglycitein (DMG) (Fig. 4 A), which have an additional hydroxyl group at C5 and C6, respectively, compared to daidzein. We found that genistein exhibited inhibitory activity against AeNobo with an IC 50 value of 1.86 μM (Fig. 3 C and Table 2 ), which is not very different from that of daidzein (Table 2 ). In contrast, DMG displayed the highest inhibitory activity against AeNobo among the tested flavonoids; the IC 50 value of DMG was 0.287 μM, the lowest among all flavonoid inhibitors that were examined in this study (Fig. 4 B and Table 2 ). Fig. 4 Desmethylglycitein (DMG) inhibits AeNobo. A Chemical structure of DMG, also known as 4′,6,7-trihydroxyisoflavone. B Inhibition of the GSH conjugation activities of wild-type AeNobo (WT, blue dots and black solid curves) and the mutated AeNobo substituting Glu-113 with Ala (E113A, red dots) using 3,4-DNADCF in the presence of DMG. Each relative activity is defined as the ratio of activity compared between the respective proteins without DMG. All of the data points in duplicate assays are indicated. C Amino acid residues interacting with DMG. Carbon atoms of DMG are colored gray. Oxygen, nitrogen, and sulfur atoms are colored red, blue, and yellow, respectively. A water molecule interacting with each ligand is represented with a yellow sphere. Amino acid residues located within a 4.0-Å radius of the nearest atom of the flavonoids are shown. 
Additionally, amino acid residues that form hydrogen bonds within a 3.3-Å radius of the nearest atom of the flavonoids are also shown. Hydrogen bonds are illustrated by dashed yellow lines. The two views are related by a 180° rotation around the bold black line axis. Note that the hydrogen bond interaction between the hydroxyl residue of the B-ring and Arg-41 in chain D is indicated in this figure, while the direct interaction between DMG and Arg-41 is not observed in chains A, B, or C. C′ The hydrogen bonds between Glu-113 and the hydroxyl residues of C6 and C7 of the A-ring of DMG are highlighted. D , E In silico evaluation of the contribution of Glu-113 to the interaction between AeNobo and DMG. MD simulations of the AeNobo-WT or AeNobo-E113A complex with GSH and DMG in a SPC-water model were conducted at 300 K for 1000 ns. These simulations were performed in triplicate. D MD models at several representative time points of AeNobo-WT and AeNobo-E113A with DMG. The lower models are rotated 90° from the upper models. E RMSD of DMG heavy atoms in the MD simulations. Green: WT; blue: E113A mutation model. F Distance between the carboxylate C atom of Glu-113 of AeNobo-WT or Cβ of Ala-113 of AeNobo-E113A and the O6 or O7 atom of DMG at each frame. The nearest distances between O6 and O7 atoms are represented in this graph Full size image Next, we determined the crystal structure of AeNobo-WT complexed with GSH and DMG (AeNobo-GSH-DMG). We found that DMG interacts with the H-site of AeNobo (Fig. 4 C) in a manner very similar to daidzein (Fig. 1 G, H, Additional file 1 : Figure S6 A ), except that Arg-118 indirectly interacts with the ketone group of the C-ring of DMG via a water molecule (Fig. 4 C). Furthermore, as expected, DMG has two hydroxyl groups at C6 and C7 of the A-ring, both of which form hydrogen bonds with Oε of Glu-113 of AeNobo (Fig. 4 C, C′). The hydrogen bonds were observed in all 4 chains (Additional file 1 : Figure S7 C ). 
The hydrogen bond between Glu-113 and DMG is essential to its inhibitory activity; the enzymatic activity of AeNobo-E113A was not inhibited by DMG even at a concentration of 25 μM (Fig. 4 B, Table 2 ). MD simulations also demonstrated that DMG easily dissociated from AeNobo-E113A with an average RMSD of 4.59 Å, while DMG remained stable in the H-site of AeNobo-WT with an average RMSD of 1.32 Å (Fig. 4 D, E, Additional file 1 : Figure S8, Additional file 5 : Movie 3, Additional file 6 : Movie 4). These results suggest that, similar to daidzein, the hydrogen bonding between Glu-113 and DMG is crucial for stable binding of DMG to the H-site. As described above, the inhibitory activity of DMG is stronger than that of luteolin. However, both DMG and luteolin utilize two hydroxyl groups to form the hydrogen bonds with Oε of Glu-113 of AeNobo. In this situation, the stronger inhibitory activity of DMG relative to luteolin cannot be explained by the interaction with Glu-113 alone. To further understand the action of DMG on AeNobo, we analyzed structural differences between AeNobo-GSH-luteolin and AeNobo-GSH-DMG. We realized that the structural difference between the luteolin and DMG complexes lies in the presence/absence of a CH-π interaction with Phe-39. We observed a CH-π interaction between Phe-39 and the B-ring of DMG (Additional file 1 : Figure S9 A ), but not the A-ring of luteolin (Additional file 1 : Figure S9 B ). This difference arose due to the opposite orientation of DMG as compared to luteolin, as the A-ring of DMG and the B-ring of luteolin were near Glu-113 of AeNobo. As the hydroxyl residue at C7 of luteolin is close to Phe-39, the A-ring of luteolin cannot form a CH-π interaction with Phe-39. This observation raises the possibility that the two hydrogen bonds as well as the CH-π interaction contribute to the inhibitory properties of DMG. 
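The ligand RMSD values quoted for the MD runs (DMG: ~1.3 Å in the WT H-site versus ~4.6 Å after E113A) summarize how far the ligand's heavy atoms drift from a reference pose. A minimal sketch of the underlying metric is below, using toy coordinates and omitting the trajectory-alignment step a real analysis would perform first.

```python
import math

def rmsd(ref, frame):
    """Root-mean-square deviation (Angstrom) between matched coordinate lists."""
    assert len(ref) == len(frame), "atom lists must be paired one-to-one"
    sq = sum(sum((a - b) ** 2 for a, b in zip(r, f)) for r, f in zip(ref, frame))
    return math.sqrt(sq / len(ref))

ref     = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]  # starting pose
bound   = [(0.1, 0.0, 0.0), (1.5, 0.1, 0.0), (1.4, 1.5, 0.0)]  # stays in the H-site
escaped = [(5.0, 0.0, 0.0), (6.5, 0.0, 0.0), (6.5, 1.5, 0.0)]  # dissociated ligand
print(round(rmsd(ref, bound), 2), round(rmsd(ref, escaped), 2))
```

Averaging this quantity over all frames of a trajectory gives the per-simulation values reported in the text; a small average indicates the ligand remains near its crystallographic pose, a large one indicates dissociation.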
The importance of the CH-π interaction between DMG and Phe-39 was confirmed using the AeNobo-F39L variant. While the variant retained GST activity with a specific enzymatic activity of 1.85 ± 0.26 μmol·min −1 ·mg −1 , DMG inhibited this variant less potently than AeNobo-WT, showing an IC 50 value of 1.16 μM (Additional file 1 : Figure S9 C ). In contrast, the IC 50 value of luteolin to AeNobo-F39L was 1.58 μM (Additional file 1 : Figure S9 D ), which is comparable to the inhibitory activity against AeNobo-WT. These data suggest that the CH-π interaction with Phe-39 contributes to the inhibitory activity of DMG. Desmethylglycitein suppresses Ae. aegypti larval development Daidzein and genistein exhibit larvicidal activity against Ae. aegypti [ 17 ]. As DMG is a stronger inhibitor of AeNobo than daidzein and genistein, we expected that DMG is a more efficient larvicidal reagent than daidzein and genistein. To test this, we conducted larvicidal assays using Ae. aegypti larvae. Three hours after hatching, we placed Ae. aegypti 1st instar larvae in water containing 1–100 ppm of daidzein or DMG in 0.1% EtOH and then counted the number of living larvae 24 h after the treatment. Under our experimental conditions, the 50% lethal dose (LD 50 ) of DMG was approximately 9.39 ppm, while daidzein, which has been reported as a larvicide for Ae. aegypti [ 17 ], exhibited an LD 50 of 85.8 ppm (Fig. 5 A). These results suggest that the inhibitory activities of the flavonoids correlated with their larvicidal activity. Fig. 5 Larvicidal activity of desmethylglycitein (DMG) on Ae. aegypti. A Survival rates of the 1st instar larvae of Ae. aegypti 24 h after treatments of daidzein (DAI, black dots and lines) and DMG (magenta dots and lines). Control (0.1% ethanol without any flavonoids), and 1, 10, and 100 ppm of flavonoids were used. Each dot represents survival rates of twenty larvae in each independent experiment. 
Fitting curves were generated using the log-logistic equation: Mortality = 1/(1 + exp{ b (log([flavonoid concentration]) − log(LD 50 ))}); b = −0.781911 and LD 50 = 85.83 ppm for DAI; b = −1.081635 and LD 50 = 9.39 ppm for DMG. ** p < 0.01 Student’s t -test. B Representative photos of control and 2.5 ppm DMG-treated Ae. aegypti larvae 24 h after treatments. Scale bar, 1 mm. C The transverse diameter of the Ae. aegypti larval head was measured 24 h after adding 2.5 ppm DMG or control 0.1% DMSO. Raw data are described in Additional file 2 : Table S6. ** p < 0.01 Student’s t -test. Error bars: standard deviations. D RT-qPCR analysis of E74B mRNA level under the conditions described for C . E74B mRNA levels are normalized by rp49 mRNA levels. A mean normalized expression level of E74B is set as 1. * p < 0.05 Student’s t -test. Error bars: standard deviations Full size image We further examined the effect of DMG on Ae. aegypti 1st instar larvae in more detail. In these experiments, we utilized 2.5 ppm DMG as this concentration did not lead to significant lethality (Fig. 5 A), thereby avoiding artifacts arising from lethality. First, we identified the larval instars by measuring the transverse diameter of the Ae. aegypti larval head 24 h after the treatment. A previous study described the head diameter of the 1st instar to be approximately 0.3 mm or less, while that of the mature 2nd instar was approximately 0.45 mm [ 39 ]. Using this indicator, we found that the head diameter of 2.5 ppm DMG-treated larvae was significantly smaller than that of the control larvae (Fig. 5 B, C, Additional file 2 : Table S6). More specifically, the head diameter was less than 0.3 mm in 30 of 57 DMG-treated larvae, but in only four of the 50 control larvae (Additional file 2 : Table S6). These results suggest that DMG treatment leads to retarded development at the 1st instar. 
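The log-logistic curve in the Fig. 5A legend can be evaluated directly from the reported fitted parameters. The sketch below reuses those b and LD 50 values and checks that predicted mortality is exactly 0.5 at the LD 50 , which holds by construction of the model.

```python
import math

def mortality(conc_ppm: float, b: float, ld50_ppm: float) -> float:
    """Two-parameter log-logistic dose-mortality model from the Fig. 5A legend."""
    return 1.0 / (1.0 + math.exp(b * (math.log(conc_ppm) - math.log(ld50_ppm))))

# Fitted parameters (b, LD50 in ppm) reported in the legend.
PARAMS = {"daidzein": (-0.781911, 85.83), "DMG": (-1.081635, 9.39)}

for name, (b, ld50) in PARAMS.items():
    at_ld50 = mortality(ld50, b, ld50)   # 0.5 at the LD50 by construction
    at_10ppm = mortality(10.0, b, ld50)  # predicted mortality at 10 ppm
    print(name, round(at_ld50, 2), round(at_10ppm, 3))
```

Because b is negative for both compounds, predicted mortality rises with concentration; at 10 ppm the model already predicts roughly half-maximal kill for DMG but much less for daidzein, matching the ~9-fold LD 50 difference.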
Furthermore, in these animals, the mRNA level of E74B , an ecdysone-inducible gene [ 40 ], tended to be reduced in DMG-treated larvae compared to control larvae (Fig. 5 D), consistent with the expected effect of DMG on ecdysteroid biosynthesis. We also conducted larvicidal assays using D. melanogaster . When we placed D. melanogaster 1st instar larvae in food containing 10 ppm DMG, the pupariation rate and pupation timing were not affected compared with a control group (Additional file 1 : Figure S10). Furthermore, most larvae became adults. This result suggests that DMG exhibits some degree of target-species selectivity. Taken together, we identified DMG as the most active flavonoidal larvicide described to date that suppresses Ae. aegypti larval development. Discussion In this study, we showed that several flavonoids, including genistein, luteolin, and DMG, inhibit AeNobo enzymatic activity. Additionally, our X-ray crystallographic analysis and MD simulation revealed that Glu-113 and Phe-39 of AeNobo, particularly the former, are crucial for the interaction between AeNobo and flavonoids. Consistent with this observation, the point-mutated AeNobo proteins AeNobo-E113A and AeNobo-F39L were less inhibited by the flavonoids. Moreover, we found that DMG, which exhibited the strongest inhibitory activity against AeNobo in vitro in this study, shows larvicidal activity against Ae. aegypti . DMG is a more efficient larvicidal reagent than daidzein, which was identified as a larvicide for Ae. aegypti in a previous study [ 17 ]. To the best of our knowledge, this is the first study to identify a chemical compound that potentially inhibits a mosquito ecdysteroidogenic enzyme. Additionally, this study is the first to determine the crystal structure of a complex of an insect GST and flavonoids. We found that DMG inhibits AeNobo enzymatic activity and exhibits larvicidal activity against Ae. 
aegypti , which might be due to the suppression of ecdysteroid biosynthesis. To date, several studies in mosquitoes have revealed that the ecdysteroid signaling pathway regulates both mosquito abundance and competence, indicating that insecticides targeting biological events related to ecdysteroids may be an asset for mosquito vector control [ 41 ]. Therefore, DMG can be regarded as a potential IGR targeting enzymes in the ecdysteroid biosynthesis pathway. However, since IGRs should specifically inhibit insect-specific biological processes, it is crucial to ensure that they do not impair essential biological processes in organisms other than the pests of interest. In this regard, a concern is that many flavonoids, including daidzein and luteolin, are well known to exert estrogen-like activities, as they can bind to estrogen receptors (ERs) and increase the proliferation of estrogen-sensitive cells [ 33 , 34 , 35 ]. However, our experiments using non-flavonoidal estrogenic compounds, including bisphenol A and diethylstilbestrol (Table 1, Additional file 2 : Table S2), suggest that the estrogenic activity of a compound is not associated with inhibition of AeNobo enzymatic activity. Moreover, DMG does not exhibit estrogenic activity, as it does not promote ER binding to estrogen response elements in enhancer DNA regions [ 33 ]. While the structure of DMG complexed with ERs has not been reported, the structure of genistein complexed with ERβ, deposited in the Protein Data Bank (PDB ID: 1X7J) [ 42 , 43 ], provides a hint as to why DMG does not exhibit estrogenic activity. In the ERβ-genistein complex, Ile373 of ERβ lies close to the C5 of genistein and occupies the region of the binding pocket beside C5. 
Therefore, there might be steric hindrance between the hydroxyl group at C6 of DMG and Ile373 of ERβ, although we remain cautious in this speculation because the ligand-binding pocket of ERs is known to show enough structural plasticity to adapt to their ligands [ 44 ]. Taken together, these results strongly indicate that there is no critical correlation between the estrogen-like activity of flavonoids and their inhibitory activity against AeNobo. In this study, we showed that 2.5 ppm DMG treatment resulted in larval growth retardation and reduced expression of the ecdysone-inducible gene E74B , suggesting an inhibitory effect of DMG on ecdysteroid biosynthesis in vivo. However, a 24-h treatment with DMG above 10 ppm evoked high lethality, which appears to differ from the phenotype observed in loss-of-function nobo mutants of D. melanogaster and B. mori , in which molting is inhibited but organismal death does not occur immediately [ 23 , 24 , 25 ]. Therefore, further investigation is required to elucidate the specific inhibitory activity of DMG against AeNobo in vivo. Previous studies have identified DMG as an inhibitor of several mammalian enzymes, including phosphatidylinositol-3 kinase (PI3K) [ 45 ], protein kinase C α [ 46 ], cyclin-dependent kinases 1 and 2 [ 47 ], and the prolyl isomerase Pin1 [ 48 ]. In all cases, DMG directly binds to these proteins. In particular, DMG inhibits PI3K activity via direct binding in an ATP-competitive manner [ 45 ]. By inhibiting these enzymes, DMG exhibits several beneficial effects on mammalian health, such as anti-cancer activity [ 47 , 48 ], suppression of adipogenesis [ 45 ], suppression of osteoclasts [ 49 ], antibacterial activity [ 50 , 51 ], protection against cell death [ 52 ], and improvement of learning and memory [ 53 ]. Investigating how DMG binds to enzymes other than AeNobo and understanding the differences in binding modes between AeNobo and these other enzymes at the molecular level would be beneficial. 
The prevalence of resistance to mosquito pesticides that are currently used worldwide, such as the pyrethroid deltamethrin and temephos, is high in several areas [ 4 ]. Although some new strategies have been investigated for controlling the growing mosquito population [ 54 , 55 ], the need for the development of new insecticides targeting different molecules remains urgent. Our discovery of DMG as the most efficient AeNobo inhibitor provides a new strategy for the development of a novel environmentally friendly IGR that controls mosquito populations by inhibiting ecdysteroid biosynthesis. In the future, it would be intriguing to identify DMG derivatives that inhibit AeNobo with greater efficiency and to examine whether such derivatives exhibit higher larvicidal activities against Ae. aegypti than DMG. To further develop such highly active inhibitors, we plan to focus on DMG derivatives that interact with H-site amino acid (aa) residues other than Glu-113 and Phe-39. In the crystal structures of AeNobo, some aa residues at the H-site, such as Ile-11, Met-117, Arg-118, and Ile-121, were found to be mobile in all subunits, indicating that these residues contribute only marginally to the interaction with DMG. Therefore, it would be interesting to design DMG derivatives with hydrophobic functional groups that can interact with these H-site residues. Specifically, a DMG derivative targeting Arg-118 might be a good seed compound for developing a mosquito-specific pesticide, because Arg-118 is conserved among Nobo proteins from various mosquito species, whereas the 118th aa residue of Nobo proteins from other insects is serine or tyrosine [ 23 , 30 ]. Our structural analysis will thus provide a basis for further DMG-based IGR development in the future. 
Conclusion Here, we identified flavonoids, members of a polyphenolic class of secondary plant metabolites, as inhibitors of the ecdysteroidogenic regulator Noppera-bo in the yellow fever mosquito Aedes aegypti . Our biochemical and structural biology analyses revealed the essential mode of interaction between the flavonoids and the Noppera-bo protein that underlies their inhibitory activity. Finally, we confirmed that DMG, the flavonoid most efficient at inhibiting Noppera-bo, shows high larvicidal activity against Ae. aegypti . Methods Transgenic Drosophila melanogaster insects and genetics Drosophila melanogaster flies were reared on standard agar-cornmeal medium at 25 °C under a 12 h:12 h light-dark cycle. The nobo KO strain used has been described previously [ 23 ]. phm-GAL4#22 (Research Resource Identifier (RRID):BDSC_80577) [ 56 , 57 ] was a kind gift from Michael B. O'Connor (University of Minnesota, USA). The GAL4/UAS system [ 58 ] was used to overexpress the AeNobo gene in D. melanogaster . The pUAST-attB plasmid [ 59 ], carrying the AeNobo coding sequence, was built using the custom gene synthesis service of VectorBuilder, Inc. Transformants were generated using the φC31 integrase system in the P{CaryP}attP40 strain (RRID:BDSC_79604) [ 60 ]. The w + transformants of pUAST-attB were established using standard protocols. Viability of nobo KO animals expressing the nobo transgene driven by phm-GAL4#22 was examined as described previously [ 23 ]. 
Flavonoid chemicals The following flavonoids were used in this study: biochanin A (>98%, Tokyo Chemical Industry, B4098), (+)-catechin hydrate (98%, Tokyo Chemical Industry, C0705), chrysin (98%, Alfa Aesar, L14178), cyanidin chloride (98%, Fujifilm Wako Chemicals, 030-21961), daidzein (DAI; ≥98%, Nagara Science, NH010102), desmethylglycitein (DMG; >95%, Tokyo Chemical Industry, T3473), S -equol (≥97%, Merck, SML2147), fisetin (>96%, Tokyo Chemical Industry, T0121), genistein (>98%, Tokyo Chemical Industry, G0272), 2′-hydroxyflavanone (>98%, Tokyo Chemical Industry, H1024), kaempferol (≥98%, Cayman Chemical Company, 11852), luteolin (95%, Fujifilm Wako Chemicals, 127-06241), myricetin (>97%, Tokyo Chemical Industry, M2131), naringenin (>98%, Alomone Labs, N-110), petunidin (≥98%, Cayman Chemical Company, 19755), quercetin (96.5%, Fujifilm Wako Chemicals, 512-58344), tamarixetin (>99%, Extrasynthese, 1140S), 5,3′,4′-trihydroxyflavone (>85%, Toronto Research Chemicals, T896685), and 7,3′,4′-trihydroxyflavone (>85%, Toronto Research Chemicals, T896780). 
Estrogenic compounds The following estrogenic compounds were used in this study: 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (≥98%, Sigma-Aldrich, 22590), 1-benzyl 2-butyl benzene-1,2-dicarboxylate (98%, eNovation, D584181), 16-benzylidene estrone (95%, OTAVA, 7569822), biochanin A (>90%, InterBioScreen, BB_NC-02653), bisphenol A (4,4′-propane-2,2-diyldiphenol) (90%, Vitas-M, STK801675), bis(2,4-dihydroxyphenyl)methanone (>90%, ChemBridge, 5222210), butyl 4-hydroxybenzoate (>90%, ChemDiv(LB), 0099-0145), 1,1-dichloro-2,2-bis(4-chlorophenyl)ethene (90%, Sigma-Aldrich, 123897), diethylstilbestrol (4-[( E )-4-(4-hydroxyphenyl)hex-3-en-3-yl]phenol) (>98%, Tokyo Chemical Industry, D0526), ferutinine (>90%, InterBioScreen, STOCK1N-32042), ( S )-5-(4-hydroxy-3,5-dimethylphenyl)-1-methyl-2,3-dihydro-1H-inden-1-ol (90%, Sigma-Aldrich, SML1876), 4-[(1 S ,5 R )-5-(hydroxymethyl)-8-methyl-3-oxabicyclo[3.3.1]non-7-en-2-yl]phenol (>90%, InterBioScreen, STOCK1N-10587), 4-[(1 R ,5 R )-5-(hydroxymethyl)-6,8,9-trimethyl-3-oxabicyclo[3.3.1]non-7-en-2-yl]phenol (>90%, InterBioScreen, STOCK1N-13438), 4-[(1 R ,5 R )-4,4,8-trimethyl-3-oxabicyclo[3.3.1]non-7-en-2-yl]phenol (>90%, InterBioScreen, STOCK1N-00708), propyl 4-hydroxybenzoate (>90%, ChemDiv(LB), 0099-0143), resveratrol (>90%, InterBioScreen, BB_NC-02570), 4-[2,2,2-trichloro-1-(4-hydroxyphenyl)ethyl]phenol (90%, Vitas-M, STK996193), 4-(2,4,4-trimethylpentan-2-yl)phenol (>90%, Chemspace, PB425466960), α-zearalanol (>90%, InterBioScreen, STOCK1N-99337), and zearalenone (>90%, InterBioScreen, STOCK1N-03962). Plasmid construction for the Escherichia coli protein expression system Aedes aegypti (SMK strain, 4th instar larvae, eight animals) cDNA was obtained by reverse transcription using ReverTra Ace qPCR RT Master Mix (Toyobo). The AeNobo coding region was amplified from Ae. aegypti cDNA using PCR. The forward primer 5′-ATGTCCAAACCGGTGCTGTATTAC-3′ and the reverse primer 5′-CTATTTTTTCATTACAGCATGAAGTCTC-3′ were used for touchdown PCR. 
Touchdown PCR was conducted as follows: first, denaturation was performed for 10 s at 98 °C, followed by annealing for 30 s at 68 °C and extension for 1 min at 68 °C for 13 cycles. Next, denaturation was performed for 10 s at 98 °C, followed by annealing for 30 s at 55 °C and extension for 1 min at 68 °C for 30 cycles. The PCR product was ligated into pBluescript SK(-) to verify the AeNobo sequence. After sequence verification, AeNobo was amplified from the vector by PCR using KOD-plus Neo polymerase (Toyobo) under the following conditions: pre-denaturation at 94 °C for 2 min, then denaturation at 98 °C for 10 s and extension at 68 °C for 30 s for 40 cycles. The product was extracted and ligated into pCold III (Takara Bio) for expression of the AeNobo-WT protein in Escherichia coli . pCold III plasmids expressing AeNobo-E113A and AeNobo-F39L were constructed using the KOD-Plus-Mutagenesis Kit (Toyobo). The following primers were used: E113A-F (5′-CGTCGATCGTAATGCGAGGCTTGATC-3′) and E113A-R (5′-CTCTTTGGAACAGAACGGCATTGTTG-3′) for AeNobo-E113A, and F39L-F (5′-AGAGAGAGAACATCTTTTGGAAG-3′) and F39L-R (5′-AAGAGGCGAACCAGTTTGAGTTC-3′) for AeNobo-F39L. We conducted inverse PCR using KOD-plus polymerase, the pCold III/AeNobo-WT plasmid, and the primer pairs described above under the following conditions: pre-denaturation at 94 °C for 2 min, then denaturation at 98 °C for 10 s and extension at 68 °C for 6 min for 6 cycles. The PCR products were treated with Dpn I at 37 °C for 1 h, followed by self-ligation using T4 polynucleotide kinase and Ligation High. After transformation of DH5α bacteria with the ligation products, we extracted plasmid DNA from the colonies and verified their sequences to confirm that the appropriate point mutations had been introduced. Protein expression and purification Recombinant DmNobo protein was produced using an E. coli expression system as previously described [ 29 , 30 ]. Similarly, recombinant AeNobo protein was produced using an E. 
coli expression system as follows: the pCold III-AeNobo plasmid was incubated with E. coli BL21 Star (DE3) cells (Thermo Fisher Scientific) for 30 min at 4 °C for transformation. The transformant was plated on Luria-Bertani (LB) medium supplemented with 100 μg/mL ampicillin and incubated at 37 °C overnight. Next, a bacterial colony from the plate was inoculated into 200 mL of LB supplemented with 100 μg/mL ampicillin (LB-amp medium) and shaken at 37 °C overnight for preculture. The preculture was transferred to 6 L of LB-amp medium for the main culture and shaken at 37 °C. After the optical density of the culture reached 1.0, Nobo protein expression was induced by incubation with 0.1 mM isopropyl β-D-1-thiogalactopyranoside at 15 °C overnight. Next, the bacterial cells were harvested via centrifugation at 4,000 × g for 15 min. The bacterial pellet was stored at −80 °C. The pellet from the 3-L culture was suspended in a lysis buffer (140 mM NaCl, 20 mM Tris-HCl at pH 8.0, and 1 mM dithiothreitol [DTT]). The cells were disrupted via sonication for 2 min at 70% amplitude with output 7 using an ULTRA5 HOMOGENIZER VP-305 (TAITEC) on ice. The soluble lysate was fractionated using centrifugation at 35,000 × g for 30 min. The supernatant was mixed with 10 mL of Glutathione Sepharose 4B beads (Cytiva) for 1 h at 4 °C for glutathione affinity purification. The beads were then collected and washed in lysis buffer. Proteins bound to the beads were eluted using 50 mL of elution buffer (10 mM GSH, 140 mM NaCl, 20 mM Tris-HCl at pH 8.0, and 1 mM DTT). Next, the eluent was concentrated to 5 mL and fractionated using size exclusion chromatography on a HiLoad Superdex200 26/60 column (Cytiva) equilibrated with size exclusion buffer (150 mM NaCl, 25 mM Tris-HCl at pH 8.0, and 5 mM DTT) at a flow rate of 1 mL/min. The purity of the fractions was evaluated using sodium dodecyl sulfate-polyacrylamide gel electrophoresis followed by Coomassie Brilliant Blue staining. 
Peak fractions were collected, and their buffer was replaced with another buffer (150 mM NaCl, 25 mM Tris-HCl at pH 8.0, 5 mM DTT, 10 mM GSH) by ultrafiltration conducted twice with an Amicon Ultra-15 30,000 MWCO device (Merck); proteins were then concentrated to 45 mg/mL. Protein concentration was measured by spectrophotometry using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific) with an extinction coefficient (ε 280 ) of 0.852 M −1 cm −1 . Finally, the protein was stored at −80 °C. Measurement of specific activity of AeNobo protein in vitro In vitro GST assays using 3,4-DNADCF were performed as described previously [ 29 ]. The stock solutions of AeNobo-WT, AeNobo-E113A, and AeNobo-F39L were 174.6, 227.8, and 291.4 ng/mL, respectively, in solution A (2 mM GSH, 100 mM sodium phosphate buffer at pH 6.5, 0.01% Tween 20). Decreasing concentrations of AeNobo-WT, AeNobo-E113A, and AeNobo-F39L, ranging from 174.6 to 3.0 ng/mL, from 227.8 to 4.0 ng/mL, and from 291.4 to 5.0 ng/mL, respectively, were prepared by 2/3-fold serial dilution with solution A. The AeNobo dilution series was mixed with an equal volume of solution B (100 mM sodium phosphate buffer at pH 6.5, with 2 mM 3,4-DNADCF in 0.2% dimethyl sulfoxide (DMSO) as a co-solvent) in each well of a 96-well plate to initiate the catalytic reaction of AeNobo. In these wells, the final concentrations of AeNobo-WT, AeNobo-E113A, and AeNobo-F39L ranged from 87.3 to 1.5 ng/mL, from 113.9 to 2.0 ng/mL, and from 145.7 to 2.5 ng/mL, respectively. The glutathione-conjugated product was excited at a 485 nm wavelength, and the fluorescence intensity at a 535 nm or 538 nm wavelength ( F measured ) was measured every 30 s for 20 min using an Infinite 200 PRO instrument (Tecan) or a Fluoroskan Ascent™ FL (Thermo Fisher Scientific). The specific activity of the AeNobo enzymes was determined as previously described [ 30 ]. 
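The 2/3-fold dilution arithmetic above can be reproduced in a few lines of Python. Note that the number of dilution steps (10) is our inference from the reported endpoint concentrations and is not stated explicitly in the text:

```python
def dilution_series(stock_ng_ml, factor=2/3, steps=10):
    """Concentrations produced by `steps` successive `factor`-fold dilutions of a stock."""
    return [stock_ng_ml * factor ** i for i in range(steps + 1)]

for name, stock in [("AeNobo-WT", 174.6), ("AeNobo-E113A", 227.8), ("AeNobo-F39L", 291.4)]:
    series = dilution_series(stock)
    # Mixing 1:1 with substrate solution B halves every concentration in the well
    in_well = [c / 2 for c in series]
    print(f"{name}: stock {series[0]:.1f} -> {series[-1]:.1f} ng/mL; "
          f"in-well {in_well[0]:.1f} -> {in_well[-1]:.1f} ng/mL")
```

With 10 steps, the endpoints match the stated ranges to rounding (e.g., 174.6 × (2/3)^10 ≈ 3.0 ng/mL, halving to ≈ 1.5 ng/mL in the well).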
GST activity inhibition assay The IC 50 value was measured using an in vitro assay system as described previously [ 29 ]. A dilution series of compounds, ranging from 2.5 mM to 0.127 μM for DMG and from 2.5 mM to 4.9 μM for other compounds, was prepared by 2-fold serial dilution in DMSO. Five microliters of each diluted compound solution was mixed with 245 μL of solution A (100 mM sodium phosphate buffer at pH 6.5, 0.01% Tween 20, 2 mM GSH, and 50 ng/mL AeNobo-WT, 100 ng/mL AeNobo-E113A, or 200 ng/mL AeNobo-F39L). One hundred microliters of the mixture was dispensed into wells of a 96-well plate. One hundred microliters of solution B (0.2 μM 3,4-DNADCF and 100 mM sodium phosphate buffer at pH 6.5) was added to each well. In summary, the final reaction system comprised 100 mM sodium phosphate buffer (pH 6.5), 1 mM GSH, 0.005% Tween 20, 0.1 μM 3,4-DNADCF, and 25 ng/mL AeNobo-WT, 50 ng/mL AeNobo-E113A, or 100 ng/mL AeNobo-F39L protein. The fluorescence intensity derived from 4-GS-3-NADCF, a product of this reaction, was measured for 3 min. IC 50 values were estimated as described previously [ 30 ]. The enzymatic assays under each condition were performed at least twice independently. High-throughput screening of 9600 compounds To identify inhibitors of AeNobo enzymatic activity, high-throughput screening was performed as described in a previous study [ 29 ]. Briefly, a core library of 9600 compounds obtained from the Drug Discovery Initiative, The University of Tokyo, was used for screening. In this screen, the enzymatic activity of AeNobo was detected using 3,4-DNADCF [ 29 ]. First, 1 μL of solution A (11.2 ng/mL AeNobo protein, 100 mM sodium phosphate at pH 6.5, 2 mM GSH, and 0.005% Tween 20) was dispensed into each well of a 1536-well plate (1536 Black SV/NB/FI, #784900, Greiner Bio-One) together with 0.01 μL of each compound at a concentration of 2 mM. 
Then, 1 μL of solution B (4 μM 3,4-DNADCF, 100 mM sodium phosphate at pH 6.5, and 0.005% Tween 20) was added into each well. In summary, the reaction system comprised 5.6 ng/mL AeNobo protein, 2 μM 3,4-DNADCF, 1 mM GSH, 0.005% Tween 20, and 100 mM sodium phosphate at pH 6.5, together with each compound at 10 μM. The plate was incubated for 30 min at 25–27 °C, and then 2 μL of 10 mM N-ethylmaleimide was added into each well to stop the reaction. In the first screen, 90 compounds that inhibited the enzymatic activity of AeNobo by more than 50% were selected. In the second screen, we assayed the IC 50 values of these compounds against the enzymatic activity of AeNobo. Compounds with IC 50 values lower than 10 μM were defined as hit compounds. Crystallization A 100 mM GSH stock solution was prepared in a buffer composed of 150 mM NaCl, 25 mM Tris-HCl at pH 8.0, and 5 mM DTT. The GSH stock solution was diluted to 30 mM GSH by adding AeNobo protein solution for co-crystallization. The AeNobo protein solution was centrifuged at 13,500 × g for 30 min to remove protein aggregates. An initial crystallization assay was performed using a Protein Crystallization System (PXS) [ 61 ] following the sitting drop vapor diffusion method with 0.2 μL of protein solution and one of the reservoir solutions from the following kits: Crystal Screen 1 & 2 (Hampton Research, Aliso Viejo, CA, USA), Index (Hampton Research), PEGIon (Hampton Research), PEGIon2 (Hampton Research), Wizard I & II (Molecular Dimensions, Suffolk, UK), PEGs II Suite (Qiagen), Protein Complex Suite (Qiagen), Stura FootPrint Screen (Molecular Dimensions), and MembFac (Hampton Research) at 20 °C. Under these conditions, crystals formed only when the reservoir solution was composed of 30% (w/v) PEG 4000, 0.1 M Tris-HCl at pH 8.5, and 0.2 M magnesium chloride. 
The conditions were optimized using the hanging drop vapor diffusion method, through which crystals formed in 1-μL drops, each containing 45 mg/mL AeNobo protein and a reservoir solution (32.5% (w/v) PEG 4000, 0.1 M Tris-HCl, pH 7.5, 0.5 M calcium chloride) at 20 °C. To obtain structures complexed with flavonoids, AeNobo crystals were soaked in a 30 mM DMG suspension in the reservoir solution for 1 day at 20 °C. As luteolin did not completely dissolve in the reservoir solution at 30 mM, AeNobo crystals were instead soaked in a reservoir solution saturated with luteolin for 1 day at 20 °C. X-ray crystallography Crystals were soaked in a cryoprotectant solution (30% (w/v) polyvinylpyrrolidone K15, average molecular weight 10,000 (Tokyo Chemical Industry, P0471), in reservoir solution), picked with cryo-loops (MiTeGen), flash frozen in liquid nitrogen, and packed in Uni-pucks (Molecular Dimensions). X-ray diffraction experiments for crystals of the AeNobo-GSH, AeNobo-GSH-daidzein, AeNobo-GSH-luteolin, and AeNobo-GSH-DMG complexes were performed at beamlines BL-17A, BL-5A, NE-3A, and BL-5A, respectively, at the Photon Factory, High Energy Accelerator Research Organization (KEK), Tsukuba, Japan. The collected datasets were processed and scaled using XDS (RRID:SCR_015652) [ 62 ] and AIMLESS (RRID:SCR_015747) [ 63 ], respectively. Space groups were determined using POINTLESS (RRID:SCR_014218) [ 64 ]. Phases for AeNobo-GSH were calculated using the molecular replacement method with the DmNobo structure (PDB ID: 6KEM) [ 37 ] as a template, and those for AeNobo-GSH-daidzein, AeNobo-GSH-luteolin, and AeNobo-GSH-DMG were calculated using the AeNobo-GSH structure. Model building and crystallographic refinement were performed using COOT (RRID:SCR_014222) and PHENIX.REFINE (RRID:SCR_016736) [ 65 , 66 ]. The crystallographic statistics are summarized in Additional file 2 : Table S3. 
MD simulation The structures of AeNobo-GSH-daidzein and AeNobo-GSH-DMG were processed to assign bond orders and add hydrogen atoms. The ionization states of each compound and GSH at pH 7.0 ± 2.0 were predicted using Epik [ 67 ], and H-bond optimization was conducted using PROPKA [ 68 ]. Energy minimization was performed in Maestro (Schrödinger) using the OPLS3e force field [ 69 ]. Each E113A mutation model for MD simulation was constructed in Maestro and treated using the same protocol. MD simulations were prepared using the Molecular Dynamics System Setup Module of Maestro. All structures were subjected to energy minimization and placed in an orthorhombic box with a buffer distance of 10 Å to create a hydration model; the SPC water model [ 70 ] was used for hydration. NaCl (0.15 M) served as the counter ion to neutralize the system. The MD simulations were performed using the Desmond software, version 2.3 (Schrödinger) (RRID:SCR_014575). The cutoff radius for van der Waals interactions, the time step, the initial temperature, and the pressure of the system were set to 9 Å, 2.0 femtoseconds, 300 K, and 1.01325 bar, respectively. The sampling interval during the simulation was set to 100 ps. Finally, we performed MD simulations using the NPT ensemble for 1 μs. All trajectories from the MD simulations were aligned to the initial structure using the protein Cα atoms, and ligand RMSD values were calculated based on ligand heavy atoms. Mosquito rearing The Ae. aegypti strain used in this study, which originated from the Liverpool strain, was a gift from Ryuichiro Maeda (Obihiro University of Agriculture and Veterinary Medicine). Five hundred pupae were harvested in a plastic cup and placed within a nylon mesh cage (bottom 27 cm × 27 cm, top 25 cm × 25 cm, height 27 cm). A 50-mL glass flask containing 10% sucrose solution, fitted with a filter paper wick (#1001-125, Whatman), was placed in the nylon mesh cage. The cage housing the emerged adults was kept in an incubator (MIR-254-PJ, Panasonic Co.) 
set at 27 °C with humidity over 90% under a standard 12 h:12 h light-dark cycle. The sucrose solution was changed every 3–4 days. Adult females (7–14 days after eclosion) were blood fed and allowed to lay eggs on a wet filter paper 3 to 4 days after engorgement. Eggs laid on the filter paper were washed once with RO water and kept in a plastic container with wet paper for further egg maturation. After a week, the lid of the container was left slightly open to slowly dry the eggs for storage. Aedes aegypti larvicidal assay The dried eggs on filter papers were soaked in distilled water. Three hours after soaking, the first instar larvae were transferred to 30 mL of fresh distilled water in a 50-mL plastic cup with a lid containing air holes. As food for the Ae. aegypti larvae, 2 mg of powdered goldfish food (Hikari Medium Grain, Kyorin Co., Ltd.) was added to the rearing water in each cup. Each cup contained 30 mL of rearing water with 1 ppm, 10 ppm, or 100 ppm of compound and 0.1% ethanol at the final concentrations, into which 20 larvae were placed. Twenty-four hours after the addition of flavonoids, the numbers of living and dead larvae were recorded. Larvicidal assays under each condition were performed 5 times independently. The larval instars 24 h after the addition of DMG were identified by measuring the transverse diameter of the Ae. aegypti larval head [ 39 ]. Photographs of the whole bodies of larvae were taken, and the head diameter was then measured using ImageJ software [ 71 ]. Reverse transcription-quantitative PCR (RT-qPCR) Control and DMG-treated larvae were prepared as described in the section " Aedes aegypti larvicidal assay" above. Twenty-four hours after the addition of flavonoids, 20–30 Ae. aegypti larvae per sample were homogenized in RNAiso Plus (Takara Bio Inc.) to extract total RNA. RNA was reverse transcribed to synthesize cDNA using ReverTra Ace qPCR RT Master Mix with gDNA Remover (Toyobo). 
cDNA samples were used as templates for qPCR using THUNDERBIRD SYBR qPCR Mix (Toyobo) on a Thermal Cycler Dice Real Time System (Takara Bio Inc.). The mRNA level of E74B was normalized to that of an endogenous control, the ribosomal protein 49 gene ( rp49 ), and the relative fold change was calculated. Normalized E74B expression levels were compared using the ΔΔCt method. The primers for E74B were AeaegE74B-Fwd (5′-GCCTTGGAATTCCACTCACAAA-3′) and AeaegE74B-Rev (5′-GGTCTGGTGAACGGACTACACC-3′). The primers for rp49 were Aeaeg-rp49-Fwd (5′-TCGGCAGTCTTGCCAACCCTGA-3′) and Aeaeg-rp49-Rev (5′-AGCTTATCATACCGACGTTCCGAA-3′). Drosophila melanogaster survival assay A 0.1% (w/v) DMG stock solution was prepared in 100% DMSO. Ten microliters of the DMG stock solution or 10 μL of DMSO alone was mixed with 10 g of standard cornmeal medium and 1 mL of autoclaved water. The volume of the mixed food was approximately 10 mL; therefore, the food contained 0.1% DMSO with or without 10 ppm DMG. Two grams of the food were dispensed into each 12-mL plastic vial (Sarstedt). Wild-type Canton-S D. melanogaster flies were allowed to lay eggs on grape juice agar with yeast paste for 12 h. After egg collection, the embryos were reared at 25 °C for 24 h, and then 20 1st instar larvae were transferred to each vial and reared at 25 °C. The numbers of pupae were counted daily, and the timing of pupation was recorded. Availability of data and materials The X-ray data and coordinates presented in this paper are deposited in the Protein Data Bank under the following PDB IDs: 7EBT, 7EBU, 7EBV, and 7EBW. Other datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. 
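The ΔΔCt relative-quantification step described in the RT-qPCR section can be sketched as follows. The Ct values below are hypothetical placeholders for illustration, not data from the study:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative fold change by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control)."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values (NOT data from the study): E74B vs. rp49 in
# DMG-treated larvae relative to the ethanol-only control
fold = relative_expression(ct_target=26.0, ct_ref=18.0,
                           ct_target_ctrl=24.5, ct_ref_ctrl=18.0)
print(f"E74B fold change vs. control: {fold:.2f}")  # values < 1 indicate reduced expression
```

Because each Ct unit represents one doubling of template, normalizing to rp49 and then to the control condition converts Ct differences into a fold change on a linear scale.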
Abbreviations AeNobo: Aedes aegypti Noppera-bo; DMG: desmethylglycitein; DmNobo: Drosophila melanogaster Noppera-bo; DMSO: dimethyl sulfoxide; 3,4-DNADCF: 3,4-dinitrobenzamidedichlorofluorescein; ER: estrogen receptor; GSH: glutathione; GST: glutathione S -transferase; IC 50 : half-maximal inhibitory concentration; IGR: insect growth regulator; LD 50 : 50% lethal dose; MD: molecular dynamics; Nobo: Noppera-bo; PI3K: phosphatidylinositol-3 kinase; qPCR: quantitative polymerase chain reaction; RMSD: root mean square deviation; RT: reverse transcription; WT: wild-type
When most people think of flavonoids, natural compounds found in plants and other organisms, their nutritional benefits probably come to mind first. But these compounds may have another health benefit: Researchers from Japan have discovered that certain flavonoids inhibit development in mosquitoes that can spread disease. In a study published this month in BMC Biology, researchers from the University of Tsukuba have revealed that particular flavonoids inhibit an enzyme involved in the formation of a key insect hormone in the yellow fever mosquito, Aedes aegypti. Mosquito-borne diseases are a major component of the worldwide burden of infectious disease in humans. Aedes aegypti is from a group of mosquitoes that can spread a number of viruses that cause infectious diseases in humans, including dengue fever, yellow fever, and Zika. In the wild, A. aegypti has begun to show resistance to insecticides, revealing a need for new types of pesticides for targeting this species. "Flavonoids—a type of metabolic product from plants, fungi, and other organisms—can interfere with insect development and physiology, and have the ability to kill larvae of A. aegypti," says senior author of the study, professor Ryusuke Niwa. "Flavonoids are thought to be relatively safe for the environment, as well as human and animal health." To investigate how flavonoids can kill mosquito larvae, the researchers analyzed the activities of several flavonoids in A. aegypti, including daidzein, which has previously been identified as a larvicide for this species. The team found that the flavonoids inhibit the activity of glutathione S-transferase Noppera-bo (Nobo); in A. aegypti, Nobo is an enzyme involved in the biosynthesis of the hormone ecdysone. Ecdysone is an insect steroid hormone, or ecdysteroid, required for the initiation of metamorphosis and regulation of molting. 
Because ecdysteroids are key to the life cycle of insects, chemical inhibitors of enzymes involved in making these hormones, including Nobo, are thought to be insect growth regulators (IGRs) that disrupt development in insects without affecting other organisms. "We also discovered that, of the flavonoids we tested, desmethylglycitein (DMG) was the most efficient Nobo inhibitor in this species, even more so than daidzein," says professor Niwa. "DMG showed larvicidal activity against A. aegypti, and indicated promise for DMG-based insecticides in the future." The high prevalence of resistance in mosquitoes to current insecticides in some areas urgently requires the development of new insecticides with different chemical structures and targeting pathways from those currently in use. The results of this study offer a new avenue for developing new IGRs that are environmentally friendly and can be used for the control of mosquito populations by inhibiting the biosynthesis of ecdysteroids. The article, "Molecular action of larvicidal flavonoids on ecdysteroidogenic glutathione S-transferase Noppera-bo in Aedes aegypti," was published in BMC Biology.
10.1186/s12915-022-01233-2
Medicine
Study shows distinct types of cerebellar neurons control motor and social behaviors
Meike E. van der Heijden et al, Glutamatergic cerebellar neurons differentially contribute to the acquisition of motor and social behaviors, Nature Communications (2023). DOI: 10.1038/s41467-023-38475-9 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-38475-9
https://medicalxpress.com/news/2023-05-distinct-cerebellar-neurons-motor-social.html
Abstract Insults to the developing cerebellum can cause motor, language, and social deficits. Here, we investigate whether developmental insults to different cerebellar neurons constrain the ability to acquire cerebellar-dependent behaviors. We perturb cerebellar cortical or nuclei neuron function by eliminating glutamatergic neurotransmission during development, and then we measure motor and social behaviors in early postnatal and adult mice. Altering cortical and nuclei neurons impacts postnatal motor control and social vocalizations. Normalizing neurotransmission in cortical neurons but not nuclei neurons restores social behaviors while the motor deficits remain impaired in adults. In contrast, manipulating only a subset of nuclei neurons leaves social behaviors intact but leads to early motor deficits that are restored by adulthood. Our data uncover that glutamatergic neurotransmission from cerebellar cortical and nuclei neurons differentially control the acquisition of motor and social behaviors, and that the brain can compensate for some but not all perturbations to the developing cerebellum. Introduction Cerebellar injury in preterm infants is often associated with movement disorders, language impairments, and social deficits 1 , 2 . Preterm injuries affect the exponential phase of granule cell proliferation, although they often also alter the early development of all glutamatergic neurons in the cerebellum. The resulting defects lead to long-term changes in gray matter volume in the cerebellar cortex that are correlated to the severity of neural deficits in infants 3 , 4 , 5 , 6 . Cerebellar cortical injuries further impact the development and function of downstream cerebellar nuclei neurons, which serve as the predominant output from the cerebellum and link it to the rest of the motor network 7 . Accordingly, cerebellar defects can also impair the development and function of the cerebral cortex 8 , 9 . 
The combined injury to the cerebellum and neocortex may help explain the broad neural deficits observed in many infants that experience cerebellar injury during the perinatal period. Accumulating evidence suggests that the site of injury within the developing cerebellum may determine behavioral outcomes. When damage is confined to the cerebellar nuclei neuron axons that project and travel through the superior cerebellar peduncle, affected patients (typically children) can develop posterior fossa syndrome, which is hallmarked by ataxia, mutism, and changes in social interactions 10 , 11 . Intriguingly, over time, patients largely recover these impaired neural functions with only minor residual symptoms, most commonly persisting in their motor coordination 12 , 13 . Patients and apes with lesions localized to the cerebellar nuclei mainly demonstrate motor symptoms with limited recovery following the injury 14 , 15 . When the cerebellar cortex is the primary site of the lesion, deficits in social cognition, language, and motor performance arise, but they often persist following the initial injury with limited recovery, especially in the non-motor domains affecting cognition, sociability, language, and affect 1 , 2 . Similarly, studies in rodents have provided compelling evidence that disrupting cerebellar cortical function during development can lead to motor impairments, altered vocalizations, and social deficits 16 , 17 , 18 , 19 , 20 , 21 . These clinical outcomes illustrate that while damage to the cerebellar cortex or to the downstream cerebellar nuclei neurons during infancy is sufficient to impair motor function, language, and social behavior, there are instances when the cerebellum can remarkably overcome perturbations and restore functions. 
Importantly, the degree of compensation may be linked to the cerebellar neurons that are primarily affected by the lesion, suggesting a unique role for each cerebellar neural subtype in the regulation of cerebellar-associated behaviors. These studies have inspired the need for a deeper examination of how the relatively few neuron types in the cerebellum contribute to a wide range of motor and non-motor functions. However, it remains largely unknown where in the circuit cerebellar-associated behaviors originate, whether the same neuronal subtypes contribute equally to the acquisition of these diverse behaviors, and whether the perturbation of all neuronal subtypes can be equally compensated for during development. Additionally, it remains specifically unexplored how the cerebellar nuclei contribute to the acquisition of different behaviors during postnatal life. Dissecting how cerebellar cortical and cerebellar nuclei neurons contribute to the acquisition of different cerebellar-dependent behaviors requires the use of non-invasive and cell-type specific manipulations during circuit development. Fortunately, the cellular architecture of the cerebellar anlage lends itself to precise genetic approaches. The embryonic cerebellum arises from two distinct lineages that interact to form the cerebellar circuits 22 . The unique identities of the lineages can be used to manipulate GABAergic or glutamatergic cerebellar neurons. The Ptf1a lineage that originates in the ventricular zone gives rise to GABAergic neurons, including GABAergic cerebellar nuclei neurons and Purkinje cells 23 . In contrast, the Atoh1 lineage is derived from the rhombic lip and gives rise to glutamatergic neurons, including glutamatergic granule cells and cerebellar nuclei neurons (Fig. 1a, b, c ) 24 , 25 . Granule cells are the most abundant neuron type in the cerebellum, and they provide the predominant synaptic input to Purkinje cells 26 . 
In turn, Purkinje cells send convergent projections to their principal targets: glutamatergic and GABAergic cerebellar nuclei neurons 27 . The cerebellar nuclei neurons form the main output of the cerebellum through parallel pathways that project throughout the brain and spinal cord 28 . Fig. 1: Conditional Vglut2 deletion from Atoh1 lineage neurons. a Schematic showing how conditional deletion of VGluT2 only affects fast neurotransmission in VGluT2-expressing neurons. b Schematic of cerebellar connectivity and vesicular transporter expression for the glutamate subtypes (VGluT1 and VGluT2) in the cerebellar circuit in P7 control mice and Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. c Same as b , but in adult mice. For a – c VGluT1 (blue); VGluT2 (orange); Control mice (black); Atoh1 Cre/+ ;Vglut2 fl/fl mice (reddish purple). d Expression of Vglut2 (green) with DAPI (purple, left) or tdTomato (purple, right) in the cerebellar nuclei (CN) and granule cell layer (gcl) of adult Atoh1 Cre/+ ;Rosa26 lsl-tdTomato mice. e Expression of Vglut2 (green) and DAPI (purple) in the CN and gcl of adult control mice and Atoh1 Cre/+ ;Vglut2 fl/fl mice. d , e insets are 125 by 125 µm high magnification images. f VGluT2 + synapses (green) and DAPI (purple) in the molecular layer (ml) and gcl of P7 control and Atoh1 Cre/+ ;Vglut2 fl/fl mice, e-gcl = external granule cell layer containing actively proliferating granule cell precursor cells. g VGluT2 + synapses (green) and DAPI (purple) in the ml and gcl of adult control mice and Atoh1 Cre/+ ;Vglut2 fl/fl mice. d – e are shown at the same scale. f , g are shown at the same scale. Images are representative of N = 3 mice. We previously showed that we could take advantage of the embryonic Atoh1 lineage to specifically manipulate the neurogenesis of glutamatergic cerebellar neurons 29 . 
In the central nervous system, Atoh1 functions as a pro-neural gene that is necessary for the neurogenesis of glutamatergic neurons; these neurons are born along the most dorsal portion of the developing brainstem and spinal cord 30 , 31 . Studies of the contribution of Atoh1 lineage neurons to behavior have relied mainly on conditional knockout mice because Atoh1 lineage neurons are essential for respiration, and as a consequence, Atoh1 null mice die at birth 24 , 32 . We recently showed that preventing the neurogenesis of glutamatergic, Atoh1 lineage granule cells and cerebellar nuclei neurons impairs the early postnatal acquisition of cerebellar-dependent behaviors, including motor coordination and vocalizations in a social isolation paradigm 29 . A primary finding in our study was that the neurogenesis of granule cells was also essential for the maturation of Purkinje cell activity. However, by using an approach that eliminated neurogenesis, we could not delineate whether the resulting behavioral deficits were due to granule cell loss, abnormal Purkinje cell signaling, glutamatergic cerebellar nuclei neuron loss, or a combination thereof. To investigate whether the motor impairments and social deficits in Atoh1 conditional knockout mice are due to the loss of functional output from glutamatergic cerebellar neurons, we set out to examine Atoh1 lineage neuron function using a gene silencing approach. Specifically, we deleted the gene encoding a vesicular transporter for glutamate ( Vglut2 ) to eliminate the transport of glutamate into the synaptic vesicles of Atoh1 lineage neurons. This manipulation results in an effective silencing of fast neurotransmission in the genetically targeted neurons (Fig. 1a ) 33 . Then, to better define how VGluT2-mediated neurotransmission from the nuclei neurons contributes to cerebellar-dependent behaviors, we deleted Vglut2 from Ntsr1 -expressing cells, an approach that predominantly targets the glutamatergic nuclei neurons 34 . 
In addition, we leveraged an interesting developmental transition from VGluT2- to VGluT1-mediated neurotransmission that uniquely occurs in granule cells to investigate whether restoration of cerebellar cortical function improves neural deficits (Fig. 1a, b, c ). This allowed us to assess whether the cerebellar-dependent motor and social deficits are improved following the natural restoration of granule cell neurotransmission in the developing cerebellar circuit. Finally, we used Ntsr1 Cre ;Vglut2 fl/fl conditional knockout mice to investigate whether neural functions are restored by developmental compensation regardless of changes in vesicular subtype switching. Together, this combination of genetic approaches provides us with in vivo cell-type specific methods for precisely lesioning neural function during development. Using this strategy, we investigated how different cerebellar neural subtypes contribute to the acquisition of diverse cerebellar-dependent behaviors and demonstrated the exceptional ability of the cerebellum to regain neural functions after developmental perturbations. Results Conditional Vglut2 deletion from the Atoh1 lineage affects early postnatal granule cells and glutamatergic cerebellar nuclei neurons throughout life We selectively deleted the vesicular transporter for glutamate ( Vglut2 ) from the Atoh1 lineage, which resulted in a lack of VGluT2 protein in pre-synaptic vesicles of the Atoh1 lineage, Vglut2 -expressing neurons 33 . As a result, when an action potential arrives at the synapse, synaptic vesicles fuse to the pre-synaptic membrane but do not functionally affect the postsynaptic cells because no neurotransmitter is released into the synaptic cleft (Fig. 1a ). In the current genetic manipulation, glutamatergic cerebellar nuclei neurons are affected throughout life (Fig. 1b, c ). 
In contrast, granule cells express Vglut2 only until the second postnatal week in mice 35 , whereafter they switch their glutamate transporter to the type one transporter ( Vglut1 ), and therefore the effects of our genetic manipulation do not actively and directly continue through adulthood. Unlike Atoh1 null and cerebellum-specific Atoh1 conditional knockout mice 24 , 29 , 32 , Atoh1 Cre/+ ;Vglut2 fl/fl mice were viable and survived into adulthood, likely because Atoh1 Cre/+ ;Vglut2 fl/fl mice had normal respiratory rhythms (Supplementary Fig. 1 ). Indeed, in situ hybridization (ISH) assays in adult animals showed that nearly all cerebellar nuclei neurons from the Atoh1 lineage express Vglut2 in adulthood (in Atoh1 Cre/+ ;Rosa lsl-tdTomato/+ mice 95 ± 0.8% tdTomato + neurons are also Vglut2 + , N = 3 mice, n = 18 sections), but no Vglut2 expression was found in the granule cell layer (Fig. 1d ). We validated that our genetic manipulation considerably reduced Vglut2 expression as Vglut2 mRNA intensity was significantly reduced in Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice compared to control mice as measured by an ISH assay (normalized signal intensity; Control: 1 ± 0.08, N = 3, n = 21; Atoh1 Cre/+ ;Vglut2 fl/fl : 0.24 ± 0.04, N = 3, n = 16; LMM: p < 0.001) (Fig. 1e ). Immunohistochemical staining for the VGluT2 protein confirmed that the genetic manipulation selectively reduced the proportion of VGluT2 + synapses in the molecular layer of P7 Atoh1 Cre/+ ;Vglut2 fl/fl mice throughout the cerebellar cortex (Fig. 1f ). We visualized VGluT2 + synapses in crus I and lobule V because these cerebellar regions have been associated with social and motor functions, respectively 19 , 36 . In contrast, we did not detect differences in the quantity and distribution of VGluT2 + synapses between adult control and conditional knockout mice (Fig. 1g ) since, at this age, only the climbing fibers express VGluT2 in the molecular layer 21 , 35 . 
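The "normalized signal intensity" quantification above (control group mean set to 1, per-section values compared across genotypes) can be illustrated in a few lines. This is a minimal stdlib-only sketch, not the authors' analysis code; the function name and the per-section intensity values are hypothetical.

```python
import statistics

def normalize_to_control(control_vals, ko_vals):
    """Express per-section ISH intensities relative to the control mean,
    so that the control group averages 1 (matching the reported
    'normalized signal intensity' convention)."""
    baseline = statistics.mean(control_vals)
    return ([v / baseline for v in control_vals],
            [v / baseline for v in ko_vals])

# Hypothetical per-section intensities (arbitrary units), for illustration only
ctrl_n, ko_n = normalize_to_control([820, 790, 810], [195, 205, 190])
print(round(statistics.mean(ctrl_n), 2), round(statistics.mean(ko_n), 2))
```

By construction the control mean is 1, so a knockout mean well below 1 (0.24 in the paper) directly reads as the fraction of control signal remaining.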
The lower density of VGluT2 + synapses in the granule cell layer of crus I relative to lobule V is in line with previous observations of differential innervation of spinocerebellar mossy fibers across the cerebellar cortex 37 , 38 . These results demonstrated the specificity of the conditional Vglut2 deletion from the Atoh1 lineage in the cerebellum and established the temporal dependence of VGluT2 in the Atoh1 lineage cerebellar neurons. Additionally, we found that Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice have no major differences in zonal stripe patterning of VGluT2 + mossy fiber synapses in the anterior or posterior zones of the cerebellum, regions into which the spinocerebellar projections are targeted (Supplementary Fig. 2 ) 37 , 38 , 39 . This is in line with previous reports that Atoh1 lineage neurons in the spinal cord are not the predominant source of spinocerebellar projections 40 , and further suggests that loss of neurotransmission from granule cells and nuclei neurons does not affect overall zonal stripe formation in our conditional knockout mice. Finally, we did not observe changes in cerebellar morphology (Supplementary Fig. 3 ) or interneuron localization (Supplementary Fig. 4 ) in our Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice, confirming that our genetic manipulation is restricted to the elimination of Vglut2 expression in developing granule cells and cerebellar nuclei neurons throughout life. Purkinje cells in early postnatal Atoh1 Cre/+ ;Vglut2 fl/fl mice have abnormal firing activity Next, we sought to confirm that the deletion of Vglut2 from the Atoh1 lineage neurons caused functional deficits in the cerebellar cortex during the early postnatal period. To achieve this, we performed in vivo single-unit recordings from Purkinje cells, which form the predominant downstream target of granule cells and are the sole output of the cerebellar cortex 22 (Fig. 2a ). 
We randomly sampled spiking activity from a wide range of anatomical areas during our recordings, including but not limited to crus I and lobules III/IV/V (Supplementary Fig. 5 ). Purkinje cell firing patterns progress through a considerable series of maturation events during the second postnatal week in mice, in a process that is dependent on the neurogenesis of Atoh1 lineage neurons 29 . During the in vivo recordings, Purkinje cells were identified by their unique spiking activity that consists of two distinct action potential types (Fig. 2b, c ). Purkinje cells can generate simple spikes without synaptic input, and in the intact cerebellar circuit, simple spike firing frequency and patterns are modulated by inputs from local cerebellar cortical GABAergic interneurons and granule cells 41 , 42 . In contrast, complex spikes are mediated by a strong glutamatergic input from climbing fibers that originate in the inferior olive and can be recognized by their large initial waveform that is followed by ~3–5 smaller amplitude spikelets 43 , 44 . Fig. 2: Purkinje cell firing activity is abnormal in P10 Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. a Schematic of an in vivo single-unit extracellular Purkinje cell recording in an anaesthetized mouse. b Representative trace of a Purkinje cell recording in a control mouse (black and blue lines represent complex spikes). c Representative trace of a Purkinje cell recording in an Atoh1 Cre/+ ;Vglut2 fl/fl mouse (reddish purple, blue lines represent complex spikes). In b , c Inferior olive (IO) evoked complex spike (CS) in blue. The y -axis is constant across the panel. The x -axis (timescales) are the same for ( b , c ) . d The frequency of Simple Spikes (SS) is different between control and Atoh1 Cre/+ ;Vglut2 fl/fl mice ( p = 0.005; d = 0.81). 
e No differences were found in SS CV (spike pattern, p = 0.977; d = 0.04), f SS CV2 (spike regularity, p = 0.407; d = 0.26), g CS Frequency ( p = 0.548, d = 0.22), or h CS CV (spike pattern, p = 0.171, d = 0.40). i Complex spikes occurred more regularly in control than in Atoh1 Cre/+ ;Vglut2 fl/fl mice (CS CV2; spike regularity, p = 0.039, d = 0.60). For d – i , large open circles represent the mouse average; small, closed circles represent the cell average; data points from control mice in black, data points from Atoh1 Cre/+ ;Vglut2 fl/fl mice in reddish purple. A linear mixed model analysis with genotype as a fixed variable and mouse number as a random variable was used to test for statistical significance in ( d – i ). Control: N Mice = 5, n Cells = 24 cells; Atoh1 Cre/+ ;Vglut2 fl/fl : N mice = 5, n Cells = 21. Source data and detailed statistical results are available and provided as a Source Data file. Panel ( a ) was adapted from White & Sillitoe, 2017, “Genetic silencing of olivocerebellar synapses causes dystonia-like behavior in mice,” Nature Communications 21 under CC BY 4.0. We postulated that the Vglut2 deletion from the Atoh1 lineage granule cells would reduce simple spike activity. We further investigated whether Vglut2 elimination from granule cells would influence Purkinje cell spike pattern (CV) and regularity (CV2). In line with our hypothesis, we found that loss of Vglut2 from the Atoh1 lineage resulted in a reduction of simple spike firing rate (Fig. 2d ) but no change in simple spike pattern or regularity (Fig. 2e, f ). We did not observe any differences in complex spike firing rate or pattern (Fig. 2g, h ), but there was a slight increase in the irregularity of complex spikes (Fig. 2i ), which may occur through secondary effects resulting from the loss of granule cell signaling and their influence on climbing fiber maturation 29 , 45 , 46 . 
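The three spike-train statistics reported here (firing rate, CV for overall pattern, CV2 for local regularity) have standard definitions that can be computed directly from interspike intervals. A minimal stdlib-only sketch, with the function name ours, assuming a sorted list of spike times in seconds:

```python
import statistics

def spike_train_metrics(spike_times):
    """Firing rate, CV, and CV2 from a sorted list of spike times (s).

    CV  = SD(ISI) / mean(ISI): global irregularity of the spike pattern.
    CV2 = mean of 2*|ISI[i+1] - ISI[i]| / (ISI[i+1] + ISI[i]): local
    regularity of adjacent intervals, less sensitive to slow rate
    drifts than CV.
    """
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    rate = len(isis) / (spike_times[-1] - spike_times[0])
    cv = statistics.stdev(isis) / statistics.mean(isis)
    cv2 = statistics.mean(
        2 * abs(b - a) / (b + a) for a, b in zip(isis, isis[1:]))
    return rate, cv, cv2
```

For a perfectly regular train both CV and CV2 are 0; for a train whose rate drifts slowly, CV grows while CV2 stays small because adjacent intervals remain similar, which is why CV2 is used as the "regularity" measure.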
Together, these results confirm that loss of neurotransmission in granule cells changes cerebellar cortical function. Postnatal developing Atoh1 Cre/+ ;Vglut2 fl/fl mice have disrupted motor function and impaired social vocalizations After validating that our genetic approach was specific in its ability to target developing synapses, we next set out to examine whether genetically silencing Vglut2 -expressing, Atoh1 lineage neurons during development impairs the acquisition of cerebellar-dependent behaviors in the early neonatal period (Fig. 3a ). We investigated the acquisition of motor functions and social behaviors that were also impaired upon blocking the neurogenesis of glutamatergic Atoh1 lineage neurons 29 . Atoh1 Cre/+ ;Vglut2 fl/fl mice in the home cage often displayed abnormal dystonic movements and postures. To test how motor performance was affected in these mice, we tested two postnatally acquired motor reflexes that we previously showed are severely impaired in mice with postnatal dystonia 47 . First, we measured the negative geotaxis reflex by placing the mice head down on a negative slope and timing the latency for them to rotate upward (Fig. 3b ). We found that Atoh1 Cre/+ ;Vglut2 fl/fl mice were less proficient at this reflex and had a longer turn latency than control littermates at P7 and P11 (Fig. 3b ). Second, we measured the surface righting reflex by placing mice on their backs and measuring the time it took for them to turn onto their four paws (Fig. 3c ). We found that Atoh1 Cre/+ ;Vglut2 fl/fl mice were severely impaired in this reflex and required a longer time to turn than littermates at all timepoints measured (Fig. 3c , P7-P11). Finally, we measured social vocalizations in a social isolation paradigm. When separated from the nest, pups vocalize to attract attention from their mother. We found that Atoh1 Cre/+ ;Vglut2 fl/fl mice made fewer calls at P7 and P9 (Fig. 3d ), and their calls were shorter at P7 (Fig. 3e ). 
Altogether, our results confirmed that eliminating glutamatergic neurotransmission from Vglut2 -expressing Atoh1 lineage neurons impaired the acquisition of reflexive motor functions and social vocalizations in P7–P11 mice. Fig. 3: Motor behavior and social behavior are abnormal in early postnatal Atoh1 Cre/+ ;Vglut2 fl/fl mice. a Schematic of circuit modifications in Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. VGluT1 (blue); VGluT2 (orange); Control mice (black); Atoh1 Cre/+ ;Vglut2 fl/fl mice (reddish purple). b The time to turn upward on a negative slope was measured. Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice required a longer time to turn compared to control littermates at P7 ( p < 0.001; d = 0.97) and P11 ( p = 0.015; d = 0.59) but not at P9 ( p = 0.431; d = 0.19). c The time to right themselves onto their four paws was measured. Atoh1 Cre/+ ;Vglut2 fl/fl mice required a longer time to turn compared to control littermates at P7 ( p < 0.001; d = 1.49), P9 ( p < 0.001; d = 1.26), and P11 ( p < 0.001; d = 1.09). d The number of vocalizations after separation from the nest was measured. Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout pups made fewer vocalizations than their control littermates at P7 ( p < 0.001; d = 0.90) and P9 ( p = 0.004; d = 0.69) but not at P11( p = 0.373; d = 0.20). e The duration of vocalizations after separation from the nest was measured. Atoh1 Cre/+ ;Vglut2 fl/fl pups made shorter vocalizations compared to control littermates at P7 ( p = 0.024; d = 0.55), but not at P9 ( p = 0.700; d = 0.10) or P11 ( p = 0.677; d = 0.10). Control: N = 38 (18f/20m); Atoh1 Cre/+ ;Vglut2 fl/fl : N = 30 (17f/13m). Dots represent the means for each mouse, horizontal lines represent the group means, and shaded areas represent the distributions of the data. Data points from control mice in black, and data points from Atoh1 Cre/+ ;Vglut2 fl/fl mice in reddish purple. 
A linear mixed model analysis with genotype as a fixed variable and mouse number as a random variable was used to test for statistical significance in ( b – e ). A post hoc analysis was performed to test for the statistical differences at each time point. Source data and detailed statistical results are available and provided as a Source Data file. Panels ( b , d ) were adapted from Van der Heijden et al., 2022, “Quantification of Behavioral Deficits in Developing Mice With Dystonic Behaviors,” Dystonia 47 under CC BY 4.0. The Atoh1 - Vglut2 intersectional lineage defines a population of midbrain-projecting nuclei neurons and other pre-cerebellar nuclei Previous studies have shown that social behavior may be modulated by the cerebellum through direct projections from glutamatergic cerebellar nuclei to the ventral tegmental area (VTA) 48 and di-synaptic connectivity via the ventral posteromedial thalamus (VPM) to the prelimbic area in the medial prefrontal cortex 36 . Other studies have also suggested that these long-range projections from glutamatergic cerebellar nuclei neurons may mediate non-motor behaviors 27 , 49 . We therefore tested whether the Vglut2 -expressing, Atoh1 lineage neurons make long-range projections to areas previously implicated in non-motor behavior. To investigate the complete population of neurons that were affected by our manipulation, we used an intersectional reporter allele ( Rosa fsf-lsl-tdTomato ) that expresses tdTomato only after FlpO- ( Atoh1 FlpO/+ ) and Cre- ( Vglut2 IRES-Cre ) mediated excision of two stop cassettes. In these mice, only the neurons with a history of Atoh1 and Vglut2 expression express tdTomato (Fig. 4 ). We found tdTomato + projections in all regions known to receive strong inputs from the cerebellar nuclei and which were previously implicated in cerebellar-dependent non-motor function, including the VPM (Fig. 4a, b ) and the VTA (Fig. 4e ). 
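Group comparisons throughout the paper are reported as a linear mixed model p-value paired with a Cohen's d effect size. Fitting the LMM itself needs a stats package (e.g. statsmodels' mixedlm with genotype as the fixed effect and mouse as the grouping variable), but the pooled-SD Cohen's d on per-mouse means can be sketched with the standard library alone. The function name and example latencies below are ours, not from the paper:

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d between two independent groups (e.g. per-mouse mean
    righting latencies for control vs. conditional knockout mice),
    using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)
    var_b = statistics.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b)
                          / (na + nb - 2))
    return abs(statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical per-mouse righting-reflex latencies (s)
print(round(cohens_d([2.0, 3.0, 4.0], [4.0, 5.0, 6.0]), 2))  # 2.0
```

On this scale, the values reported in Figs. 2-3 (d ≈ 0.6-1.5) correspond to medium-to-very-large group separations by the usual Cohen benchmarks.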
We also observed projections to other brain regions that are important for motor behavior, in areas that are known to receive strong inputs from cerebellar nuclei neurons, including the zona incerta (ZI) (Fig. 4c ) and red nucleus (Fig. 4d, e ). Furthermore, we confirmed the presence of tdTomato + neurons in the granule cell layer and cerebellar nuclei (Fig. 4g–j ). We also found tdTomato + axons in the superior and middle cerebellar peduncles, which are the white matter tracts through which cerebellar nuclei neurons send projections to these brain regions (Fig. 4f ). We provide high-power images in the supplemental material for all brain regions with identified tdTomato + axons or cell bodies (Supplementary Fig. 6 ). In conclusion, we demonstrated that our genetic manipulation affected neurons with glutamatergic projections that originate from Atoh1 lineage neurons and terminate in brain regions that are involved in motor function and social behaviors. Thus, we showed that in the Atoh1 lineage, Vglut2 -expressing neurons project directly to brain regions that have been previously implicated as key cerebellar targets for mediating diverse behaviors. Fig. 4: Intersectional lineage labeling of Atoh1, Vglut2 neurons. tdTomato + neuron labeling in Atoh1 FlpO/+ ;Vglut2 IRES-Cre/+ ;Rosa26 fsf-lsl-tdTomato mice. Cell bodies are shown in dark orange, projections are shown in light orange. 
Abbreviations: CoN cochlear nucleus, dscp decussation of dorsal superior cerebellar peduncle, DLL dorsal lateral lemniscus, DN dentate nucleus, eCN external cuneate nucleus, FN fastigial nucleus, GCL granule cell layer, H Hippocampus, IC inferior colliculus, icp inferior cerebellar peduncle, IN interposed nucleus, IO inferior olive, iRt intermediate reticular nucleus, ITR intertrigeminal region, KF Kölliker Fuse, LDT lateral dorsal tegmental nucleus, lRt lateral reticular nucleus, mcp medial cerebellar peduncle, mRt midbrain reticular nucleus, nCC non-Clark’s column, PB parabrachial nucleus, PBG parabigeminal nucleus, PN pontine nuclei, pRt pontine reticular nucleus, PSV principal sensory nucleus of the trigeminal, RN red nucleus, RTN retrotrapezoid nucleus, sct spinocerebellar tract, SPFp subparafascicular nucleus, parvicellular part, vlPAG ventral lateral periaqueductal gray, VPM ventral posteromedial nucleus of the thalamus, VPL ventral posterolateral nucleus of the thalamus, vst ventral spinothalamic tract, VTA ventral tegmental area, ZI zona incerta. Brain images in ( a – k ) are shown at the same magnification. The spinal cord image in ( l ) is shown at a different magnification. The light orange brain regions mean that only tdTomato + projections were observed, and the vermillion brain regions mean the cell bodies (and the projections) were observed. Images are representative of N = 3 mice. Granule cells express Vglut1 , but not Vglut2 , in adult mice Next, we wanted to investigate whether some of the behavioral deficits observed in the early postnatal Atoh1 Cre/+ ;Vglut2 fl/fl mice (Fig. 3 ) would be restored in adult mice after the granule cells, but not nuclei neurons, switch their transporter subtype from Vglut2 to Vglut1 (Fig. 5a–c ). We used in situ hybridization to confirm that Atoh1 -lineage granule cells, but not nuclei neurons, express Vglut1 in adult mice (Fig. 5d ). 
We further show that in control and Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice, the vesicular transporter subtype switching only occurred in granule cells, but not nuclei neurons (Fig. 5e ). Finally, we confirm that the expression of VGluT1 in the molecular layer of crus I and lobule V in P7 (Fig. 5f ) and adult (Fig. 5g ) is not different between control and Atoh1 Cre/+ ;Vglut2 fl/fl mice. Together these results confirm that a molecular switch from Vglut2 to Vglut1 expression in granule cells occurs after early postnatal development, that this switch is independent of our genetic manipulation, and that our genetic manipulation does not drive a similar molecular switch in glutamatergic cerebellar nuclei neurons (Fig. 5 ). Fig. 5: Vglut1 expression in control and Atoh1 Cre/+ ;Vglut2 fl/fl mice. a Schematic showing how conditional deletion of VgluT2 does not affect fast neurotransmission in VgluT1-expressing neurons. b Schematic of cerebellar connectivity and vesicular transporter expression for the glutamate subtypes (VgluT1 and VgluT2) in the cerebellar circuit in P7 control mice and Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. c Same as b . but in adult mice. For a – c VGluT1 (blue); VGluT2 (orange); Control mice (black); Atoh1 Cre/+ ;Vglut2 fl/fl mice (reddish purple). d Expression of Vglut1 (green) with DAPI (purple, left) or tdTomato (purple, right) in the cerebellar nuclei (CN) and granule cell layer (gcl) of adult Atoh1 Cre/+ ;Rosa26 lsl-tdTomato mice. e Expression of Vglut1 (green) and DAPI (purple) in the CN and gcl of adult control mice and Atoh1 Cre/+ ;Vglut2 fl/fl mice. d , e Insets are 125 by 125 µm high magnification images. f VgluT1 + synapses (green) and DAPI (purple) in the molecular layer (ml) and gcl of P7 control and Atoh1 Cre/+ ;Vglut2 fl/fl mice, e-gcl = external granule cell layer containing actively proliferating granule cell precursor cells. 
g VgluT1 + synapses (green) and DAPI (purple) in the ml and gcl of adult control mice and Atoh1 Cre/+ ;Vglut2 fl/fl mice. d , e are shown at the same scale. f , g are shown at the same scale. Images are representative of N = 3 mice. Purkinje cells in adult Atoh1 Cre/+ ;Vglut2 fl/fl mice have normal in vivo firing activity There is experimental evidence showing that restoring cerebellar function during development may be sufficient to rescue social deficits in mouse models for autism spectrum disorders 50 . We, therefore, set out to investigate whether the natural developmental transition from VGluT2- to VGluT1-mediated neurotransmission in granule cells (Fig. 5 ) is associated with the normalization of Purkinje cell firing rates, representing the restoration of normal cerebellar cortical function in adult Atoh1 Cre/+ ;Vglut2 fl/fl mice. When we performed single-unit recordings of Purkinje cells in the adult mutant mice (Fig. 6 ), we did not observe differences in the simple spike or complex spike firing rate, pattern, or regularity. These findings showed that even though Vglut2 elimination from granule cells during the developmental period initially caused an abnormal simple spike firing rate (Fig. 2d ), the Purkinje cell firing rate later normalized after granule cells switched their transporter type to Vglut1 , which persists into adulthood (Fig. 6 ). The occurrence of Purkinje cell firing rate normalization supports the prediction that in adults, the glutamatergic cerebellar nuclei neurons, but not neurons that are located upstream in the cerebellar cortex, are functionally affected by VGluT2 loss. Fig. 6: Purkinje cell firing activity in adult mice. a Schematic of the in vivo single-unit extracellular Purkinje cell recording setup in anesthetized mice. b Representative trace of a Purkinje cell recording in a control mouse (black and blue lines represent complex spikes). 
c Representative trace of a Purkinje cell recording in an Atoh1 Cre/+ ;Vglut2 fl/fl mouse (reddish purple, blue lines represent complex spikes). In b , c , inferior olive (IO)-evoked complex spikes (CS) are shown in blue. The y -axis is constant across panels. The x -axes (timescales) are the same for ( b , c ). d No differences were found in the SS firing rate ( p = 0.490; d = 0.40), e SS CV (spike pattern, p = 0.766; d = 0.07), f SS CV2 (spike regularity, p = 0.659; d = 0.11), g CS firing rate ( p = 0.489, d = 0.49), h CS CV (spike pattern, p = 0.598, d = 0.19), or i CS CV2 (spike regularity, p = 0.246, d = 0.48). For d – i , large open circles represent the mouse average, and small, closed circles represent the cell average. Data points from control mice in black, and data points from Atoh1 Cre/+ ;Vglut2 fl/fl mice in reddish purple. A linear mixed model analysis with genotype as a fixed variable and mouse number as a random variable was used to test for statistical significance in ( d – i ). Control: N = 5 mice, n = 18 cells; Atoh1 Cre/+ ;Vglut2 fl/fl : N = 5, n = 23. Source data and detailed statistical results are available and provided as a Source Data file. Panel ( a ) was adapted from White & Sillitoe, 2017, “Genetic silencing of olivocerebellar synapses causes dystonia-like behavior in mice,” Nature Communications 21 under CC BY 4.0. Full size image Adult Atoh1 Cre/+ ;Vglut2 fl/fl mice have altered motor function but no social behavior deficits Next, we set out to determine whether the normalization of cerebellar cortical function was paired with the restoration of cerebellar-dependent behaviors (Fig. 7 ). We measured motor function in adult mice using assays that reveal behavioral impairments in mouse models with cerebellar deficits. First, we measured tremors.
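Summary tremor metrics of the kind reported below (mean and peak power in a 0–30 Hz band, Fig. 7a–c) are typically derived from a power spectral density estimate. A minimal sketch, assuming numpy/scipy, an illustrative sampling rate, and a synthetic accelerometer trace (not the study's data or pipeline):

```python
import numpy as np
from scipy.signal import welch

def tremor_power(signal, fs, band=(0.0, 30.0)):
    """Mean power, peak power, and peak frequency in a band via Welch's PSD."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 2 * int(fs)))
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].mean(), pxx[in_band].max(), f[in_band][np.argmax(pxx[in_band])]

# Synthetic example: a 10 Hz oscillation buried in noise, sampled at 200 Hz.
rng = np.random.default_rng(0)
fs = 200.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
mean_p, peak_p, peak_f = tremor_power(x, fs)  # peak_f should land near 10 Hz
```

Per-mouse values of the mean and maximum band power computed this way can then be compared between genotypes, as in Fig. 7b, c.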
We previously found that eliminating GABAergic neurotransmission from Purkinje cells reduces tremor 51 , whereas eliminating glutamatergic neurotransmission from climbing fibers or impairing the neurogenesis of Atoh1 neurons increases tremor power in mice 29 , 52 . Changes in tremor intensity are thus a reliable, functional readout for assessing cerebellar neuron manipulations in freely behaving mice. We found that the Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice have a modest reduction in the average, but not peak, tremor power compared to their control littermates (Fig. 7a–c ). This reduction in tremor power is similar to what has been observed previously in mice lacking GABAergic neurotransmission from Purkinje cells 51 . Fig. 7: Motor behavior and social behavior in adult Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. a – c Tremor in freely moving mice. Control: N = 10 (5 female/5 male); Atoh1 Cre/+ ;Vglut2 fl/fl : N = 13 (6 f/7 m). a Power spectrum of tremor in control and Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. b Mean tremor power between 0 and 30 Hz was lower in Atoh1 Cre/+ ;Vglut2 fl/fl mice compared to control mice ( p = 0.039; d = 0.90). c Maximum tremor power was not different between Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockouts and control mice ( p = 0.276; d = 0.63). d – g Ambulatory activity in the open field assay. Control: N = 11 (6 f/5 m); Atoh1 Cre/+ ;Vglut2 fl/fl : N = 13 (6 f/7 m). Atoh1 Cre/+ ;Vglut2 fl/fl mice had lower ( d ) total movement time ( p = 0.03; d = 0.86), ( f ) horizontal activity count ( p = 0.008; d = 1.03), and ( g ) vertical activity count ( p = 0.004; d = 1.11) compared to control mice, but no difference in ( e ) total distance traveled ( p = 0.116; d = 0.65). h Rotarod performance. Control: N = 10 (6 f/4 m); Atoh1 Cre/+ ;Vglut2 fl/fl : N = 13 (6 f/7 m).
Atoh1 Cre/+ ;Vglut2 fl/fl mice fell off the rotarod faster than control mice ( p = 0.017); the group difference was significant on the first ( p = 0.016; d = 0.83) and third ( p = 0.025; d = 0.73) training days, but not the second ( p = 0.062; d = 0.58). i Three-chamber socialization assay. Control: N = 11 (6 f/5 m); Atoh1 Cre/+ ;Vglut2 fl/fl : N = 13 (6 f/7 m). There was no difference between Atoh1 Cre/+ ;Vglut2 fl/fl mice and control mice in the total time spent with a mouse ( p = 0.404; d = 0.08) or an object ( p = 0.948; d = 0.13), and both groups spent more time with the mouse than the object (control: p < 0.001; Atoh1 Cre/+ ;Vglut2 fl/fl : p < 0.001 in a two-sided paired t -test). A two-sided unpaired t -test was used to test for statistical significance unless otherwise noted. Black circles and squares represent data points from control mice; reddish purple circles and squares represent data points from Atoh1 Cre/+ ;Vglut2 fl/fl mice. Squares represent data points from male mice, circles represent data points from female mice. Source data and detailed statistical results are available and provided as a Source Data file. Full size image Second, we tested the ambulatory activity of the conditional knockouts in the open field assay, as alterations in cerebellar circuit function often result in changes in ambulatory activity 29 , 52 , 53 . Indeed, we observed that compared to control littermates, the Atoh1 Cre/+ ;Vglut2 fl/fl mice move less often (Fig. 7d ), although the total distance traveled is not different (Fig. 7e ), and they have fewer horizontal and vertical activity events (Fig. 7f, g ). These measurements show that the adult Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice initiate fewer spontaneous motor events in an open field. Third, we tested whether motor performance and learning are impaired in the adult Atoh1 Cre/+ ;Vglut2 fl/fl mice.
We tested the latency to fall on an accelerating rotarod as this behavior is often impaired in mice with cerebellar deficits 21 , 42 , 53 , 54 . We indeed found that motor performance and motor learning are impaired in Atoh1 Cre/+ ;Vglut2 fl/fl mice when compared to control littermates (Fig. 7h ). Altogether, cerebellar-dependent motor function is impaired in adult Atoh1 Cre/+ ;Vglut2 fl/fl mice when fast neurotransmission from glutamatergic nuclei, but not granule cells, is genetically eliminated. Fourth, we tested whether, despite these collective impairments in motor function, social behaviors were restored after the normalization of cerebellar cortical function by the transporter switch. We tested sociability in the three-chamber assay because previous studies have shown that sociability in this assay is impaired in many mouse models with cerebellar deficits 36 , 48 , 55 . We did not observe differences in the social approach assay in Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice compared to their control littermates (Fig. 7i ). Together, our findings demonstrate that while chronic alteration of fast neurotransmission from Atoh1 lineage glutamatergic cerebellar nuclei neurons continues to impair the proper execution of motor functions into adulthood, it still permits the restoration of social behaviors associated with the normalization of cerebellar cortical function. The Ntsr1 Cre and Vglut2 co-expressing neurons define a subset of glutamatergic cerebellar nuclei neurons Based on the behavioral and electrophysiological findings in the Atoh1 Cre/+ ;Vglut2 fl/fl mice, we next wondered whether the early social deficits and delayed acquisition of normal social behaviors in adult Atoh1 Cre/+ ;Vglut2 fl/fl mice were solely mediated by the progressive normalization of cerebellar cortical function. 
If so, then selectively silencing the neurotransmission from glutamatergic cerebellar nuclei neurons while leaving cerebellar cortical function intact would be predicted to influence the acquisition of early motor behaviors but not social vocalizations. To address this question, we examined the developmental contribution of Ntsr1 Cre expressing cerebellar nuclei neurons towards the acquisition of behaviors during development. Previous studies have shown that these glutamatergic neurons send axonal projections to the thalamus and VTA, among other midbrain and brainstem regions 28 , 34 , 56 . In agreement with previous studies, we found Ntsr1 Cre -driven YFP expression in the cerebellar nuclei neurons (Fig. 8a inset iii). We also found YFP expression in layer 6 neurons of the cerebral cortex (Fig. 8a inset i) and sparse labeling in the striatum (Fig. 8a inset iv), midbrain (Fig. 8a inset v), and medulla (Fig. 8a inset vi). While there was some Vglut2 expression in overlapping areas of the midbrain and medulla, we did not observe any co-expression of YFP and Vglut2 in the same neurons in any brain region other than the cerebellar nuclei (Fig. 8a inset iii). Fig. 8: Ntsr1 Cre marks a subset of glutamatergic cerebellar nuclei neurons. a Expression of Ntsr1 Cre mapped through Cre-dependent YFP expression and Vglut2 in sagittal brain slices. Insets show high magnification of YFP and Vglut2 expression in (i) the cerebral cortex, (ii) the thalamus, (iii) cerebellar nuclei, (iv) the striatum, (v) the midbrain, and (vi) the medulla. We only observe co-expression of YFP (green, right panels) and Vglut2 (purple, right panels) in the same neurons in the cerebellar nuclei. b Expression of Ntsr1 Cre mapped through Cre-dependent YFP expression (green) in glutamatergic ( Vglut2 expressing, purple) cerebellar nuclei neurons (coronal sections).
The green arrowhead indicates a YFP , but not Vglut2 , expressing neuron; the white arrowhead indicates a YFP and Vglut2 expressing neuron; and the purple arrowhead indicates a Vglut2 , but not YFP , expressing neuron. c Quantification of Vglut2 + (purple), YFP + (green), and Vglut2 + & YFP + double-positive neurons (white) in the DN, IN, and FN. N = 3 mice, n = 6 nuclei (3 brain sections) per mouse. Source data are provided as a Source Data file. DN dentate nucleus, IN interposed nucleus, FN fastigial nucleus. Images are representative of N = 3 mice. Full size image We found Ntsr1 Cre -driven YFP expression in some, but not all, Vglut2 + cerebellar nuclei neurons (Fig. 8a–c ). Additionally, Ntsr1 Cre -driven YFP expression was not homogeneous across cerebellar nuclei (Fig. 8b, c ). We counted YFP + and Vglut2 + neurons in the dentate, interposed, and fastigial nuclei within the cerebellum (Fig. 8b, c ). We found that more than half of Vglut2 + neurons in the dentate nucleus were also YFP + (56.6 ± 0.01%), nearly three-quarters of Vglut2 + neurons in the interposed nucleus were also YFP + (73.7 ± 1.3%), and around a sixth of Vglut2 + neurons in the fastigial nucleus were also YFP + (16.5 ± 1.6%). Overall, around half of all Vglut2 + cerebellar nuclei neurons were also YFP + (52.5 ± 1.1%) (Fig. 8 ). Next, we set out to confirm that these glutamatergic, Ntsr1 Cre -expressing nuclei neurons send long-range projections to areas associated with motor and non-motor behaviors. To this end, we crossed the Ntsr1 Cre mice to Atoh1 FlpO/+ ;Rosa fsf-lsl-tdTomato mice to obtain mice that expressed tdTomato solely in the Ntsr1 Cre expressing cerebellar nuclei neurons (Supplementary Fig. 7 ). We confirmed that the tdTomato is expressed in the cerebellar nuclei neurons in a pattern that is similar to the YFP expression in the in situ experiments (Supplementary Fig. 7a ).
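The per-nucleus overlap percentages reported above (the fraction of Vglut2+ neurons that are also YFP+, with variability across sections) reduce to simple proportions over counted cells. A minimal sketch, using hypothetical counts for illustration only (not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def percent_double_positive(counts):
    """counts: list of (n_double_positive, n_vglut2) pairs, one per section.
    Returns (mean %, SEM %) of Vglut2+ neurons that are also YFP+."""
    fractions = [100.0 * dp / total for dp, total in counts]
    sem = stdev(fractions) / sqrt(len(fractions)) if len(fractions) > 1 else 0.0
    return mean(fractions), sem

# Illustrative counts for one nucleus across three sections (hypothetical).
interposed = [(74, 100), (73, 99), (75, 102)]
pct, sem = percent_double_positive(interposed)  # roughly 74% double positive
```

Applying the same computation per nucleus and per mouse yields mean ± SEM values of the kind reported for the dentate, interposed, and fastigial nuclei.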
Specifically, we found the highest density of tdTomato + neurons in the interposed nuclei, some tdTomato + neurons in the dentate nuclei, and a few tdTomato + neurons in the fastigial nuclei (Supplementary Fig. 7b ). We also observed some tdTomato + projections traveling to the molecular layer within the cerebellar cortex (Supplementary Fig. 7b ), which may explain the presence of fiber tracts in the cerebellar cortex observed in a previous description of this mouse line 57 . Additionally, we observed tdTomato + projections in the thalamus, red nucleus, midbrain, and brainstem (Supplementary Fig. 7c–k ). This includes some sparse labeling of projections in regions of TH + VTA neurons and in the inferior olive, which is in agreement with previous studies 28 , 48 . Together, these studies show that Ntsr1 Cre -expressing cerebellar nuclei neurons are integrated into circuits that were previously associated with motor and non-motor cerebellar-dependent behaviors. Early postnatal Ntsr1 Cre/+ ;Vglut2 fl/fl mice exhibit motor deficits with normal social behavior We next sought to test whether Vglut2- dependent fast neurotransmission from these Ntsr1 Cre cerebellar nuclei neurons is required for the acquisition of motor function and social vocalizations during the postnatal period in mice. To do so, we generated Ntsr1 Cre ;Vglut2 fl/fl conditional knockout mice to eliminate glutamatergic neurotransmission from a select population of the Ntsr1 Cre cerebellar nuclei neurons (Fig. 9a ). Fig. 9: Motor behavior and social behavior in early postnatal Ntsr1 Cre ;Vglut2 fl/fl mice. a Schematic of circuit modifications in Ntsr1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. VGluT1 (blue); VGluT2 (orange); Control mice (black); Ntsr1 Cre/+ ;Vglut2 fl/fl mice (sky blue). b Time to turn upward on a negative slope was measured.
Ntsr1 Cre/+ ;Vglut2 fl/fl mice required a longer time to turn compared to control littermates at P7 ( p < 0.001; d = 1.34), P9 ( p < 0.001; d = 1.43), and P11 ( p < 0.001; d = 1.91). c Time to right onto four paws was measured. Ntsr1 Cre/+ ;Vglut2 fl/fl mice required a longer time to turn compared to control littermates at P7 ( p < 0.001; d = 2.13), P9 ( p < 0.001; d = 3.63), and P11 ( p < 0.001; d = 3.53). d Number of vocalizations after separation from the nest was measured. Ntsr1 Cre/+ ;Vglut2 fl/fl conditional knockout pups made a similar number of vocalizations as control littermates at P7 ( p = 0.205; d = 0.38), and P9 ( p = 0.553; d = 0.18), but made more calls at P11 ( p = 0.003; d = 0.87). e Duration of vocalizations after separation from the nest was measured. Ntsr1 Cre/+ ;Vglut2 fl/fl conditional knockouts made calls of the same duration compared to control littermates at P7 ( p = 0.161; d = 0.42), P9 ( p = 0.576; d = 0.17), and P11 ( p = 0.240; d = 0.35). For b and c , Control: N = 54 (22 f/32 m); Ntsr1 Cre/+ ; Vglut2 fl/fl : N = 25 (10 f/15 m). For d , e , Control: N = 30 (11 f/19 m); Ntsr1 Cre/+ ;Vglut2 fl/fl : N = 18 (7 f/11 m). Dots represent means for each mouse, horizontal lines represent group means, and shaded areas represent the distribution of the data. Data points from control mice in black, and data points from Ntsr1 Cre/+ ;Vglut2 fl/fl mice in sky blue. A linear mixed model analysis with genotype as a fixed variable and mouse number as a random variable was used to test for statistical significance in ( c – e ). A post hoc analysis was performed to test for the statistical differences at each time point. Source data and detailed statistical results are available and provided as a Source Data file. Panels ( b , d ) were adapted from Van der Heijden et al., 2022, “Quantification of Behavioral Deficits in Developing Mice With Dystonic Behaviors,” Dystonia 47 under CC BY 4.0.
Full size image We tested whether the Ntsr1 Cre ;Vglut2 fl/fl mice have abnormal motor behavior in the early postnatal period by testing the integrity of their postnatal motor reflexes. We found that similar to the Atoh1 Cre ;Vglut2 fl/fl mice, Ntsr1 Cre ;Vglut2 fl/fl mice have a longer latency to turn upward in the negative geotaxis reflex compared to control littermates on P7, P9, and P11 (Fig. 9b ). Additionally, the Ntsr1 Cre ;Vglut2 fl/fl mice also have a longer latency to turn onto their four paws in the righting reflex when compared to control littermates on P7, P9, and P11 (Fig. 9c ). These findings altogether confirm that fast glutamatergic neurotransmission from the cerebellar nuclei neurons is essential for the proper acquisition of motor behavior during postnatal development in mice. We also examined the composition of pup vocalizations in a social isolation paradigm. Surprisingly, we did not observe fewer or shorter vocalizations in the Ntsr1 Cre ;Vglut2 fl/fl conditional knockout mice compared to control littermates (Fig. 9d, e ), which is in contrast to what we observed in age-matched Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice (Fig. 3d, e ). We did observe a greater number of vocalizations in the Ntsr1 Cre ;Vglut2 fl/fl mice at P11 (Fig. 9d ), which was not observed in control littermates or age-matched Atoh1 Cre/+ ;Vglut2 fl/fl conditional knockout mice (Fig. 3d ). These observations suggest that early postnatal glutamatergic neurotransmission from the cerebellar nuclei neurons is not essential for the acquisition of social vocalizations as assessed in 2-week-old mice. Adult Ntsr1 Cre/+ ;Vglut2 fl/fl mice do not exhibit motor dysfunction or social behavior deficits Finally, we set out to investigate whether the motor phenotypes observed in the early postnatal Ntsr1 Cre/+ ;Vglut2 fl/fl mice would remain impaired or be restored by adulthood. 
These experiments would unveil whether this subset of glutamatergic cerebellar nuclei neurons is essential for the acquisition of motor behaviors throughout life, or if developmental compensation may occur over time. It also provides an opportunity to partially disentangle how chronic impairment of only the glutamatergic cerebellar nuclei, but not the cerebellar cortex (as in Atoh1 Cre/+ ;Vglut2 fl/fl mice), may affect adult behaviors. We found that compared to control mice, adult Ntsr1 Cre/+ ;Vglut2 fl/fl mice had no deficits in ambulatory activity (Fig. 10a–d ), rotarod motor learning (Fig. 10e ), or social preference in the 3-chamber assay (Fig. 10f ). Thus, despite severe early postnatal deficits in motor reflexes (Fig. 9 ), adult Ntsr1 Cre/+ ;Vglut2 fl/fl mice show no changes in motor function. These results underscore the remarkable ability of the brain to compensate for some developmental perturbations in cerebellar function. Fig. 10: Motor behavior and social behavior in adult Ntsr1 Cre/+ ;Vglut2 fl/fl conditional knockout mice. a – d Ambulatory activity in the open field assay. Control: N = 11 (7 f/4 m); Ntsr1 Cre/+ ;Vglut2 fl/fl : N = 11 (7 f/4 m). Ntsr1 Cre/+ ;Vglut2 fl/fl mice were not different from control mice in a total movement time ( p = 0.135; d = 0.64), b total distance traveled ( p = 0.106; d = 0.69), c horizontal activity count ( p = 0.931; d = 0.04), or d vertical activity count ( p = 0.410; d = 0.41). e Rotarod performance. Control: N = 11 (7 f/3 m); Ntsr1 Cre/+ ;Vglut2 fl/fl : N = 11 (7 f/4 m). There was no difference in latency to fall between Ntsr1 Cre/+ ;Vglut2 fl/fl mice and control mice ( p = 0.786). f 3-chamber socialization assay. Control: N = 11 (7 f/4 m); Ntsr1 Cre/+ ;Vglut2 fl/fl : N = 11 (7 f/4 m).
Ntsr1 Cre/+ ;Vglut2 fl/fl mice spent less time interacting with both the mouse ( p = 0.014; d = 1.01) and the object ( p = 0.004; d = 1.15) than control mice, but both groups spent more time with the mouse than the object (control: p < 0.001; Ntsr1 Cre/+ ;Vglut2 fl/fl : p < 0.001 in the paired t -test). A two-sided unpaired t -test was used to test for statistical significance unless otherwise noted. g Summary of results. Black circles and squares represent data points from control mice; sky blue circles and squares represent data points from Ntsr1 Cre/+ ;Vglut2 fl/fl mice. Squares represent data points from male mice, circles represent data points from female mice. Source data and detailed statistical results are available and provided as a Source Data file. Full size image Discussion In this study, we combined intersectional mouse genetics and age-appropriate behavioral paradigms to investigate how perturbing the function of neurons in the cerebellar cortex and cerebellar nuclei causes acute and prolonged motor dysfunction and social deficits. We found that developmental elimination of Vglut2 -dependent glutamatergic neurotransmission from Atoh1 lineage neurons severely impacts motor behavior and social vocalizations in the early postnatal period (Fig. 10g ). By the time these Atoh1 Cre/+ ;Vglut2 fl/fl mutant mice reach adulthood, the natural Vglut2 to Vglut1 transporter switch within granule cells and the concomitant normalization of cerebellar cortical function coincide with normal social behaviors and only mild motor deficits. When we eliminated neurotransmission from Ntsr1 expressing glutamatergic nuclei neurons, we did not observe social deficits but did detect severe motor deficits in early postnatal mice that fully resolved with age.
Together, we found that glutamatergic neurotransmission from cerebellar cortical versus cerebellar nuclei neurons differentially controls the acquisition of motor and social behaviors and that the brain can compensate for some, but not all, perturbations that occur in the developing cerebellum (Fig. 10g ). Contribution of glutamatergic cerebellar neurons to social behaviors Considering the breadth of work that reports on the presence of aberrant social behaviors after different cerebellar manipulations 19 , 36 , 48 , 55 , it may seem contradictory, at first thought, that social behaviors are not widely affected when glutamatergic cerebellar output nuclei neurons are genetically silenced. We recently found that eliminating neurotransmission at glutamatergic climbing fiber synapses, which terminate extensively on the Purkinje cell dendrites, results in a robust reduction of vocalizations in P7–P11 pups 47 . These data are consistent with previous studies that used developmental or acute changes in Purkinje cells to study abnormal social preference in adult mice 17 , 19 , 50 , 55 , 58 . However, these studies all induced changes in the main cerebellar cortical output neurons, Purkinje cells, which have extensive connections to both the GABAergic and the glutamatergic neurons in the cerebellar nuclei. Ultimately, from those studies, one cannot differentiate whether each functional subtype of cerebellar nuclei neuron might be equally essential for driving social behaviors. Several previous studies have directly manipulated cerebellar nuclei neurons and measured the secondary effect on social approach in adult mice. One study found that exciting cerebellar inputs into the VTA results in increased social approach 48 .
Based on slice physiology, Carta and colleagues postulate that these cerebellar inputs to the VTA are likely glutamatergic in nature, but their experimental manipulation did not focus on the neurotransmitter identity of the projecting neurons, which may provide monosynaptic GABAergic or glutamatergic nuclei projections to the VTA 28 . Another study suggested that inhibiting neural activity in the dentate nucleus of the cerebellum can alleviate abnormal social preference in a mouse model for autism 36 . Yet, like the work by Carta and colleagues, Kelly and colleagues used experimental approaches that do not distinguish the neurotransmitter identity of the affected cell types. Similarly, the inactivation of neurons in the fastigial cerebellar nucleus resulted in altered social interactions in rats 59 . These observations are consistent with the changes in affect and sociability seen in children with posterior fossa syndrome and related developmental conditions, who often experience perturbations or injuries to the superior peduncle that contains the bulk of output projections from the GABAergic and glutamatergic cerebellar nuclei neurons 60 , 61 . Thus, although it has been routinely suspected that the glutamatergic cerebellar nuclei play a role in sociability, the relative contribution of the GABAergic versus the glutamatergic cerebellar nuclei neurons to different cerebellar-dependent behaviors remains unknown. Although anatomical studies suggest that the glutamatergic nuclei neurons may contribute to various non-motor functions 27 , 49 , 62 , we found that eliminating neurotransmission from all or most glutamatergic nuclei neurons leads to only a temporary social deficit or no observable change in sociability, respectively. Our results suggest, perhaps expectedly, that motor and non-motor functions are mediated through parallel glutamatergic and GABAergic pathways emerging from the cerebellar nuclei.
Future studies could selectively modulate signaling from the GABAergic cerebellar nuclei to better investigate whether these neurons have dedicated and/or independent roles that are required for the establishment of cerebellar-dependent social behaviors. Alternatively, the signals needed for social behavior may be encoded by complementary connections with Purkinje cells that have simultaneous and parallel interactions with both the glutamatergic and the GABAergic cerebellar nuclei neurons. Given these considerations, we propose the following. Social behaviors require the normal function and development of Purkinje cells. When the Purkinje cells do not function normally, they propagate their aberrant signals through downstream glutamatergic and GABAergic cerebellar nuclei neurons. Glutamatergic nuclei neurons may modulate social behaviors when acutely inhibited or excited, but their signaling is not essential for the acquisition of social behaviors because parallel signaling from GABAergic cerebellar nuclei neurons may be sufficient to propagate information regarding social interaction from Purkinje cells to other brain regions 28 . However, our experimental approach does not delineate whether these GABAergic projection neurons are equally sufficient in regulating social behaviors when the glutamatergic neuron function is perturbed after circuit development or maturation. Future studies are needed to further disentangle the contributions of each nuclear cell type to social behavior throughout life. Developmental compensation of motor functions We found a robust and significant difference in the severity of motor deficits when we compared early postnatal conditional knockout mice to adult Atoh1 Cre/+ ;Vglut2 fl/fl and Ntsr1 Cre/+ ;Vglut2 fl/fl mice.
Although we performed our experiments in a blinded fashion, the motor abnormalities in early postnatal Atoh1 Cre/+ ;Vglut2 fl/fl and Ntsr1 Cre/+ ;Vglut2 fl/fl mice were often readily visible as abnormal movements and postures that persisted even in the resting state. In contrast, no overt signs of motor dysfunction were discernible in adult Atoh1 Cre/+ ;Vglut2 fl/fl or Ntsr1 Cre/+ ; Vglut2 fl/fl mice. One possible explanation could be that the vesicular subtype switch in granule cells from Vglut2 to Vglut1 resulted in a normalization of Purkinje cell firing patterns after the early postnatal period (Fig. 2 vs. Fig. 6 ), which may, in part, explain the difference in the severity of motor dysfunction between young and adult Atoh1 Cre/+ ;Vglut2 fl/fl mice. Yet, the transporter switch does not explain why Ntsr1 Cre/+ ;Vglut2 fl/fl mice also have discernible motor deficits in the early postnatal period despite not having a primary defect in granule cell development. Another potential explanation for the persistent motor abnormalities in Atoh1 Cre/+ ;Vglut2 fl/fl mice could be the dysfunction of Atoh1 lineage neurons within the spinal cord (Fig. 4l ). However, previous work has shown that elimination of neurotransmission from all (VGluT1- and VGluT2-expressing) Atoh1 lineage neurons within the spinal cord does not cause motor dysfunction 40 . We interpret these data to mean that spinal cord dysfunction is not the main driver of the phenotypes observed in adult Atoh1 Cre/+ ;Vglut2 fl/fl mice. Therefore, in this manipulation, it is likely that other cellular and circuit mechanisms are the potential sources that promote changes in the severity of motor impairments. One possible explanation for the differences in severity between developing and adult conditional knockout mice is that the developmental deletion of Vglut2 from glutamatergic nuclei neurons can be compensated for by other neurons in the motor circuit.
Those could include parallel pathways from GABAergic nuclei neurons, as mentioned above, or compensatory mechanisms engaged by mature granule and Purkinje cells 21 . In Ntsr1 Cre/+ ;Vglut2 fl/fl mice, these compensatory cells could also include the non- Ntsr1 expressing glutamatergic cerebellar nuclei neurons, which may also partially explain why their early postnatal motor deficits can be completely recovered by adulthood. Alternatively, the compensation may be initiated by a change in brain-wide networks that control movement during development versus adulthood 63 , 64 , 65 . In neonates, neural activity in the motor cortex resembles that of neurons in the sensory cortex, with activity occurring after movements rather than preceding them 64 . During this period, it is the red nucleus, which receives strong monosynaptic inputs from the cerebellum, that acts as the predominant source of motor output 66 . The motor cortex starts controlling movement after cerebellar function further matures, and a cerebellar-dependent internal model of movement arises in thalamic nuclei that receive dense innervation from the cerebellum and relay information to the motor cortex 63 , 65 . This cerebellum-involved switch in motor initiation from the red nucleus to the motor cortex may explain why perturbations in cerebellar output pathways cause more severe motor deficits during early postnatal development compared to adulthood. Therefore, a temporally dependent mechanism may ultimately explain why the early postnatal Atoh1 Cre/+ ;Vglut2 fl/fl and Ntsr1 Cre/+ ;Vglut2 fl/fl conditional knockout mice both develop with severely impaired motor reflexes, whereas, in comparison, their adult counterparts present with only mild or completely absent motor dysfunction.
In addition to the difference in the severity of motor deficits between young and old Atoh1 Cre/+ ;Vglut2 fl/fl and Ntsr1 Cre/+ ;Vglut2 fl/fl mice, there is also an intriguing discrepancy in motor function between the two conditional knockouts during adulthood; adult Atoh1 Cre/+ ;Vglut2 fl/fl mice display mild motor impairments while Ntsr1 Cre/+ ;Vglut2 fl/fl mice exhibit no obvious motor deficits. One potential explanation for this difference could be the additional dysfunction of Atoh1 lineage neurons within the spinal cord of Atoh1 Cre/+ ;Vglut2 fl/fl mice (Fig. 4l ). However, previous work has shown that elimination of neurotransmission from all (VGluT1- and VGluT2-expressing) Atoh1 lineage neurons within the spinal cord does not cause motor dysfunction 40 , making this an unlikely driver of the observed motor outcomes. The other difference between Atoh1 Cre/+ ;Vglut2 fl/fl and Ntsr1 Cre/+ ;Vglut2 fl/fl mice is the extent of affected glutamatergic nuclei neurons (Fig. 1 versus Fig. 8 ). In Ntsr1 Cre/+ ;Vglut2 fl/fl mice, we found that about 50% of glutamatergic cerebellar nuclei neurons are genetically silenced (Fig. 8 ) while nearly 100% of them are silenced in Atoh1 Cre/+ ; Vglut2 fl/fl mice (Fig. 1 ). In both lines during adulthood, only the glutamatergic cerebellar nuclei neurons are affected, and the granule cells of Atoh1 Cre/+ ;Vglut2 fl/fl mice that were silenced during development now appear to function normally as confirmed by in situ histochemistry (Fig. 5 ) and electrophysiology (Fig. 6 ). Together, our results suggest that the difference in the recovery of motor function between the two conditional knockouts in adulthood can likely be attributed to compensation mediated by unaffected cells in the motor circuit, especially non- Ntsr1 expressing glutamatergic cerebellar nuclei neurons.
Perhaps some threshold level of glutamatergic cerebellar nuclei neuron function is required to fully recover motor function, as in Ntsr1 Cre/+ ;Vglut2 fl/fl mice; in Atoh1 Cre/+ ;Vglut2 fl/fl mice, parallel GABAergic nuclei neurons can support only a partial recovery. It is also possible that the dysfunction of granule cells in early postnatal Atoh1 Cre/+ ;Vglut2 fl/fl mice induced long-term changes in motor circuitry that could not be fully resolved in adulthood by compensatory mechanisms, but this is less likely as our data show that cerebellar cortical function normalizes by this time (Fig. 6 ). This highlights the importance of glutamatergic cerebellar nuclei neurons in mediating motor function, and the incredible ability of the cerebellum to overcome some, but not all, perturbations of its circuitry. Thus, compensating for abnormal glutamatergic cerebellar nuclei function in diseases with motor symptoms may be key to designing treatments for recovering motor deficits. Assessing neural function versus neurogenesis Our genetic silencing approach provides a compelling in vivo demonstration of how the function of Atoh1 lineage neurons can be assessed using a functional rather than an anatomical lesion. Previous studies inferred the function of these glutamatergic neurons based on anatomical location and synaptic connectivity 25 , 67 or tested the function of Atoh1 lineage neurons using local deletion of the pro-neural Atoh1 gene, which causes an abnormal or a complete lack of anatomical development of these neurons 29 , 32 , 68 , 69 , 70 . These studies have uncovered fundamental and groundbreaking mechanisms for neonatal lethality in Atoh1 null mice 32 , 70 . However, we, and others 71 , have previously demonstrated that the lack of Atoh1 lineage neurons can have substantial effects on the development of surrounding neurons 29 that may exacerbate the behavioral effects upon losing Atoh1 lineage neurons.
Instead, our genetic approach is precise (it only affects Atoh1 lineage or Ntsr1 expressing neurons) and specific (only the glutamatergic transporter Vglut2 is deleted). We have used the elimination of vesicular transporters to study the contribution of specific circuit components in several previous studies. In these studies, we did not detect changes in synaptic connectivity or developmental compensation that overcome the functional elimination of neurotransmission 21 , 33 , 54 , 72 . Therefore, we propose that our genetic methods allow for testing how synaptic VGluT2-dependent neurotransmission only in neurons within the intersectional domain influences circuit function and mouse behavior. Our study confirms that Atoh1 lineage cerebellar nuclei neurons are necessary for the refinement of motor behavior, and we also unveil that Atoh1 granule cells contribute to social vocalizations during postnatal development. This study provides in vivo experimental evidence showing that glutamatergic neurons in the cerebellum are critical for the acquisition of motor function and social behaviors. Specifically, the glutamatergic neurons in the cerebellar cortex and nuclei are differentially required for the early postnatal development of motor behavior and social behaviors. These data also raise the possibility that parallel signaling from the GABAergic nuclei neurons may provide some compensation for functional lesions of the glutamatergic nuclei neurons during development, leading to improvements in early motor and social deficits. By leveraging the ability to restore cerebellar-dependent behaviors after normalizing cerebellar cortical function, tapping into cerebellar neurons that communicate social and motor signals to extra-cerebellar regions may provide an ideal therapeutic target to restore motor and non-motor functions after developmental brain injury and disease. 
Methods Animals All mice included in the experiments for this manuscript were housed in a Level 3, AALAS-certified facility. The Institutional Animal Care and Use Committee (IACUC) of Baylor College of Medicine (BCM) reviewed and approved all studies that involved mice. We used the following mice for our experiments: Ai14 ( Rosa lsl-tdTomato ; JAX: 007914); 73 Ai32 ( Rosa fsf-ChR2-YFP , JAX: 012569); 74 Ai65 ( Rosa fsf-lsl-tdTomato ; JAX: 021875); 74 Atoh1 Cre ; 75 Atoh1 FlpO (JAX: 036541); 32 Ntsr1 Cre (MMRRC: 030648); 57 Vglut2 IRES-Cre (JAX: 028863); 76 Vglut2 fl (JAX: 012898) 72 . We used ear clippings from pre-weaned pups for genotyping and identification of transgenic alleles. Mice of both sexes were used in all experiments. We considered the day the pups were born as postnatal day 0 (P0). We did not observe gross differences in weight between control and Atoh1 Cre ;Vglut2 fl/fl conditional knockout mice (Adult: control: N = 7, weight = 32.2 ± 2.4 g; Atoh1 Cre ;Vglut2 fl/fl : N = 6, weight = 28.7 ± 3.6 g; p = 0.413; P14: control: N = 4, weight = 9.75 ± 0.4 g; Atoh1 Cre ;Vglut2 fl/fl : N = 4, weight = 8.9 ± 0.4 g; p = 0.161; P10: control: N = 5, weight = 6.2 ± 0.4 g; Atoh1 Cre ;Vglut2 fl/fl : N = 6, weight = 5.7 ± 0.3 g; p = 0.397). Adult mice were between two and fourteen months of age. All mice were kept under a 14 h/10 h light/dark cycle, a daily temperature between 68 and 72 °F, and humidity between 30 and 70%. Tissue processing We collected brain and spinal cord tissue for analyses as previously described 29 . We anesthetized mice with Avertin and tested them for effective anesthesia by toe or tail pinch. Analgesia was also provided. We then accessed the chest cavity and penetrated the heart with a small butterfly needle for perfusion. We first perfused with 1 M phosphate-buffered saline (PBS pH 7.4) to flush out blood until the liver turned clear. Next, we perfused the mouse body with 4% paraformaldehyde (PFA) to fix the tissue. 
After the tail and hind paws were stiff from fixation, we decapitated the mouse and dissected the brain from the skull or the spinal cord from the spinal column. We post-fixed brain and spinal cord tissue in 4% PFA overnight or until the tissue was used for cryoprotection. We cryoprotected the tissue through a serial sucrose gradient (15% → 30% sucrose in PBS) at 4 °C, with each step continuing until the tissue sank. Finally, we froze the tissue in optimal cutting temperature (OCT) solution and stored it at −80 °C. Brain tissue was cut into 40 µm free-floating sections, and spinal cord tissue was cut into 25 µm sections mounted directly on slides. Cut sections were stored at 4 °C until they were used for immunohistochemistry (tissue was stored for a maximum of two months). Tissues from mice expressing the tdTomato allele were stored in aluminum foil at all steps to prevent photobleaching. Immunohistochemistry We stained free-floating tissue sections as previously described 29 . We blocked tissue sections in 500 μL 10% normal goat or normal donkey serum in 0.1% Triton-X in PBS (PBS-T) for 2 h. We then incubated tissue sections overnight at room temperature in 500 μL blocking solution with primary antibodies at the desired concentrations. After washing the tissue sections three times in PBS-T, we incubated the tissue for two hours in 500 μL PBS-T with secondary antibodies at the desired concentrations. Subsequently, we incubated tissue sections with DAPI (1:500; Sigma-Aldrich; #D9542) in PBS for 10 min. After a final two washes, we mounted the sections using VECTASHIELD Vibrance® mounting medium (Vector Laboratories; #H1700). All mounted slides were stored at 4 °C until they were imaged. 
We used the following primary antibodies: guinea-pig (gp)-anti-VGluT2 (1:500; Synaptic Systems; #135404), rabbit (rb)-anti-VGluT1 (1:500; Synaptic Systems; #135302), rb-anti-Parvalbumin (PV) (1:1000; Swant; #PV 28), rb-anti-Neurogranin (NG) (1:500; Chemicon; #AB5620), rb-anti-carbonic anhydrase 8 (Car8) (1:500; Proteintech; #12391-1-AP), and sh-anti-tyrosine hydroxylase (TH) (1:500; Millipore; #AB1542). We used the following secondary antibodies which were conjugated to an Alexa-488 fluorophore: goat-anti-gp (1:1000; Invitrogen; #A11073), goat-anti-rb (1:1000; Invitrogen; # A32731), or goat-anti-sh (1:1000; Invitrogen; # A11015). In situ hybridization (ISH) ISH was performed by the RNA ISH Core at Baylor College of Medicine using an automated robotic platform. ISH was used to visualize Vglut2 , Vglut1, YFP , and tdTomato expression in unfixed, fresh frozen tissue cut in 25 μm-thick coronal brain sections. Digoxigenin (DIG)-labeled mRNA antisense probes against Vglut2 , Vglut1, YFP , and tdTomato were generated using an RNA DIG-labeling kit from Roche. 
The specific sequences of the antisense probes that we used were as follows: Vglut2: GGTGCTGGAGAAGAAGCAGGACAACCGAGAGACCATCGAGCTGACAGAGGACGGTAAGCCCCTGGAGGTGCCTGAGAAGAAGGCTCCGCTATGCGACTGCACGTGCTTCGGCCTGCCGCGCCGCTACATCATAGCCATCATGAGCGGCCTCGGCTTCTGCATATCCTTCGGCATCCGCTGTAACCTGGGCGTGGCCATCGTGGACATGGTCAACAACAGCACTATCCACCGCGGAGGCAAAGTTATCAAGGAG Vglut1: CAGAGCCGGAGGAGATGAGCGAGGAGAAGTGTGGCTTTGTTGGCCACGACCAGCTGGCTGGCAGTGACGAAAGTGAAATGGAGGACGAGGCTGAGCCCCCAGGGGCGCCCCCCGCGCCGCCTCCGTCCTACGGGGCCACACACAGCACAGTGCAGCCTCCGAGGCCCCCGCCCCCTGTCCGGGACTACTGACCACGGGCCTCCCACTGTGGGGCAGTTTCCAGGACTTCCACTCCATACACCTCTAGCCTGAGCGGCAGTGTCGAGGAACCCCACTCCTCCCCTGCCTCAGGCTTAAGATGCAAGTCCTCCCTTGTTCCCAGTGCTGTCCGACCAGCCCTCTTTCCCTCTCAACTGCCTCCTGCGGGGGGTGAAGCTGCACACTAGCAGTTTCAAGGATACCCAGACTCCCCTGAAAGTCGTTCTCCGCTTGTTTCTGCCTGTGTGGGCTCAAATCTCCCCTTTGAGGGCTTTATTTGGAGGGACAGTTCAACCTCTTCCTCTCTTGTGGTTTTGAGGTTTCACCCCTTCCCCCAAGACCCCAGGGATTCTCAGGCTACCCCGAGATTATTCAGGTGGTCCCCTACTCAGAAGACTTCATGGTCGTCCTCTATTAGTTTCAAGGCTCGCCTAACCAATTCTACATTTTTCCAAGCTGGTTTAACCTAACCACCAATGCCGCCGTTCCCAGGACTGATTCTCACCAGCGTTTCTGAGGGA YFP: AGCTGACCCTGAAGTTCATCTGCACCACCGGCAAGCTGCCCGTGCCCTGGCCCACCCTCGTGACCACCCTGACCTACGGCGTGCAGTGCTTCAGCCGCTACCCCGACCACATGAAGCAGCACGACTTCTTCAAGTCCGCCATGCCCGAAGGCTACGTCCAGGAGCGCACCATCTTCTTCAAGGACGACGGCAACTACAAGACCCGCGCCGAGGTGAAGTTCGAGGGCGACACCCTGGTGAACCGCATCGAGCTGAAGGGCATCGACTTCAAGGAGGACGGCAACATCCTGGGGCACAAGCTGGAGTACAACTACAACAGCCACAACGTCTATATCATGGCCGACAAGCAGAAGAACGGCATCAAGGTGAACTTCAAGATCCGCCACAACATCGAGGACGGCAGCGTGCAGCTCGCCGACCACTACCAGCAGAACACCCCCATCGGCGACGGCCCCGTGCTGCTGCCCGACAACCACTACCTGAGCACCCAGTCCGCCCTGAGCAAAGACCCCAACGAGAAGCGCGATCACATGGTCCTGCTGGAG tdTomato: 
ATCAAAGAGTTCATGCGCTTCAAGGTGCGCATGGAGGGCTCCATGAACGGCCACGAGTTCGAGATCGAGGGCGAGGGCGAGGGCCGCCCCTACGAGGGCACCCAGACCGCCAAGCTGAAGGTGACCAAGGGCGGCCCCCTGCCCTTCGCCTGGGACATCCTGTCCCCCCAGTTCATGTACGGCTCCAAGGCGTACGTGAAGCACCCCGCCGACATCCCCGATTACAAGAAGCTGTCCTTCCCCGAGGGCTTCAAGTGGGAGCGCGTGATGAACTTCGAGGACGGCGGTCTGGTGACCGTGACCCAGGACTCCTCCCTGCAGGACGGCACGCTGATCTACAAGGTGAAGATGCGCGGCACCAACTTCCCCCCCGACGGCCCCGTAATGCAGAAGAAGACCATGGGCTGGGAGGCCTCCACCGAGCGCCTGTACCCCCGCGACGGCGTGCTGAAGGGCGAGATCCACCAGGCCCTGAAGCTGAAGGACGGCGGCCACTACCTGGTGGAGTTCAAGACCATCTACATGGCCAAGAAGCCCGTGCAACTGCCCGGCTACTACTACGTGGACACCAAGCTGGACATCACCTCCCACAACGAGGACTACACCATCGTGGAA Nissl staining Nissl staining was performed on tissue sections that had been mounted and dried overnight on slides. The next day, the mounted sections were immersed in 100% xylene two times for five minutes each. Subsequently, they were placed through a rehydration series of three immersions in 100% ethanol, followed by 95% ethanol, 70% ethanol, and tap water, with each step lasting 2 min. Afterward, the sections were immersed in cresyl violet solution for approximately two minutes. They were then dehydrated through the same series in reverse order, ending with a final immersion in xylene, with each step lasting up to one minute. Coverslips were mounted on the slides immediately afterward using Cytoseal XYL mounting media (Thermo Scientific, Waltham, MA, USA, #22-050-262). Microscopy and image processing We acquired photomicrographs of cerebellar sections using a Leica DM4000 B LED microscope with a DPX365FX camera or a Zeiss Axio Imager.M2 microscope with an AxioCam MRc5 camera. We stitched the whole-mount images for the intersectional lineage tracing together using the Adobe Photoshop Photomerge function. We adjusted color brightness and balance using ImageJ software, and we cropped the images to the desired size for figures using Adobe Illustrator. 
In vivo electrophysiology We performed in vivo electrophysiological recordings of Purkinje cells according to previously described publications 29 , 77 . Prior to surgery, mice were anesthetized using a mixture of ketamine (80 mg/kg) and dexmedetomidine (16 mg/kg). During the surgery, the mouse was placed on a heated surgery pad and received additional isoflurane (1%) when necessary. The mouse’s head was stabilized by ear bars in a stereotaxic surgery rig. When the mouse was fully anesthetized, we removed the hair from the head and made an incision in the skin over the anterior part of the skull. Next, we used a dental drill to make a large craniotomy over the cerebellar cortex. The craniotomy spanned from around bregma −5.60 mm to −6.64 mm, and from around 0.5–3 mm lateral to bregma. Purkinje cells across several lobules were randomly sampled from this craniotomy, preventing any potential bias from local activity patterns (Supplementary Fig. 5 ). We recorded neural activity using tungsten electrodes and digitized the signals into Spike2 (CED). Purkinje cells were identified based on their depth within the cerebellum (0–2 mm from the brain surface) and the presence of clear complex spikes. We used both male and female mice for our recordings. Analysis of in vivo electrophysiology All electrophysiological recordings were spike sorted in Spike2. We separately sorted out Purkinje cell simple spikes and complex spikes. Complex spikes were recognized by their large action potential waveform that is followed by 3–5 smaller spikelets. We only included in our final analyses Purkinje cell recordings with clearly identifiable complex spikes, a good signal-to-noise ratio, and a minimal recording duration of 60 s 78 . For this study, we defined “firing rate” as the average number of spikes per second. We defined “CV” as the standard deviation of the ISI divided by the mean of the ISI. 
We defined “CV2” as the mean, over all pairs of adjacent ISIs, of the absolute difference between the two ISIs divided by their sum. Behavioral analyses—breathing assay We tested respiratory parameters in room air as described in our previous publication 32 . In short, we placed mice in plethysmography chambers to acclimate for one hour. After the habituation period, we used Phonemah software (DSI) to acquire breath waveforms and used custom-written MATLAB (Mathworks) code to derive Tidal Volume, Respiratory Frequency, and Minute Volume parameters. Behavioral analysis—negative geotaxis We tested the negative geotaxis reflex at P7, P9, and P11 as previously described and shown 47 . Mice were placed head down on a negative incline (−35°) that was covered with a sterile Poly-Lined drape. We measured the time until mice turned 90° in either direction. Mice were tested three times at each time point. We suspended the test if mice did not turn within 60 s or fell down the slope. We recorded the falls that occurred within 60 s. Behavioral analysis—surface righting reflex We tested the surface righting reflex at P7, P9, and P11, as previously described and shown 47 . Mice were placed in the supine position in an empty, clean cage. We measured the time until the mice turned onto their four paws. Mice were tested three times at each time point. We suspended the test if mice did not turn within 60 s. Behavioral analysis—ultrasonic vocalizations We measured pup ultrasonic vocalizations in a social isolation task at P7, P9, and P11. The ultrasonic vocalizations of each animal were monitored for 2 min in a sound-attenuating chamber (Med Associates Inc.). For Atoh1 Cre/+ ;Vglut2 fl/fl mice and littermate controls, vocalizations were recorded using a CM16 microphone (Avisoft Bioacoustics). Sound was amplified and digitized using UltraSoundGate 416H at a 250 kHz sampling rate and a bit depth of 16, while Avisoft RECORDER software was used to collect the recordings. 
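For readers implementing the spike-train statistics defined in the electrophysiology analysis above, a minimal Python sketch follows (the study itself used Spike2 and MATLAB; the function name is ours, and CV2 is written exactly as the text defines it, without the factor of 2 that some published definitions include):

```python
from statistics import mean, pstdev

def spike_train_metrics(spike_times):
    """Firing rate, CV, and CV2 from a sorted list of spike times (s).

    rate = spikes per second over the recording span;
    CV   = SD(ISI) / mean(ISI)  (population SD assumed here);
    CV2  = mean over adjacent ISI pairs of |ISI2 - ISI1| / (ISI2 + ISI1).
    """
    isi = [b - a for a, b in zip(spike_times, spike_times[1:])]
    rate = len(spike_times) / (spike_times[-1] - spike_times[0])
    cv = pstdev(isi) / mean(isi)
    cv2 = mean(abs(b - a) / (b + a) for a, b in zip(isi, isi[1:]))
    return rate, cv, cv2
```

For a perfectly regular spike train CV and CV2 are both near zero; increasingly irregular firing pushes both toward 1.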
For Ntsr1 Cre ;Vglut2 fl/fl mice and littermate controls, we measured vocalizations using a Noldus microphone and UltraVox XT software. Due to minimal congruency in the number of vocalizations recorded by these two recording systems 79 , we only compared mutants to control littermates whose vocalizations were measured using the same system. Behavioral analysis—open field assay We measured open-field activity using automated Fusion software. Mice were placed in the center of an open field (a 40 × 40 × 30 cm chamber) equipped with photo beams for detecting horizontal and vertical activity. The chamber was placed in a room with the light set to 200 lux and ambient white noise at 60 dB. Each mouse was allowed to explore the chamber for 30 min. Behavioral analysis—3-chamber assay We used the three-chamber test 80 , 81 to assess the sociability of adult mice. The apparatus consisted of three chambers—a center chamber with doorways to two side chambers, all of equal dimensions (42.5 cm length; 17.5 cm width; 23 cm height). For three days prior to testing, the age- and sex-matched mice to be used as partners for Atoh1 Cre/+ ;Vglut2 fl/fl , Ntsr1 Cre ;Vglut2 fl/fl , and their control littermates during the social interaction test were placed under a wire cup for 1 h each day. The assay consisted of a 10-min habituation session followed by a 10-min test phase, both performed in dim lighting conditions (15 lux). During the habituation session, the test subject was placed in the center chamber and allowed to freely explore for 10 min. Once the subject returned to the center chamber, the doorways to the side chambers were blocked with plexiglass walls. Before the test phase started, a novel partner mouse (pre-habituated to the apparatus over the 3 prior days) was placed under a wire cup in one side chamber, while a novel object (a Lego block of similar size and color) was placed under a wire cup in the other side chamber. 
The plexiglass walls covering the doorways were then lifted, and the test session began as the test subject was allowed to freely investigate all three chambers for 10 min. The amount of time spent interacting with either the novel object or the novel mouse during the two phases was scored manually in ANY-Maze by an experimenter blinded to genotype. Additional activity data acquired from each chamber during the two sessions were calculated automatically using ANY-Maze. The placement of the novel mouse and novel object within the side chambers was randomized to prevent chamber bias. Behavioral analysis—rotarod We assessed rotarod performance using a previously established protocol 53 . As is standard for this task, mice were placed on an accelerating rod on which they locomote according to the motion of the rod (4–40 rpm over 5 min) (ENV-576M and ENV-571M, Med Associates, Inc.). Time was stopped and noted when one of three events occurred: the mouse fell off, the mouse made two consecutive rotations while hanging on the rod without walking, or the mouse successfully stayed on the rod for the total duration of the trial (5 min). We recorded the trial duration for four trials per day for three consecutive days. Behavioral analysis—tremor We recorded tremors using our custom-built tremor monitor 51 . The tremor monitor consists of a lightweight plastic container that is suspended in the air by eight elastic cords, one on each corner. The container is large enough for mice to move around and explore freely, providing us with readings related to movement. An accelerometer is mounted onto the underside of the container, where it converts the detected movements into an electrical signal that is digitized using a Brownlee amplifier and recorded using Spike2 software. We also used the Spike2 software to compute the tremor power using a Fourier transform analysis based on two full minutes of mouse movements. 
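As an illustration of the Fourier-transform tremor-power analysis described above, here is a small self-contained Python sketch. It stands in for the commercial Spike2 analysis; the sampling rate, trace duration, and 10 Hz test oscillation below are invented for the example, not values from the study:

```python
import cmath
import math

def power_spectrum(signal, fs, fmax=30.0):
    """Direct (naive) discrete Fourier transform of an accelerometer
    trace; returns (frequency_hz, power) pairs for bins up to fmax."""
    n = len(signal)
    spectrum = []
    for k in range(int(fmax * n / fs) + 1):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))
        spectrum.append((k * fs / n, abs(coeff) ** 2 / n))
    return spectrum

# Synthetic 2-s accelerometer trace sampled at 200 Hz containing a
# pure 10 Hz oscillation standing in for a tremor.
fs = 200.0
trace = [math.sin(2 * math.pi * 10.0 * i / fs) for i in range(int(2 * fs))]
peak_freq, peak_power = max(power_spectrum(trace, fs), key=lambda fp: fp[1])
```

In practice the spectral peak within the tremor band, or the summed power across that band, is taken as the tremor power.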
For the recordings, mice were placed in the tremor monitor and were allowed to acclimate for three minutes. After this period, the recordings for analysis were initiated once the animals started actively exploring and utilizing the space. Statistical analysis All statistical analyses were performed using MATLAB (Mathworks). When performing a two-way (sex*genotype) ANOVA, we did not observe an effect of sex or an interaction effect between sex and genotype on pup behavior. We therefore combined mice from both sexes in the experiments and statistical analyses. We performed a two-tailed t-test to assess differences between control and conditional knockout mice when only a single time point was involved. For analyses of pup behavior and rotarod performance, we performed a repeated measures ANOVA with a Tukey post hoc analysis to test for differences at each time point. For the analyses of electrophysiological recordings in Purkinje cells, we used a linear mixed model analysis with genotype as a fixed variable and mouse number as a random variable. We calculated Cohen’s d for effect size (and reported these in the figure legends) by dividing the absolute difference in the parameter average of each group by the combined standard deviation. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The data that support the findings of this study are available from the corresponding author. Source data are provided with this paper. Code availability The code used for this study is available from the corresponding author.
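The effect-size computation described in the statistics section can be written out explicitly. The sketch below interprets the paper's "combined standard deviation" as the pooled standard deviation, which is an assumption on our part since no explicit formula is given:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: |mean(a) - mean(b)| divided by the pooled SD.

    'Pooled SD' is our reading of the paper's 'combined standard
    deviation'; the paper does not spell out its exact formula.
    """
    na, nb = len(group_a), len(group_b)
    pooled_sd = sqrt(((na - 1) * stdev(group_a) ** 2
                      + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return abs(mean(group_a) - mean(group_b)) / pooled_sd
```

Because the numerator uses the absolute difference, the result is always non-negative, matching the convention of reporting effect sizes without sign in figure legends.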
The cerebellum, a major part of the hindbrain in all vertebrates, is important for motor coordination, language acquisition, and regulating social and emotional behaviors. A study led by Dr. Roy Sillitoe, professor of Pathology and Neuroscience at Baylor College of Medicine and investigator at the Jan and Dan Duncan Neurological Research Institute (Duncan NRI) at Texas Children's Hospital, shows two distinct types of cerebellar neurons differentially regulate motor and non-motor behaviors during development and in adulthood. The study, published in Nature Communications, provides the first in vivo evidence supporting the critical role of a specific subset of excitatory glutamatergic neurons in acquiring motor and sensory/emotional behaviors. Further, it shows that neurons present in different regions of the cerebellum contribute differently to motor versus non-motor behaviors during development and in adulthood. Cerebellar circuits are established by two major types of neurons The cerebellar nuclei are present in the deepest layer of the cerebellum. These nuclei are encased by an outer highly convoluted sheet of tissue called the cerebellar cortex, which contains most of the other types of neurons in the cerebellum. The cerebellar cortex receives information from most parts of the body and other brain regions. These inputs are integrated by many types of cerebellar neurons and the deep-set cerebellar nuclei—the sole output structures in the cerebellum—then send those signals to the other parts of the brain. During development, cerebellar injury in preterm infants is often associated with movement disorders, language impairments, and social deficits. However, growing evidence in patients and animal models suggests that the site of injury and its relative severity determines the type and extent of the resulting symptoms. 
Unraveling the function of two types of cerebellar neurons "Our goal in undertaking this study was to determine if excitatory neurons in the cerebellar cortex and cerebellar nuclei act differentially to establish and maintain motor and social behaviors during developmental stages and in adulthood," lead author Dr. Meike van der Heijden, a postdoctoral fellow in the Sillitoe lab at the time of the study, said. "Several recent studies have hinted at discrete roles for various cerebellar neuronal types, and these findings inspired us to conduct a deeper examination of how relatively few neuron types in the cerebellum contribute to a wide range of motor and non-motor functions. When we embarked on this study, very little was known about how cerebellar circuit-associated behaviors originate, and whether the same neuronal subtypes contribute equally to the acquisition of these diverse behaviors." Dr. van der Heijden and Alejandro G. Rey Hipolito, a graduate student in the Sillitoe lab, focused on the excitatory glutamatergic neuronal lineages in the cerebellum because it is commonly believed that these neuronal lineages drive the majority of cerebellar behaviors. Dissecting the functional contributions of these two neuronal lineages in the acquisition of different cerebellar-dependent behaviors requires the use of non-invasive and cell-type-specific manipulations during circuit development. They employed a combination of intersectional genetics and behavioral paradigms, which allowed them to address this question with unparalleled precision and specificity in mouse models at various developmental ages. 
Neurons of the cerebellar cortex control social behaviors whereas cerebellar nuclei neurons regulate motor function The team found that silencing the excitatory lineages in the cerebellar cortex and cerebellar nuclei in early postnatal stages, by genetically removing the Vglut2 gene from Atoh1-expressing neurons, caused severe impairments in both motor and social vocalization behaviors. However, by the time these Atoh1 mutant mice reached adulthood, natural molecular transitions resulted in the normalization of cerebellar cortex function, which to their surprise coincided with the restoration of social behaviors and only mild motor deficits in these mice. This finding indicated that the early social deficits in these mice likely arose from cerebellar cortex dysfunction, with normal social behaviors emerging only as the function of cerebellar cortex neurons progressively normalized. To test this hypothesis, they eliminated neurotransmission from a subset of glutamatergic nuclei neurons using the Ntsr1-Cre driver. Upon repeating the same behavioral paradigms, they did not observe any social deficits but observed severe motor deficits in early postnatal mice that were fully resolved with age. "Together, several major novel findings emerged from our experiments," co-first author Alejandro Rey Hipolito said. "First, we were surprised to find that silencing the excitatory neurons did not impair all cerebellar functions. Second, we observed that glutamatergic neurotransmission from cerebellar cortical versus cerebellar nuclei neurons regulates the acquisition of motor and social behaviors differentially—the cerebellar cortex neurons control the acquisition of social skills whereas the cerebellar nuclei affect the establishment of motor behaviors. Finally, it appears that the brain is able to compensate for some, but not all, perturbations that occur in the developing cerebellum." 
"This study has not only led to several important discoveries about the roles of different cerebellar neurons but has opened several interesting questions about the role of inhibitory GABAergic nuclei neurons in compensating for the loss of excitatory glutamatergic neurons and restoring function, which we intend to explore in the future," Dr. Sillitoe added. "Moreover, these findings offer several exciting and new possibilities to regulate specific cerebellar lineages to restore motor and non-motor functions after brain injury and disease."
10.1038/s41467-023-38475-9
Biology
More tomatoes, faster: Accelerating tomato engineering
Sarika Gupta et al, Modification of plant regeneration medium decreases the time for recovery of Solanum lycopersicum cultivar M82 stable transgenic lines, Plant Cell, Tissue and Organ Culture (PCTOC) (2016). DOI: 10.1007/s11240-016-1063-9
http://dx.doi.org/10.1007/s11240-016-1063-9
https://phys.org/news/2016-08-tomatoes-faster-tomato.html
Abstract Tomato ( Solanum lycopersicum ) has rapidly become a valuable model species for a variety of studies including functional genomics. A high-throughput method to obtain transgenic lines sooner than standard methods would greatly advance gene function studies. The goal of this study was to optimize our current transformation method by investigating medium components that would result in a decreased time for recovery of transgenics. For this study, 6-day-old cotyledon explants from Solanum lycopersicum cultivar M82 in vitro-grown seedlings were infected with the Agrobacterium tumefaciens strain LBA4404 containing the binary vector pBI121. This vector contains the β-glucuronidase reporter gene and the neomycin phosphotransferase II selectable marker gene that confers resistance to kanamycin. Modification of our standard plant regeneration medium with indole-3-acetic acid (IAA) at concentrations of either 0.05 or 0.1 mg/l decreased the recovery time for transgenic lines by 6 weeks as compared to our standard medium that contains zeatin as the only plant growth regulator. We observed 50 and 54 % transformation efficiency on plant regeneration medium containing 0.05 and 0.1 mg/l IAA, respectively. Moreover, addition of 1 mg/l IAA to the root induction medium resulted in earlier root development than medium that did not contain IAA. Addition of IAA to the plant regeneration and rooting media did not have any negative effects on plant development. Recovery of transgenic lines in a shorter time results in higher throughput for the introduction of gene constructs and has the potential to decrease the time and resources needed to complete investigations of gene function. Introduction Tomato, Solanum lycopersicum , is a member of the Solanaceae family, which contains approximately 3000 plant species and includes some of the most economically important food crops. 
It is native to South America and was brought to Europe in the 1500s and then to North America in the 1800s (Jones 1998 ). Tomato is a perennial plant that has two different growth habits, determinate and indeterminate. There are two different market types of tomatoes, fresh market and processing. According to the Agricultural Marketing Resource Center, in 2014 the US value of fresh market tomatoes was $1.14 billion and that of processing types, which are used to make products such as juice, sauces, and ketchup, was $1.325 billion (Bombarely et al. 2011 ). In addition to being an economically important food crop, tomato is an excellent source of health beneficial nutrients including beta-carotene and lycopene. Over the years, utilization of tomato as a model plant species has increased because of readily available resources such as mutant populations (Emmanuel and Levy 2002 ), bioinformatics tools (Bombarely et al. 2011 ), and a high quality reference genome (Consortium 2012 ). In addition, since the very first report of Agrobacterium -mediated transformation of tomato by McCormick et al. ( 1986 ), there have been other reports of successful transformations of different genotypes (Chyi and Phillips 1987 ; Fillatti et al. 1987 ; Frary and Earle 1996 ; Park et al. 2003 ; Sun et al. 2006 ; Van Eck et al. 2006 ) and methods to improve transformation efficiency (Dan et al. 2016 ). A key aspect for the adoption of a model plant species is the availability of efficient transformation methodology. This was certainly the case for Arabidopsis , which is by far the most widely used model for plant research programs (Somerville and Koornneef 2002 ). While there are several methods available for plant transformation, Agrobacterium tumefaciens -mediated transformation has become the most extensively used method (Gelvin 2003 ; Pitzschke and Hirt 2010 ). Despite its effectiveness for gene transfer in tomato, there is still a need for improvement. 
Improving methodology to decrease the time from introduction of a gene construct of interest to recovery of stable transgenics would improve the throughput and shorten the timeframe for studies that utilize tomato transgenic lines. We were interested in finding an approach to decrease the time to obtain transgenic lines of the processing type tomato M82 because this genotype is used for gene function studies in our lab as well as others (Brooks et al. 2014 ; Xu et al. 2015 ). We chose to start by investigating supplementation of our standard plant regeneration and rooting media with a growth regulator that had the potential to speed up plant development (Van Eck et al. 2006 ). Cytokinins and auxins are important hormones that influence growth and developmental processes in plants. Interactions between cytokinins and auxins have been shown to be necessary for the shoot apex growth (Gupta and Rashotte 2012 ; Shimizu-Sato et al. 2009 ). Auxin has also been shown to play a role in the specification of the root apical meristem (Friml et al. 2003 ; Gupta and Rashotte 2012 ; Sabatini et al. 1999 ). The hormonal interactions can be utilized in the area of tissue culture to leverage the presence of the hormones in the medium. In this study, we report the effects of the addition of the auxin, indole-3-acetic acid (IAA) on the recovery time of M82 transgenic lines. Materials and methods Plant material Seeds of Solanum lycopersicum cv M82 were surface sterilized in 20 % (v/v) bleach solution containing Tween-20 for 20 min followed by 3 rinses in sterile water. Seeds were germinated in Magenta GA7 boxes (Caisson Labs, Logan, UT) that contained 50 ml of Murashige and Skoog (MS) (Murashige and Skoog 1962 ) (Caisson Labs) based medium containing 2.15 g/l MS salts, 100 mg/l myo-inositol, 2 mg/l thiamine, 0.5 mg/l pyridoxine, 0.5 mg/l nicotinic acid, 10 g/l sucrose and 8 g/l Sigma agar (Sigma-Aldrich, St. Louis, MO). 
Cultures were maintained at 24 °C under a 16 h light/8 h dark photoperiod at 57–65 µE m −2 s −1 . One day prior to infection with Agrobacterium , cotyledon explants and feeder layer plates were prepared. Feeder layers were prepared before cutting the explants by dispensing 2 ml of a 1-week-old NT1 suspension culture onto KCMS medium [4.3 g/l MS salts, 100 mg/l myo-inositol, 1.3 mg/l thiamine, 0.2 mg/l 2,4-dichlorophenoxy acetic acid, 200 mg/l KH 2 PO 4 , 0.1 mg/l kinetin, 30 g/l sucrose, 5.2 g/l Agargel (Sigma-Aldrich), pH 6.0]. The suspension was covered with a sterile 7 cm Whatman filter paper. Explants were excised from 6-day-old seedlings before the first true leaves emerged. To prepare the explants, seedlings were placed on a sterile paper towel moistened with sterile water. Cotyledons were excised at the petioles, cut into approximately 1 cm sections, placed adaxial side down on the KCMS feeder layer plates, and maintained at 24 °C under a 16 h light/8 h dark photoperiod at 57–65 µE m −2 s −1 . Bacterial strain and binary vector Electroporation was used to introduce the pBI121 vector (Chen et al. 2003 ) into the Agrobacterium tumefaciens strain LBA4404. A single, well-formed colony from the selection plate was transferred to 50 ml of YEP selective medium that contained 50 mg/l kanamycin and maintained in a shaking incubator at 28 °C for 18–24 h or the length of time needed to reach an OD 600 of 0.6–0.7. The Agrobacterium suspension was centrifuged at 8000 rpm for 10 min at 20 °C. The pellet was resuspended in 50 ml of 2 % MSO medium (4.3 g/l MS salts, 100 mg/l myo-inositol, 0.4 mg/l thiamine, and 20 g/l sucrose) by vortexing. Agrobacterium -mediated transformation Cotyledon explants were incubated in the Agrobacterium /2 % MSO suspension for 5 min, transferred to a sterile paper towel to allow excess suspension to briefly drain, placed back onto the feeder plates with the adaxial sides down, and co-cultivated in the dark at 19 °C for 48 h. 
Explants, adaxial side up, were transferred to our standard plant regeneration selective medium designated 2ZK that contained 4.3 g/l MS salts, 100 mg/l myo-inositol, 1 ml/l Nitsch vitamins (1000×), 20 g/l sucrose, 2 mg/l trans-zeatin, 75 mg/l kanamycin, 300 mg/l timentin, and 5.2 g/l Agargel. One week later, the explants were transferred onto 2ZK medium containing IAA at either 0, 0.01, 0.05, 0.1, or 0.5 mg/l. After 2 weeks, explants were transferred onto 1ZK medium that contained 4.3 g/l MS salts, 100 mg/l myo-inositol, 1 ml/l Nitsch vitamins (1000×), 20 g/l sucrose, 1 mg/l trans-zeatin, 75 mg/l kanamycin, 300 mg/l timentin, 5.2 g/l Agargel, and IAA at either 0, 0.01, 0.05, 0.1, or 0.5 mg/l, in plates or Magenta GA7 boxes depending upon the size of the shoots regenerating from the cotyledon explants. When shoots were approximately 3 mm tall, they were excised from the cotyledon explants and transferred to selective rooting medium designated RMK [4.3 g/l MS salts, 1 ml/l Nitsch vitamins (1000×), 30 g/l sucrose, pH 6.0, 8 g/l Difco Bacto agar (Becton, Dickinson and Company, Franklin Lakes, NJ), 75 mg/l kanamycin, 300 mg/l timentin, and IAA at either 0 or 1 mg/l in Magenta GA7 boxes]. Unless otherwise noted, the pH of all media was adjusted to 5.8 before autoclaving. For all media, the trans-zeatin, IAA, kanamycin, and timentin were dispensed from filter-sterilized stock solutions into autoclaved medium that was allowed to cool to 55 °C. Cotyledon explant cultures were transferred to freshly prepared medium every 2 weeks. GUS histochemical assay Histochemical assay of β-glucuronidase (GUS) activity was performed on leaves from putative transgenic and control (non-transformed) plants.
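The growth regulators and antibiotics described above are dispensed from filter-sterilized stocks into cooled, autoclaved medium, so the volume of stock needed for a target final concentration is a simple ratio. A minimal sketch; the stock concentrations in the example are illustrative assumptions, not values from the protocol:

```python
def stock_volume_ml(final_mg_per_l: float, medium_l: float, stock_mg_per_ml: float) -> float:
    """Volume (ml) of a filter-sterilized stock to add to cooled, autoclaved
    medium so that it reaches the target final concentration (mg/l)."""
    return final_mg_per_l * medium_l / stock_mg_per_ml

# Hypothetical stocks: 1 mg/ml IAA and 50 mg/ml kanamycin
print(stock_volume_ml(0.1, 1.0, 1.0))   # 0.1 ml of IAA stock per litre of 2ZK
print(stock_volume_ml(75, 1.0, 50.0))   # 1.5 ml of kanamycin stock per litre
```

The same calculation covers trans-zeatin and timentin; only the stock concentration changes.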
Leaves were vacuum infiltrated for 20–30 min in buffer (0.8 g/l 5-bromo-4-chloro-3-indolyl-β− d -glucuronide (X-Gluc), 0.1 M Na 2 HPO 4 , 0.1 M NaH 2 PO 4 , 10 mM ethylenediamine tetraacetic acid (EDTA), 1.6 mM potassium-ferricyanide and 1.6 mM potassium-ferrocyanide, 5 % v/v Triton X-100, and 20 % v/v methanol) before incubation at 37 °C overnight. The chlorophyll was removed from the leaves by 3–4 washes with 70 % ethanol at room temperature. The leaves were examined with a Leica S8APO stereomicroscope outfitted with a digital camera. Polymerase chain reaction analysis To confirm the presence of the neomycin phosphotransferase II selectable marker gene ( nptII ), DNA was extracted from leaves of putative transgenic lines and controls (non-transformed) with the Qiagen DNeasy plant mini kit (Hilden, Germany) as per the manufacturer’s instructions. Primers used to detect nptII were forward 5′-GGC TGG AGA GGC TAT TC-3′ and reverse 5′-GGA GGC GAT AGA AGG CG-3′. The diagnostic amplicon size expected with these primers is approximately 700 bp. The PCR program started with a one-step cycle of 2 min at 95 °C, followed by 29 cycles of 30 s at 94 °C, 45 s at 57 °C, 50 s at 72 °C, and a 10 min final extension at 72 °C. DNA was separated and visualized by electrophoresis through a 1 % agarose, ethidium bromide-stained gel. Experimental design A total of five different experiments were performed. Three biological replicates were used for each IAA concentration in each experiment. A total of 750 cotyledon explants were used per IAA concentration investigated. The standard error was calculated. Results Optimization of IAA concentration for recovery of stable transgenic lines After the co-cultivation period that followed infection with Agrobacterium , cotyledon explants were transferred to our standard selective plant regeneration medium designated 2ZK that contains 2 mg/l trans-zeatin as the only plant growth regulator.
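As a rough planning aid, the thermocycler program above can be totalled in code; ramp times are ignored, so the result is a lower bound on run time. A sketch:

```python
def pcr_runtime_s(initial_s: int, cycles: int, step_s: tuple, final_s: int) -> int:
    """Total programmed hold time (s) of a PCR run, ignoring ramp times."""
    return initial_s + cycles * sum(step_s) + final_s

# nptII program: 2 min at 95 °C; 29 cycles of 30 s / 45 s / 50 s; 10 min final extension
total = pcr_runtime_s(120, 29, (30, 45, 50), 600)
print(total, "s, i.e. about", round(total / 60, 1), "min")  # 4345 s, about 72.4 min
```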
One week later, the explants were transferred to 2ZK supplemented with different IAA concentrations (0, 0.01, 0.05, 0.1, 0.5 mg/l) to determine if the addition of IAA would decrease the time from infection with Agrobacterium to recovery of stable transgenic lines. We continued to use this same series of IAA concentrations in the subsequent selective plant regeneration medium designated 1ZK. Medium supplemented with IAA resulted in shoots that were more fully developed earlier in the culture process as compared to medium without IAA (Fig. 1 A). In Fig. 1 A, cotyledon cultures shown in a–e represent controls that were not infected with Agrobacterium . We observed that as the IAA concentration increased, the level of plant regeneration from the controls decreased (Fig. 1 A, a–d). Cotyledon explants infected with Agrobacterium and cultured on medium containing IAA exhibited the same pattern of shoot development as the cotyledons not infected, in that we observed more well-developed shoots at an early stage of culture post infection (Fig. 1 A, f–i). For our standard method without IAA (Fig. 1 A, e), the level of plant regeneration was significantly less in comparison with medium that contained IAA. Fig. 1 Results for the recovery of Solanum lycopersicum cv M82 stable transgenics from Agrobacterium tumefaciens -infected cotyledon explants cultured on plant regeneration medium supplemented with different concentrations of indole-3-acetic acid (IAA). A Agrobacterium tumefaciens -infected cotyledon explants (approximately 5 weeks post infection) cultured on selective plant regeneration medium containing the following amounts of IAA in mg/l ( f ) 0.01, ( g ) 0.05, ( h ) 0.1, ( i ) 0.5, and ( j ) 0. Images ( a )–( e ) represent the corresponding non-infected controls for each IAA concentration, respectively.
B Histochemical analysis for GUS expression in leaves taken from independent transgenic lines designated 1–9 recovered from selective plant regeneration medium that contained 0.1 mg/l IAA. GUS expression was not observed in the non-transformed controls. C Agarose gel of PCR products showing the expected ~700 bp product amplified from the nptII selectable marker gene in 10 independent transgenic lines ( lanes 1 – 10 ). These lines were recovered from selective plant regeneration medium that contained 0.1 mg/l IAA. C the control. Fig. 2 Schematic representation of the optimized Agrobacterium tumefaciens -mediated transformation methodology for Solanum lycopersicum cv M82. See the “ Materials and Methods ” for details on seed sterilization and all media compositions. In general, earlier emergence of well-developed shoots from Agrobacterium -infected cotyledon explants on medium containing IAA translated to the recovery of whole rooted plants in less time as compared with medium that did not contain IAA (Table 1 ). Medium containing either 0.05 or 0.1 mg/l IAA resulted in the shortest time, 11 weeks, for recovery of stable transgenic lines. There appeared to be a threshold of IAA concentration and effect on recovery time because at 0.5 mg/l IAA the time was similar to our standard method. We observed a similar decrease in time when transformations of other tomato genotypes were performed by different lab members who tested plant regeneration medium that contained 0.1 mg/l IAA (data unpublished).
Table 1 Results for recovery of stable transgenic lines of Solanum lycopersicum cv M82 from Agrobacterium tumefaciens -infected cotyledon explants cultured on selective plant regeneration medium supplemented with different indole-3-acetic acid (IAA) concentrations Effect of IAA on transformation efficiency and rooting The formula below was used to calculate transformation efficiency (TE): TE (%) = (total number of rooted shoots / total number of cotyledon explants infected with Agrobacterium) × 100. Overall, the TE was lower when medium containing IAA was used as compared to the TE of 88 % when medium containing trans-zeatin as the only growth regulator was used (Table 1 ). When putative transgenic lines were approximately 3–4 cm tall, they were removed from the cotyledon explants and transferred to either our standard selective rooting medium (RMK) without IAA or RMK supplemented with 1 mg/l IAA designated RMIK. We chose this concentration based on previous work with tomato transgenic lines recovered from a few genotypes that did not root as well as M82 on our standard rooting medium (data not published). We observed that shoots cultured on RMIK resulted in the emergence of roots after 6–7 days as compared to 11–14 days on RMK. The addition of IAA to the medium did not result in any phenotypic differences of the plants as compared to medium that did not contain IAA. Characterization of putative transgenic lines The first level of analysis to confirm the recovered plants from Agrobacterium -infected cotyledons were transgenic was a histochemical assay for the GUS reporter protein. Whole leaves from plants rooted on RMK were used for the analysis. All leaves exhibited GUS activity, although we observed variation in the level of intensity with some leaves exhibiting a darker coloration than others (Fig. 1 B). GUS activity was not observed in leaves from non-transformed control plants.
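The transformation-efficiency formula above maps directly to code. The 750 explants per concentration and the 88 % figure are from the text; the rooted-shoot count in the example is an illustrative number chosen to reproduce that percentage:

```python
def transformation_efficiency(rooted_shoots: int, infected_explants: int) -> float:
    """TE (%) = total rooted shoots / total cotyledon explants infected * 100."""
    return 100.0 * rooted_shoots / infected_explants

# 660 rooted shoots (illustrative) out of 750 infected explants gives the 88 %
# reported for the standard trans-zeatin-only medium.
print(transformation_efficiency(660, 750))  # 88.0
```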
To further confirm the recovered plants were indeed stable transgenic lines, we performed PCR analysis for the presence of the nptII selectable marker gene in plants found to be positive for GUS activity. Total genomic DNA was isolated from the leaves of the GUS-positive lines and non-transformed control plants. PCR amplification of the nptII gene was detected in plants that were also GUS positive. No amplified product was detected in DNA from the control (non-transgenic) plants (Fig. 1 C). Modified protocol Based on our findings, we now follow a modified protocol as outlined in Fig. 2 for Agrobacterium -mediated tomato transformations. The modified protocol takes into account recovery time and TE. IAA concentrations of 0.05 and 0.1 mg/l both resulted in a 6-week decrease in the time to recovery of stable transformants; however, we chose to use 0.1 mg/l IAA in our modified protocol because of the 54 % TE (Table 1 ). In addition to M82, we have applied this protocol to other tomato genotypes including the most closely related wild species, Solanum pimpinellifolium , and also observed a decrease in time for recovery of transgenic lines as compared to our previous tomato transformation methodology (data not published). Discussion For development of stable transformation methodology, the foremost factors to be considered are transformation efficiency and the time from infection with Agrobacterium tumefaciens until the recovery of transgenic lines. Various parameters have been investigated to reach a high transformation efficiency for tomato including application of lipoic acid to reduce tissue necrosis caused by Agrobacterium infection of the MicroTom genotype (Dan et al. 2016 ). Methods that provide both high efficiency and the shortest time to recovery of transgenic lines lead to a high-throughput pipeline that allows earlier evaluation of gene function.
In turn, a high-throughput pipeline decreases the amount of labor and resources needed, which can translate into significant financial savings. The focus of our study was to investigate medium components that had the potential to decrease the time for recovery of stable tomato transgenic lines. Our standard method, which has a high transformation efficiency at approximately 90 %, takes 17 weeks for recovery of transformants. The interest in optimization of our methods stemmed from an increased need for transgenic lines because tomato has become the model species of choice for many studies that include ripening, abiotic and biotic tolerance, and nutritional content (Gonzali et al. 2009 ; Martel et al. 2011 ; Nguyen et al. 2010 ; Sun et al. 2010 ). In addition, with the recent demonstration of successful genome editing by CRISPR/Cas9 in tomato, the interest in applying this technology for the study of gene function will increase (Brooks et al. 2014 ; Ito et al. 2015 ). Therefore, a transformation methodology that can deliver modified lines in a shorter time frame will help to advance these studies. Our standard protocol is a modified version of methods reported by Fillatti et al. ( 1987 ) in which zeatin is the only growth regulator incorporated into the plant regeneration medium (Van Eck et al. 2006 ). We chose to start our investigation by examining additional growth regulators that, in combination with zeatin, would greatly reduce the time for recovery of stable transgenic lines but not have a significant negative effect on transformation efficiency. In a literature search, we found several reports that demonstrated a positive effect on tomato plant regeneration and transformation efficiency when indole-3-acetic acid (IAA) was incorporated into zeatin-containing plant regeneration medium (Gubis et al. 2004 ; Park et al. 2003 ; Yasmeen 2009 ). However, they did not report any effects observed on the time required to recover transgenic plants. 
We found that addition of either 0.05 or 0.1 mg/l IAA to our standard plant regeneration medium that contains trans-zeatin as the only growth regulator decreased the time for recovery of stable transgenic lines from 17 to 11 weeks. Previous reports have demonstrated that shoot apical meristem development involves interactions among cytokinin signaling pathway components, auxin, and several families of transcription factors (Gupta and Rashotte 2012 ). It is possible that the addition of IAA to our standard plant regeneration medium facilitates interactions among the cytokinin signaling and auxin regulated genes, which results in faster shoot development from the cotyledon explants. Although there was a reduction in transformation efficiency with the addition of IAA from approximately 90 % to about 50 %, this level is acceptable considering transgenic lines can be evaluated significantly earlier than when our standard method was used. This decrease in time allows researchers to test their material earlier and make changes to their approaches sooner if results are unsatisfactory for their genes of interest. In addition to supplementation of the standard plant regeneration medium with IAA, we also investigated effects of adding IAA to the rooting medium, which was not a component in our standard rooting medium. Inclusion of IAA in in vitro rooting medium has been reported for tomato, however, it is not routinely added because tomato readily develops roots in culture medium without growth regulators (Frary and Earle 1996 ). Our interest was to determine if supplementation decreased the time to rooting, which we did observe. Auxin is produced in both shoots and roots and the auxin produced in the roots helps in root development (Overvoorde et al. 2010 ; Petersson et al. 2009 ; Stepanova et al. 2008 ). It is possible that IAA, when exogenously added, increases the levels of auxin in the plants, hence resulting in the cells differentiating earlier to form roots. 
However, research needs to be conducted to confirm this hypothesis. Conclusions Interest in tomato as a model has increased over the years and we have seen a rise in the number of research groups that require stable transgenic lines for various studies. Modification of our standard plant regeneration medium through the addition of either 0.05 or 0.1 mg/l IAA shortened the recovery time of transgenic lines by 6 weeks for the M82 tomato cultivar. Application of this modification for transformation of other tomato genotypes in our lab also resulted in a decreased time for recovery of stable transgenic lines. A shorter recovery time for stable transgenic lines is highly desirable for functional studies to allow earlier determination of the genes and networks involved in phenotypes of interest. A decrease in recovery time would also provide a higher throughput process, which has the potential for cost savings related to labor and resources. Optimization studies of standard transformation methodologies for different plant species should always be considered in order to alleviate bottlenecks for generation of stable transgenic lines (Altpeter et al. 2016 ). Availability of efficient transformation methods is especially critical with the rapid development of genome editing technologies, which will result in an increased demand for generation of transgenic lines for basic research studies that can lead to crop improvement.
Tomatoes are already an ideal model species for plant research, but scientists at the Boyce Thompson Institute (BTI) just made them even more useful by cutting the time required to modify their genes by six weeks. While looking for ways to make tomatoes and other crop plants more productive, BTI Assistant Professor Joyce Van Eck and former postdoctoral scientist Sarika Gupta developed a better method for "transforming" a tomato—a process that involves inserting DNA into the tomato genome and growing a new plant. By adding the plant hormone auxin to the medium that supports growth of tomato cells, they can speed up the plant's growth, ultimately accelerating the pace of their research. They describe this advance in a study published in Plant Cell, Tissue and Organ Culture. Typically, transformation works by using a soil bacterium called Agrobacterium tumefaciens to insert a new segment of DNA into the cells of tomato seedling tissues. The transformed cells are transplanted onto plant regeneration medium, which contains nutrients and hormones that cause the tissue to grow into a tiny new plant. These plantlets are then transferred to root induction medium where they grow roots, before being planted in soil and hardened in the greenhouse. In the new method, the Van Eck lab adds auxin to the regeneration and rooting media. The addition reduces the length of the procedure from 17 weeks to just 11. A researcher transfers a tomato plantlet into root induction medium to encourage the growth of roots. Credit: Sheryl Sinkow "If you can speed up the plant development, which is what the auxin is doing, you can decrease the time it takes to get genetically engineered lines," said Van Eck. Researchers in the Van Eck lab perform tomato transformations routinely, as a research method to understand how individual genes affect tomato growth and development. Their new protocol not only saves time, but uses fewer materials, and saves money. 
The researchers can then finish experiments sooner and potentially run more projects at once. The project came out of a collaboration with Cold Spring Harbor Laboratory to identify gene pathways that could be used to breed crops with higher yields. "We're looking at the genes and gene networks involved in stem cell proliferation, meristem development and flowering and branching," said Van Eck, "with the end goal being that maybe genes that we identify in tomato, which is strictly being used as a model, might help us understand what can be done to increase yield in other crops."
10.1007/s11240-016-1063-9
Chemistry
Study yields first snapshots of water splitting in photosynthesis
Serial time-resolved crystallography of photosystem II using a femtosecond X-ray laser, Nature, DOI: 10.1038/nature13453 Journal information: Nature
http://dx.doi.org/10.1038/nature13453
https://phys.org/news/2014-07-yields-snapshots-photosynthesis.html
Abstract Photosynthesis, a process catalysed by plants, algae and cyanobacteria, converts sunlight to energy, thus sustaining all higher life on Earth. Two large membrane protein complexes, photosystem I and II (PSI and PSII), act in series to catalyse the light-driven reactions in photosynthesis. PSII catalyses the light-driven water splitting process, which maintains the Earth’s oxygenic atmosphere 1 . In this process, the oxygen-evolving complex (OEC) of PSII cycles through five states, S 0 to S 4 , in which four electrons are sequentially extracted from the OEC in four light-driven charge-separation events. Here we describe time-resolved experiments on PSII nano/microcrystals from Thermosynechococcus elongatus performed with the recently developed 2 technique of serial femtosecond crystallography. Structures have been determined from PSII in the dark S 1 state and after double laser excitation (putative S 3 state) at 5 and 5.5 Å resolution, respectively. The results provide evidence that PSII undergoes significant conformational changes at the electron acceptor side and at the Mn 4 CaO 5 core of the OEC. These include an elongation of the metal cluster, accompanied by changes in the protein environment, which could allow for binding of the second substrate water molecule between the more distant protruding Mn (referred to as the ‘dangler’ Mn) and the Mn 3 CaO x cubane in the S 2 to S 3 transition, as predicted by spectroscopic and computational studies 3 , 4 . This work shows the great potential of time-resolved serial femtosecond crystallography for investigation of catalytic processes in biomolecules. Main The first X-ray structure of PSII was determined to a resolution of 3.8 Å in 2001 (ref. 5 ) revealing the protein’s architecture and the overall shape and location of the OEC. In 2011, Shen and co-workers achieved a breakthrough in the structural elucidation by dramatically improving crystal quality, enabling structure determination at 1.9 Å resolution 6 .
This structure showed the OEC at near atomic resolution. However, the OEC was probably affected by X-ray damage, a fundamental problem in X-ray crystallography. The X-ray damage problem may be overcome through the use of serial femtosecond crystallography (SFX) 2 , 7 , 8 , an advance enabled by the advent of the X-ray free electron laser (XFEL). In SFX, a stream of microcrystals in their mother liquor is exposed to intense 120 Hz femtosecond XFEL pulses, thereby collecting millions of X-ray diffraction ‘snapshots’ in a time-frame of hours. Each X-ray FEL pulse is so intense that it destroys the sample; however, the pulse duration is so short that diffraction is observed before destruction occurs 9 . Conventional X-ray structures correspond to a temporal and spatially averaged representation of biomolecules, leading to a ‘static’ picture. To capture dynamic processes such as water oxidation in PSII, time-resolved X-ray data can be collected using SFX 10 , 11 , 12 . Conformational changes may be observed at a time-resolution ranging from femtoseconds to microseconds by combining visible laser excitation with the SFX setup and varying time delays between the optical pump and the X-ray probe snapshot. As partial reflections from crystals in random orientations are recorded, many snapshots must be collected for adequate sampling of the full reflections and three-dimensional reconstruction. A time-resolved pump-probe experiment was performed in 2010 using PSI-ferredoxin crystals as a model system, in which changes in diffraction intensities, consistent with a light-induced electron transfer process in the PSI-ferredoxin complex and dissociation of the PSI-ferredoxin complex were seen 10 . The catalytic reaction in PSII is a dynamic process. The oxygen evolution reaction is catalysed by the oxygen-evolving complex (OEC), in which the electrons are extracted from the OEC in four sequential charge separation events through the S-state cycle (Kok cycle), as shown in Fig. 
1a (see ref. 1 for a review). SFX diffraction and X-ray emission spectroscopy (XES) were reported investigating the dark S 1 state and the single flash (S 2 state) of PSII 13 . The XES data show that the electronic structure of the highly radiation sensitive Mn 4 CaO 5 cluster does not change during femtosecond X-ray exposure 13 . However, the quantity and quality of X-ray diffraction data were insufficient to determine if any structural changes occurred. Figure 1: Experimental schemes for the time-resolved serial femtosecond crystallography experiments on photosystem II. a , S-state scheme of the oxygen-evolving complex depicting changes in the oxidation state of the four manganese ions of the Mn 4 CaO 5 cluster in the S-state cycle (*note that the oxidation states of the Mn atoms in the S 4 state are still under debate). The scheme also indicates the reduction of the plastoquinone (PQ) to plastoquinol (PQH 2 ) in the Q B site. The blank boxes represent the unoccupied PQ B binding site. b, Experimental setup. The crystal stream of photosystem II was exposed to two subsequent optical laser pulses at 527 nm before interacting with the femtosecond X-ray FEL pulses. With a FEL frequency of 120 Hz and triggering of the laser at 60 Hz, X-ray diffraction patterns from crystals in the dark state and ‘light’ double-flash state alternate. c , Laser excitation scheme. The first 527 nm laser pulse excited the crystals 110 μs after the trigger pulse. The delay time between the first and second 527 nm laser pulse was 210 μs, with X-ray diffraction data collected 570 μs after the second laser pulse. We report on microsecond time-resolved SFX experiments conducted at the CXI instrument 14 at the Linac Coherent Light Source (LCLS) 15 . The experimental setup is shown in Fig. 1b, c .
We developed a multiple-laser illumination scheme that progressively excites the OEC in dark-adapted PSII nano/microcrystals by two laser pulses from the dark S 1 state via the S 2 state to the double-flash putative S 3 state. Not all PSII centres progress to the next S-state by a single saturating flash, which could lead to heterogeneities. Therefore, the S-state reached in the double-flash experiment is indicated as ‘putative S 3 state’ here. The diffraction patterns collected from dark and illuminated crystals were sorted into two data sets. Using the ‘hit finding’ program Cheetah 16 , 71,628 PSII diffraction images were identified from the dark diffraction patterns and 63,363 were identified from the double-flash patterns (see Extended Data Fig. 2 for examples). From these hits, 34,554 dark state patterns and 18,772 double-flash patterns were indexed and merged to reduce all stochastic errors using the CrystFEL software suite 17 (see Extended Data Table 2a, b ). The data were indexed as orthorhombic, with unit-cell parameters of a = 133 Å, b = 226 Å, and c = 307 Å for the dark state, and a = 137 Å, b = 228 Å, and c = 309 Å for the double-flash state (for error margins see Table 1 ). The distributions of unit cell dimensions are shown in Extended Data Fig. 3 and Extended Data Table 2a, b . The data clearly support an increase in unit cell dimensions in the double-flash state, with the largest difference detected for the a axis. Two factors may explain the change in unit cell constants, the lower indexing rate and the slight decrease in diffraction resolution: crystal degradation upon laser illumination or significant structural changes upon the transition from the dark state to the double-flash state, which may represent the putative S 1 to S 3 transition. To distinguish between these two possibilities, we collected data with triple-flash excitation of the PSII crystals, where at least part of the PSII centres may have reached the putative transient S 4 state.
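The reported cell parameters imply a measurable swelling of the lattice on double-flash illumination. For an orthorhombic cell the volume is simply V = a·b·c, so the change can be checked directly; a back-of-the-envelope sketch using the quoted axis lengths:

```python
def orthorhombic_volume(a: float, b: float, c: float) -> float:
    """Unit-cell volume (cubic angstroms) of an orthorhombic lattice."""
    return a * b * c

v_dark  = orthorhombic_volume(133, 226, 307)  # dark S1 state
v_flash = orthorhombic_volume(137, 228, 309)  # double-flash (putative S3) state
expansion = 100 * (v_flash - v_dark) / v_dark
print(round(expansion, 1))  # about a 4.6 % increase in unit-cell volume
```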
Preliminary data evaluation of the triple-flash data set (that is, putative S 4 state) shows unit cell dimensions and crystal quality similar to those of the dark S 1 state (see Extended Data Fig. 3 and Extended Data Table 2a ). This suggests that conformational changes induced in PSII by the double-flash excitation (that is, during the putative S 1 to S 3 transition) are reversed after excitation with the third flash (in the putative S 3 to S 4 transition), as discussed in the Methods. Table 1 Statistics of the dark (S 1 ) and double-flash (putative S 3 ) data sets collected at CXI, LCLS Diffraction data from the dark and double-flash states were evaluated to 5 Å and 5.5 Å resolution, respectively; the data refinement statistics are shown in Table 1 . As each diffraction pattern represents a thin cut through reciprocal space by the Ewald sphere, only partial reflections were recorded. A high multiplicity of observations is therefore needed for each Bragg reflection to obtain full, accurate structure factors. The average multiplicity per reflection was 617 for the dark state data set and 383 for the double-flash data set over the whole resolution range (see Extended Data Tables 1a, b ). Extended Data Table 2c shows a comparison of the data statistics of this work with the S 1 and S 2 data in ref. 13 . The data were phased by molecular replacement using a truncated version of the 1.9 Å structure (PDB accession code 3ARC) 6 . Rigid body refinement (phenix.refine) was performed for both the dark and double-flash structures (see Methods for further details on molecular replacement and refinement). To reduce model bias, we calculated omit maps and simulated annealed maps (SA-omit maps) for the dark and double-flash data, deleting the coordinates of the Mn 4 CaO 5 cluster from the model. Figure 2a–c shows the arrangement of protein subunits and cofactors of photosystem II, including the electron transport chain.
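The reliance on high multiplicity reflects the Monte Carlo nature of SFX merging: each snapshot records a random partial slice of a reflection, and averaging many such observations converges to a value proportional to the full intensity. A toy illustration of that convergence (not the actual CrystFEL algorithm), assuming partialities drawn uniformly between 0 and 1:

```python
import random

def merged_intensity(i_true: float, n_obs: int, rng: random.Random) -> float:
    """Mean of n_obs partial observations of one reflection; with uniform(0, 1)
    partialities the mean converges to 0.5 * i_true as the multiplicity grows."""
    return sum(i_true * rng.random() for _ in range(n_obs)) / n_obs

rng = random.Random(0)
# At a multiplicity comparable to the 617 reported for the dark data set,
# the merged estimate clusters tightly around half the true intensity.
print(merged_intensity(1000.0, 617, rng))
```

In practice the merged values are placed on a common scale during merging; the toy model only shows why the estimate tightens as multiplicity grows.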
The comparison of the electron density maps for the dark state (green) and the double-flash state (white) at a contour level of 1.5 σ is shown in Fig. 2d–f . Both maps show clear electron densities for the transmembrane helices as well as loops and cofactors. Additional electron density maps for representative structural elements of PSII are shown in Extended Data Figs 5 , 6 , 7 and 8 . Overall, the protein fits into the electron densities for the dark and double-flash states and matches with the high resolution structural model. However, differences appear in regions of the Mn 4 CaO 5 cluster and the acceptor side, where the quinones and the non-haem iron are located. Determining the significance of these changes and their correlations is complicated due to the resolution limit of the data. Figure 2g–i shows detailed views of the loops at the acceptor side of PSII. The quinones are not visible at the current resolution of 5 Å. The maps indicate differences between the electron densities of the dark and double-flash states in the loop regions and also in the position of the non-haem iron that is coordinated by the loops. Figure 2: Overall structure and omit map electron density of photosystem II. a, Transmembrane helices and cofactors in photosystem II (stromal view density map). The proteins are named according to their genes and labelled with coloured letters. b , Side view of PSII at its longest axis along the membrane plane. c , Electron transport chain of PSII (P680 (blue), accessory chlorophylls (olive-green), pheophytins (yellow) and plastoquinones PQ A (white) and PQ B (cyan)); atoms of the OEC are depicted as spheres (Mn purple, Ca green, O light red). d – f , Omit map electron densities (view as in b ) at 1.5 σ for the dark state (S 1 ) (green) ( d ), double-flash state (putative S 3 state) (white) ( e ) and overlay of the two omit maps ( f ). 
g – i , Omit maps (1.5 σ) of the electron acceptor side of photosystem II for the dark (S 1 ) (green) ( g ), double-flash (putative S 3 state) (white) ( h ) and overlay of the two omit maps ( i ). Note that changes include a shift of the electron density of the non-haem iron. We now focus on the structure in the undamaged dark S 1 state of the metal cluster in the OEC and the potential light-induced structural changes that may occur during the S-state transition. Extended Data Fig. 8 shows the SA-omit map of the OEC in the dark S 1 state for the Mn cluster in PSII with the 1.9 Å X-ray structure in ref. 6 . Interestingly, the dangler Mn atom from the 1.9 Å structure is located outside the dark S 1 state electron density, a feature also visible in the electron density map of ref. 13 . These structural observations are consistent with spectroscopic results, which indicate that the distance between the dangler Mn and the Mn 3 O x Ca distorted cubane is indeed shorter in the dark S 1 state than in the 1.9 Å structure based on the synchrotron data, which might be influenced by X-ray induced reduction of the Mn ions in the metal cluster 18 , 19 . This shorter distance is in agreement with density functional theory (DFT) studies 4 , 18 , 20 based on the 1.9 Å structure of PSII 6 ; however, the current resolution limit of 5 Å does not allow a quantitative assessment. The mechanism of water splitting is intensely debated and many models have been proposed. The recent 1.9 Å X-ray structure 6 formed the basis for more detailed theoretical studies of the process, yet the proposed mechanisms differ 4 , 20 , 21 , 22 . Based on our time-resolved SFX (TR-SFX) structural data, we looked for differences between the electron-density maps of the OEC, derived from the dark and the double-flash data sets.
Figure 3a, b shows the SA-omit maps calculated for dark (blue) and double-flash state (yellow) and compared with the model of the metal cluster from the 1.9 Å structure 6 ( Fig. 3c ). The Mn 4 CaO 5 cluster was omitted from the model for the calculation of the SA-omit map, which is based on annealing at a virtual temperature of 5000 K to minimize phase bias. The SA-omit electron densities of the dark and double-flash states differ in the shape and position, as well as in the protein environment, of the Mn 4 CaO 5 cluster. The dark-state simulated-annealing (SA) omit electron density for the OEC protein environment matches the model of the 1.9 Å structure 6 , whereas the SA-omit map of the double-flash state differs significantly. Any interpretation of changes in the protein environment of the OEC remains highly speculative at a resolution of 5 Å and in the presence of heterogeneities in the S-state transitions. However, the SA-omit map of the double-flash state is suggestive of conformational changes that may indicate a movement of the CD loop (including the ligand D170) away from the cluster. If confirmed at higher resolution, this could explain mutagenesis studies that raised questions about the ligand role of D170 in the higher S-states 23 . Furthermore, in the double-flash state, the electron density of the metal cluster extends and shows a new connection to the AB loop at the site where D61 is located. Although D61 serves only as a second-sphere ligand in the 1.9 Å crystal structure 6 , mutagenesis studies indicated an important role in the water oxidation process, as the S 2 to S 3 transition is blocked in D61 mutants. Figure 3: The OEC simulated annealed omit maps. a , b , At 1.5 σ for dark and double-flash states of the Mn 4 CaO 5 cluster of PSII for the dark S 1 -state (blue) ( a ) and double-flash, putative S 3 state ( b ) with the 1.9 Å structural model (3ARC) from ref. 6 .
Mn in the distorted Mn 3 O x Ca cubane (Mn-1 to Mn-3) (light-pink), dangler manganese (Mn-4) (violet), calcium (green) and oxygen (red). c , 1.9 Å crystal structure of the Mn 4 CaO 5 cluster with ligands from ref. 6 (PDB accession 3ARC). d , Proposed model of the S 3 state based on DFT calculations by Isobe et al. 4 (reproduced with permission of The Royal Society of Chemistry). Larger deviations in the SA-omit map of the double-flash (putative S 3 state) include potential movement of the loop connecting transmembrane helices C/D (CD loop) with D170 and the AB loop (with D61), and an increase of the distance between the dangler Mn and the Mn 3 O x Ca cubane (violet arrow). The change in the electron density of the OEC is suggestive of an increase in the distance between the cubane and the dangler Mn and of a distortion of the cubane in the double-flash state. The observed electron densities ( Fig. 3a, b ) of the dark state and double-flash state are consistent with conformational changes predicted in a recent DFT study of the S 3 state in ref. 4 , shown in Fig. 3d . The increased distance between the cubane and dangler Mn could allow the second ‘substrate’ water molecule to bind between the Mn 3 O x Ca cubane and the dangler Mn in the S 2 to S 3 state transition. It was shown by extended X-ray absorption fine structure (EXAFS) spectroscopy that the Mn–Ca 2+ distances in the Mn 3 O x Ca cubane shrink in the S 3 state 24 . Although the Jahn-Teller effect extends the distances between metals in the lower S-states of the OEC (Mn oxidation states +II, +III and +IV), a shrinking of the Mn 3 O x Ca cubane is predicted in the S 3 state when all four Mn in the OEC have reached the oxidation state +IV.
A comparison of the electron density in the dark and the double-flash states may indeed suggest an overall decrease in the dimension of the Mn 3 O x Ca cubane in the double-flash state, which is in good agreement with the proposed S 3 state EXAFS and XES models 25 (more detail provided in the Supplementary Discussion). The consistency of spectroscopy and DFT studies with our observations may provide preliminary indications that a significant fraction of the OEC centres in our crystals have reached the S 3 state in the double-flash experiment. Our time-resolved SFX study captures the image of PSII after it has been excited by 2 saturating flashes and provides experimental evidence for structural changes occurring in the putative S 3 state of the OEC, accompanied by structural changes at the acceptor side of PSII. As the resolution is limited to 5 Å, the interpretation of the changes observed is preliminary. This work is a proof-of-principle that time-resolved SFX can unravel conformational changes at moderate resolution and may lay the foundation for solving high resolution structures of PSII at all stages of the water oxidation process in the future. To unlock the secrets of the water-splitting mechanism by TR-SFX at atomic detail, the resolution must be further improved and structures must be determined from all the S-states with multiple time delays. Methods Summary Here, we describe microsecond time resolution optical pump/X-ray probe SFX experiments on PSII nano/microcrystals, to study conformational changes in PSII in the transition from the dark to the double-flash state of PSII, where structures were determined at 5 and 5.5 Å resolution, respectively. Nanocrystal growth for SFX was performed using a free-interface diffusion technique (see Extended Data Fig. 1a–e ). The size and crystallinity of the samples were monitored by dynamic light scattering (DLS) and second order non-linear imaging of chiral crystals (SONICC) 26 . 
Time-resolved SFX data were collected from PSII crystals delivered to the X-ray FEL interaction region at room temperature in a liquid jet 27 . The crystals were advanced along the S-state cycle 28 from the dark S 1 to the putative S 3 state by two saturating laser flashes before the structure was probed by interaction with the X-ray pulses (see Fig. 1b, c and Methods for details). The structure factors and coordinates have been deposited in the Protein Data Bank; the accession codes for the S 1 and putative S 3 states are 4PBU and 4Q54, respectively. Online Methods Isolation and crystallization of photosystem II Photosystem II (PSII) was isolated from Thermosynechococcus elongatus as described in ref. 32 with the following modifications. The samples were frozen after the ion exchange chromatography step and batch precipitation/crystallization of PSII was performed four times in decreasing concentrations of PEG 2000; the last purification step was performed at LCLS directly before growth of microcrystals. Standard crystallization methods, such as vapour diffusion in hanging drops, have become the dominant techniques for the growth of protein crystals in the past decade. These methods have been optimized for very small protein volumes, which are not sufficient for serial femtosecond crystallography (SFX). Batch crystallization has been successfully used for the growth of large PSII crystals for standard crystallography 32 and the batch method can easily be scaled up to large protein volumes. Growth of very small PSII crystals (approximately 1 μm) using the batch method, at high yield, requires high super-saturation conditions to enhance nucleation. This leads to a very broad size distribution of crystals, visible crystal defects, and a coexistence of crystals and amorphous precipitate.
The growth of PSII microcrystals for data collection was performed using a free interface diffusion technique that had been adapted to a batch method for a higher yield. In this approach, nucleation is initiated at the interface between the high density precipitant solution and the lower density protein solution, containing PSII detergent micelles. The protein solution was prepared by dissolving the PSII precipitate from the fourth precipitation step (see above) in solubilisation buffer A-sol (100 mM PIPES pH 7.0, 5 mM CaCl 2 , 10 mM tocopherol, 0.03% β-dodecylmaltoside (DDM; from Glycon, >99.9% purity)), adjusting the concentration of chlorophyll (chl) to 0.5 mM. The crystals were grown in batch experiments in 15 ml Falcon tubes by adding precipitation buffer to a final concentration of 100 mM PIPES pH 7.0, 5 mM CaCl 2 , 10 mM tocopherol, and 10–17% PEG 2000 to the protein solution. The optimal PEG and protein concentration was experimentally determined for each protein batch separately, in small-scale experiments, before the remainder of the protein was crystallized on site at LCLS directly before data collection. All precipitation steps and the crystallization were carried out in darkness to avoid pre-illumination of the crystals. The PEG precipitant solution was added to the PSII solution at a rate of approximately 20 μl per second. The slow addition of the PEG precipitant, with a higher density than the protein solution, led to a two-phase system, where the precipitant solution gathered at the bottom of the tube with a small mixed zone in between the top protein layer and the bottom PEG layer. Once the crystals formed and reached a sufficient size, they sedimented into the precipitant solution and formed a pellet at the bottom of the tube. As the precipitant solution did not contain protein, the crystal growth stopped (see Extended Data Fig. 1a–e ).
To further ensure that crystal growth had been terminated, the supernatant was removed after 48 h and a stabilization buffer (100 mM PIPES pH 7.0, 5 mM CaCl 2 , 10 mM tocopherol, 20% PEG 2000) was added. This buffer also served as the running buffer for delivery of the crystals to the X-ray interaction region during the time-resolved serial femtosecond crystallography (TR-SFX) experiments. The supernatant was saved and later used once crystals reached sufficient size, as it continued to crystallize at a significantly slower rate, due to the decreased protein concentration. The crystallization progress was monitored closely by taking 1 μl samples at regular intervals to determine crystal size by dynamic light scattering (DLS) (see Extended Data Fig. 1d ). Crystals were collected and directly used for the SFX experiments after they reached a size of around 1 µm. Although DLS provides the size distribution of the particles, it cannot discriminate between amorphous and crystalline particles. The crystallinity of the samples was therefore monitored by SONICC, which detects nanocrystals as small as 100 nm 26 . Extended Data Fig. 1a–e show the crystallization method and results of crystal characterization experiments. All handling steps with the crystals were performed in dim green light to limit exposure. After growth and stabilization, crystals were stored in complete darkness. All steps thereafter were done in the dark. The plastoquinone derivative PQ decyl was not added to the crystals until the beamtime in June 2012 and therefore was not a part of the double-flash (that is, putative S 3 ) experiments. J. Bergkamp synthesized PQ decyl in the laboratories of A. L. Moore and T. A. Moore at Arizona State University. It contains the same head group as plastoquinone but the 15 units of the isoprene tail were replaced by an N -decyl chain to improve solubility. 
We have independently determined that addition of PQ decyl maintains full oxygen evolving activity of PSII under continuous illumination for several minutes. A plastoquinone molecule is located in the quinone binding pocket Q B in S 1 . After two laser excitation flashes, reaching the putative S 3 state, the natural plastoquinone (PQ) becomes doubly reduced to PQ 2− , which takes up two protons and leaves the binding site as plastoquinol PQH 2 . The empty binding pocket is then re-populated by PQ decyl, before the third laser flash induces the next charge separation event to reach the S 4 state (see Fig. 1a for the S-state scheme 31 that also features the reduction state of the quinone in each of the S-states). Characterization of microcrystals by SONICC and DLS All crystal samples were characterized via two methods, second order nonlinear imaging of chiral crystals (SONICC) 26 and dynamic light scattering (DLS). The SONICC and DLS experiments were performed using 24-well VDXm plates for data collection from 1 µl suspension of the crystals. The reservoir was filled with 500 µl of precipitation buffer to prevent evaporation of the 1 μl hanging drop containing the crystals. The crystals were monitored by the SONICC system, at 200 mW laser power for an exposure time of 1 s. In the SONICC technique, the crystals were excited by femtosecond infrared laser pulses at 1064 nm, with two infrared photons combined by second harmonic generation. The SONICC signal was detected at 532 nm. The dynamic light-scattering experiments were performed in 1 μl hanging drops. Our DLS instrument is equipped with an infrared laser at 785 nm to avoid excitation of the pigments in PSII during the measurements. Attempts to conduct DLS measurements on PSII samples with a red laser, as used in most commercial DLS instruments, failed due to the strong absorption of red light by the chlorophylls in PSII, which strongly diminished the signal. For each sample, 10 measurements were performed with 20 s of data collection per measurement.
The viscosity of the buffer solutions was determined experimentally by calibration with 140 nm polystyrene beads. The crystal size distribution was found to be around 1 μm (see Extended Data Fig. 1d ). EPR characterization of the S-state transition Electron paramagnetic resonance (EPR) has been used extensively to determine the progression of PSII through the S-state cycle 33 , 34 . Using this technique, quantification of the S 2 state is determined by the multiline signal (MLS), a signature of only this state of PSII. Protein solutions used for PSII crystallization were cycled through the S-states by multiple single-flash laser excitation (1–3 flashes) at room temperature. The samples were flash frozen in liquid nitrogen directly after laser excitation and the yield of the multiline signal was interrogated by EPR at low temperature. The EPR experiments were performed under conditions that were as similar as possible but not identical to the LCLS experiments (for example, the EPR data collection required freezing in glycerol, whereas SFX data are collected at room temperature without glycerol addition). As the S-state yield is an estimate, the double-flash state is indicated as ‘putative S 3 state’ here. Prior to illumination, glycerol was added to samples as a cryo-protectant, yielding a final concentration of ∼ 30% by volume. This resulted in a final PSII concentration of 1.8 mg chlorophyll per ml (2 mM). Dark adaptation was performed before the EPR experiments; the PSII samples were therefore initially predominantly in the S 1 state. We did not attempt to bring all the PSII into the S 1 state, using pre-illumination flashes in the presence of artificial electron acceptors followed by dark adaptation (as described in ref. 34 ), as the natural mobile plastoquinone (1 Q B per reaction centre) would leave the binding site and consequently be lost in the pre-illumination phase.
For flash illumination at room temperature (20 °C), a Continuum Surelite EX Nd:YAG laser was used with a second harmonic generator yielding 532 nm, 8 ns, 1 Hz, and ∼ 380 mJ (fluence of ∼ 1,000 mJ per cm 2 ) pulses. Low-temperature X-band EPR spectra of the flash-frozen samples were recorded using a Bruker EMX spectrometer equipped with an X-band CW microwave bridge. The sample temperature was maintained at 10 K by an Air Products LTR liquid helium cryostat during collection of the EPR spectra. Spectrometer conditions were as follows: microwave frequency, 9.48 GHz; field modulation amplitude, 25 G at 100 kHz; microwave power, 31 mW. Dark-adapted samples of both PSII solutions and crystal suspensions (frozen without illumination) contained a small percentage (typically 10%) of multiline signal. To determine the maximal possible yield of the MLS, the dark-adapted, frozen PSII samples were illuminated at 190 K (dry ice/ethanol bath) for 20 min. The S-state cycle stops at the S 2 state at low temperature (190 K) 33 , therefore all photoactive PSII reaction centres can be brought into the S 2 state by low temperature continuous illumination. The illumination of the frozen crystal suspensions was performed in 2-min intervals, with the maximum MLS intensity achieved within the first 2 min of continuous illumination. We observed that the presence of glycerol affects the intensity of the MLS. Solutions and crystals to which no glycerol was added exhibited lower MLS intensity after continuous illumination. Extended Data Fig. 1f shows the EPR spectra of PSII samples, which were excited by 1 or 2 laser flashes at room temperature, followed by flash freezing in the dark. In addition, the graph shows the control experiment in which dark-adapted frozen PSII was continuously illuminated at 190 K to achieve the maximal S 2 yield. A miss parameter α = 9.7% was obtained by fitting the MLS intensities as a function of the number of laser flashes (see Extended Data Fig. 1g ).
Samples exposed to three flashes were also included. The data evaluation indicates that with two flash illuminations at least 70% of the PSII reaction centres have reached the S 3 state under these conditions. The transition rates are comparable to results of EPR studies on spinach PSII by Styring et al. , who published transition rates of maximum 75% under conditions that were highly optimized for maximal yields of S-state transitions 34 . CXI instrument setup and sample delivery for TR-SFX data collection on PSII crystals in the double-flash state TR-SFX data were collected at the Coherent X-ray Imaging (CXI) instrument 14 at the Linac Coherent Light Source (LCLS) 15 at SLAC National Accelerator Laboratory (for reviews on TR-SFX see refs 11 , 12 ). The PSII crystals were delivered to the interaction region with the FEL beam as a suspension of crystals using the gas focusing liquid injector described in refs 27 , 35 , 36 . The injection process was improved by the invention of an anti-settling device 37 , which also was modified to permit temperature control of the sample. Stainless steel syringes containing the crystal suspension (pre-filtered through 10 µm stainless steel filters from IDEX) were mounted on a rotating holder, which was cooled with a Peltier element to 10 °C. This set-up maintained the crystals at their growth-temperature until their delivery to the FEL interaction region by the gas focusing jet. The glass capillary nozzle tips were polished to allow for visible laser excitation of the crystals in the nozzle tip. A black coating upstream of the nozzle tip prevented pre-excitation of the crystal suspension upstream of the optical laser interaction region. The gas focused liquid jet had a diameter of 4 µm at the intersection with the X-ray focal area of 2 µm 2 full width at half maximum (FWHM) using the CXI instrument. Data were collected at the X-ray photon energy of 6.0 keV (2.05 Å) with an X-ray pulse duration of approximately 50 femtoseconds. 
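As an illustration of how a miss parameter translates into S-state yields, the flash-advance picture can be sketched as below. This is a deliberately simplified model (each flash advances a centre by one S-state with probability 1 − α, all centres starting in S 1 , no double hits or other loss channels), not the authors' exact fitting procedure; the real analysis yields a somewhat lower S 3 fraction (≥70%).

```python
# Simplified flash-advance model for the S-state cycle.
# Assumptions (for illustration only): every flash advances a reaction
# centre by exactly one S-state with probability (1 - alpha), where
# alpha is the miss parameter; no double hits; all centres start in S1.

def advance(populations, alpha):
    """Propagate S-state populations [S0, S1, S2, S3] by one flash."""
    n = len(populations)
    new = [0.0] * n
    for i, p in enumerate(populations):
        new[i] += p * alpha                       # miss: centre stays put
        new[(i + 1) % n] += p * (1.0 - alpha)     # hit: advance one state
    return new

alpha = 0.097                  # miss parameter from the EPR fit
pop = [0.0, 1.0, 0.0, 0.0]     # dark-adapted: all centres in S1

for _ in range(2):             # two saturating flashes
    pop = advance(pop, alpha)

s3_fraction = pop[3]           # (1 - alpha)**2 ~= 0.815 in this model
print(f"S3 fraction after two flashes: {s3_fraction:.3f}")
```

In this idealized model the S 3 yield after two flashes is simply (1 − α)², an upper bound consistent with the ≥70% obtained from the full fit.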
The X-ray diffraction patterns were detected on a Cornell-SLAC Pixel Array Detector (CSPAD) 38 , 39 . The detector consisted of 64 panels of 194 × 185 pixels each, tiled to span approximately 1728 × 1728 pixels (approximately 19 cm along one side), with gaps between the tiles. Double laser excitation of photosystem II crystals PSII was excited by two subsequent optical pump-laser pulses from a diode-pumped, frequency-doubled Nd:YLF laser (Evolution-30, Coherent), emitting visible laser pulses at a wavelength of 527 nm. The laser was fibre-coupled from a table outside the experimental chamber, channelled into the chamber and onto the head of the liquid jet injector 27 , 35 . This wavelength provided a good compromise between transmission and absorption of light in the PSII crystals to ensure approximately homogeneous excitation throughout the crystals (the size of crystals was approximately 1 µm, as identified by DLS; see Extended Data Fig. 1d ). The optical double pulse was produced by an active Q-switch with ‘on times’ chosen such that the pulse energies of both laser pulses match. This resulted in pulse lengths of 90 ns and 150 ns for the first and second pulses, respectively; this was done to keep the total number of photons incident on the crystal the same for both pulses. The laser was focused to an area of approximately 400 µm in diameter with a flat top profile and aimed at the transparent tip of the nozzle, about 100 µm upstream from the X-ray interaction region. The laser beam diameter and aim point were chosen based on the desired pump-probe delays and calculations of sample flow profile and flow speed inside the capillary (average speed of 85 mm per second for the 50 μm inner diameter of the nozzle) and in the jet (12 m per second for the 4 μm jet). This allowed for illumination of the crystals at the tip of the nozzle and in the liquid jet. It also ensured that crystals probed by X-rays were first exposed to both optical laser pulses for the ‘pumped’ measurements (see Fig.
1b, c ). The molar extinction coefficient was determined from dissolved PSII crystals at 527 nm, which was then used to calculate the absorption of PSII crystals with approximately 1 μm path length. From these parameters, it was calculated that a minimum fluence of 2.3 mJ per cm 2 (or a pulse energy of 3 µJ for a 400 µm diameter spot) was required to excite every PSII complex in a crystal of 1–2 µm diameter. During the experiment, the laser pulse energy was monitored using a power meter placed at a 50:50 beam splitter on the laser table (50% of the energy going into the experiment). The energy of the laser pulse transmitted to the sample was calculated from the previously measured transmission of the entire fibre setup from the 50:50 beam splitter to the final lens in the injector (including fibre couplings and feed-through), using 20% as a conservative estimate for the light transmission through the optically transparent end of the nozzle to the sample, as that transmission could not be measured directly. The laser pulse energy was chosen to correspond to approximately 6 µJ per pulse at the sample, that is, three times what is required to optimally pump a 1–2 µm PSII crystal at 527 nm 34 . The desired timing of the optical laser pulses with respect to the X-ray pulses from LCLS 15 was achieved by using the LCLS/CXI event generator and event reader (EVG/EVR) system, which provided precise timing signals less than 1 µs before every X-ray pulse (or every other X-ray pulse), and a SRS DG645 delay generator to produce properly timed double trigger signals for the laser Q-switch. The time delay between the two optical laser pulses was set to 210 µs, corresponding to three times the time constant of 70 µs for the S 1 to S 2 transition 28 .
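The fluence-to-pulse-energy conversion quoted above is simple geometry; as a sanity check (not part of the original analysis), it can be reproduced in a few lines:

```python
import math

# Check of the excitation-energy estimate: a fluence of 2.3 mJ/cm^2
# over a 400 um diameter flat-top spot.
fluence_mJ_cm2 = 2.3
spot_diameter_cm = 400e-4                 # 400 um expressed in cm
area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2

pulse_energy_uJ = fluence_mJ_cm2 * area_cm2 * 1e3   # mJ -> uJ
print(f"required pulse energy: {pulse_energy_uJ:.1f} uJ")  # ~2.9 uJ, i.e. ~3 uJ
```

The result, about 2.9 µJ, matches the quoted ~3 µJ minimum for a 400 µm spot.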
The time delay between the second optical pulse and the X-ray pulse was 570 µs to allow the oxygen-evolving complex (OEC) to complete the S 2 to S 3 transition (3 times the time constant of 190 µs for the S 2 to S 3 transition) 28 . As the electron transfer between Q A to Q B takes place in the 200 to 400 μs time range 40 , (depending also on the organism and oxidation state of Q B ) the delay time of 780 μs between flash 1 and the X-ray pulse represents a reasonable minimal delay time aimed at following both the processes at the donor side in the OEC and the acceptor side at the Q B binding site. The uniform change in unit-cell dimension in the double-flash experiment (putative S 3 state) and its reversion in the triple flash experiment (putative transient S 4 state) provides an independent indication for significant and uniform progression of the OEC through the S-states. In order to monitor the delay between the pump beam and the probe beam, we separately measured the response of two photodiodes to the optical and X-ray excitation with an Acqiris digitizer. One photodiode was exposed to stray optical light on the laser table and the other to X-rays scattered from the sample jet at very high angle. This was necessary because of the disparity in the response of the diodes to each signal near the interaction region—that is, at the required optical and X-ray fluences needed to perform the experiment, the signals could not be discriminated when using a single photodiode. The delay between the signals introduced by the measurement of the optical light on the laser table was measured at the beginning of the experiment by equalizing the signals from each beam near the X-ray interaction point. That constant was used to calculate a calibrated delay time with sufficient precision for our experiment using the digitizer trace. 
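The "three time constants" rule used for both delays follows from simple exponential kinetics: after a delay t, the fraction of centres that have completed a transition with time constant τ is 1 − exp(−t/τ). A minimal check (illustrative only):

```python
import math

# Fraction of centres that have completed an exponential transition
# with time constant tau_us after a delay of delay_us.
def completed_fraction(delay_us, tau_us):
    return 1.0 - math.exp(-delay_us / tau_us)

print(completed_fraction(210, 70))   # S1 -> S2 delay of 3*tau, ~0.95
print(completed_fraction(570, 190))  # S2 -> S3 delay of 3*tau, ~0.95
```

Both chosen delays correspond to 3τ, so roughly 95% of the centres are expected to have completed the respective transition before the X-ray probe arrives.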
The LCLS timing signal triggered the preceding optical laser pulses for every other X-ray pulse, allowing us to acquire alternating diffraction images from ‘dark’ (ground state) and ‘double-flash’ PSII crystals in the putative S 3 state; that is, SFX data were collected at the 120 Hz repetition rate of the X-ray pulses whereas the frequency of the pump laser pulses was 60 Hz ( Fig. 1b, c ). This alternating approach minimizes systematic errors between the dark and illuminated data sets. With this setup, alternating ‘dark’ and doubly excited ‘illuminated’ images were collected at a rate of 3,600 dark and 3,600 illuminated images per minute. In addition to this alternating experimental scheme, dark-run data were collected while the 527 nm pump laser was switched off (see Extended Data Table 2b for data statistics of dark-only and alternate dark/light runs). Representative diffraction patterns of the S 1 and the double-flash data sets are shown in Extended Data Fig. 2a, b . Processing and evaluation of data with the Cheetah and CrystFEL programs A total of 5,528,071 raw diffraction frames were collected from photosystem II microcrystals in January 2012. The raw diffraction data (XTC format) were pre-processed by the Cheetah software package 16 , and then analysed in the software suite CrystFEL 17 . Examples of diffraction patterns of the PSII crystals are shown in Extended Data Fig. 2a for the dark (S 1 ) state and in Extended Data Fig. 2b for the double-flash (putative S 3 ) state. The first step of pre-processing involved dark current subtraction from each diffraction frame and masking of dead, hot, cold and saturated pixels. It also included masking of the low-resolution scattering from the liquid jet and the detector panel edges. A local background correction step was applied to the raw diffraction frames during their pre-processing in Cheetah 16 .
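The 120 Hz X-ray / 60 Hz pump interleaving can be sketched as a simple parity rule over consecutive pulses. Note that in the actual experiment the light/dark assignment of each frame was verified from photodiode and camera signals; the parity-based tagging below is an illustration of the scheme only.

```python
# Illustration of the alternating 120 Hz X-ray / 60 Hz pump scheme:
# every other X-ray pulse probes a laser-excited crystal.

XRAY_RATE_HZ = 120

def tag_frames(n_frames):
    """Tag consecutive X-ray pulses as 'illuminated' or 'dark'."""
    return ["illuminated" if i % 2 == 0 else "dark" for i in range(n_frames)]

frames_per_minute = XRAY_RATE_HZ * 60      # 7,200 frames per minute
tags = tag_frames(frames_per_minute)
n_dark = tags.count("dark")
n_light = tags.count("illuminated")
print(n_dark, n_light)                     # 3600 3600
```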
Each diffraction frame was analysed and identified as a crystal ‘hit’ only if it contained 25 or more peaks with intensities of at least 400 analogue-to-digital units. The locally background-corrected patterns collected in the alternating mode were sorted into two data sets for the dark and double-pumped hits based on signals from a photodiode and video camera in the experimental chamber. Comparison between the dark data sets from the alternating light/dark runs and complete dark runs showed no difference in unit cell constants or indexing rates, therefore the dark data sets were merged. The final data sets contained 71,628 ‘hits’ from the dark state data set and 63,363 ‘hits’ from double-flash data sets. The triple-flash data sets, leading to the putative transient S 4 state of PSII (collected during a separate beamtime in June 2012), were collected in alternate runs with the 120 Hz frequency of the LCLS X-ray pulses and 60 Hz laser excitation frequency. The same laser excitation scheme as described before was used, with a third 527 nm laser pulse triggered 570 µs after the second excitation pulse to reach the putative S 4 state. This third pulse was triggered by a second 527 nm laser installed in the CXI hutch, which excited the jet through a separate hole in the shroud. The delay time between the third laser flash and the FEL X-ray pulse was 250 µs. The data sets from the January 2012 beamtime are designated as dark (A) and double-flash (A) and the data sets from the June 2012 beamtime are designated as dark (B) and triple-flash (B) (see Extended Data Table 2a ). The dark (B) and triple-flash (B) data sets contained 33,373 and 32,190 hits, respectively.
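The hit criterion described above (at least 25 peaks of at least 400 ADU per frame) can be expressed as a minimal sketch; the real peak finding in Cheetah operates on pixel data, whereas this illustration assumes a per-frame list of peak intensities is already available:

```python
# Minimal sketch of the crystal-hit criterion used in pre-processing:
# a frame is a 'hit' if it contains >= 25 peaks with intensities of
# >= 400 analogue-to-digital units (ADU).

MIN_PEAKS = 25
MIN_ADU = 400

def is_hit(peak_intensities):
    """peak_intensities: intensities (ADU) of peaks found in one frame."""
    strong = [p for p in peak_intensities if p >= MIN_ADU]
    return len(strong) >= MIN_PEAKS

print(is_hit([500] * 30))               # True: 30 strong peaks
print(is_hit([500] * 10 + [100] * 50))  # False: only 10 strong peaks
```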
The hits for all data sets were passed as separate sets to the CrystFEL software suite 17 for auto-indexing with MOSFLM, using the orthorhombic unit cell dimensions of PSII from Thermosynechococcus elongatus (PDB code 1FE1) 5 within tolerance limits of 6%, 5% and 5% for the reciprocal axes of a , b and c , respectively, for the S 1 state. Similarly, tolerance limits of 8%, 5% and 5% were used for the reciprocal axes of a , b and c , respectively, for the double-flash and triple-flash states. After indexing, the 4 data sets were handled separately and the ‘indexing rates’ (fraction of hits that could be successfully indexed) were 48% for dark (A), 29% for double-flash (A), 35% for dark (B), and 39% for triple-flash (B). Extended Data Table 2a, b shows the indexing statistics as well as the unit cell constants for all 4 data sets. The unit cell is orthorhombic and shows significant changes in the unit cell dimensions between the dark S 1 and double-flash (A) data sets (see Extended Data Table 2 and Extended Data Fig. 3 ). The most pronounced change is in the dimension of the a axis, which increases by 3.3 Å. This change in unit cell constants is accompanied by a slight decrease in diffraction quality (5 versus 5.5 Å resolution) (see Extended Data Tables 1 and 2 ) and a significant lowering of indexing rates. This change in the unit cell dimension is fully reversed to the unit cell constants observed in the dark S 1 state when PSII crystals are excited by three laser flashes, eventually reaching the putative transient S 4 state. Furthermore, the indexing rates for the dark (B) and triple-flash (B) data sets are comparable, with 35% and 39%, respectively. Multiplicity in the measurements is very important due to the partial nature of all reflections, the variation of the flux in each FEL pulse and the fact that each diffraction pattern is collected from a different crystal.
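The per-axis tolerance check applied during indexing can be sketched as below. This is a simplification for illustration: CrystFEL applies the tolerances to the reciprocal axes, and the cell values used here are hypothetical, not the published PSII constants.

```python
# Simplified sketch of a unit-cell tolerance check during indexing:
# each axis of the indexed cell must match the reference cell within a
# per-axis fractional tolerance.  (CrystFEL compares reciprocal axes;
# this direct-space version and the cell values are illustrative only.)

def within_tolerance(cell, reference, tolerances):
    return all(
        abs(c - r) / r <= t
        for c, r, t in zip(cell, reference, tolerances)
    )

reference = (133.1, 226.8, 306.9)   # hypothetical a, b, c in Angstrom
tol_dark = (0.06, 0.05, 0.05)       # 6%, 5%, 5% as for the S1 data

print(within_tolerance((136.4, 227.0, 307.1), reference, tol_dark))  # True
print(within_tolerance((150.0, 227.0, 307.1), reference, tol_dark))  # False
```

Note that the wider 8% tolerance on the a axis for the flashed states accommodates the observed 3.3 Å expansion of that axis.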
The triple-flash data set, with 12,500 diffraction patterns, is sufficient to accurately determine the unit cell constants. However, this is borderline for the determination of accurate structure factors and therefore further data evaluation has been limited to the dark (S 1 ) and double-flash data sets. The dark (S 1 ) and double-flash (putative S 3 ) data sets that were used for structure factor determination consist of 34,554 and 18,772 indexed patterns, respectively. Our data sets were merged separately in three dimensions and the structure factors were extracted separately from the dark and double-flash data sets using the Monte Carlo method 41 , which integrates the partial reflections of the snapshots from randomly oriented crystals of varying size and shape (see Supplementary Videos 1 and 2 ). Average multiplicities are 684 and 373 over all resolution shells from 19.20 Å to 4.03 Å for the dark (S 1 ) and double-flash states, respectively (see Extended Data Tables 1a, b ). A comparison of the data statistics of our work with that of Kern et al. (ref. 13 ) is shown in Extended Data Table 2c . Our data sets evaluated with CrystFEL 17 show significantly higher multiplicity of the data and better correlation coefficients (CC 1/2 ) 29 , which are indicative of the quality of the merged reflections, when compared to the data of Kern et al. (ref. 13 ), evaluated with the software suite cctbx.xfel 30 . The internal consistency of the SFX data is expressed as R split 17 . For the calculation of R split , the images of a data set are split into two random halves, and the structure factors are calculated separately for each half. The difference between the structure factor amplitudes of each (hkl) reflection in the two half data sets is used to estimate the convergence of the full data set as given by R split . An R split of 0.22 means an estimated 22% error of the structure factors of the full data set relative to the ‘true’ data set.
The two half data sets would then agree to within 0.22 * sqrt(2). R split was calculated for the dark and double-pumped data sets separately. As expected, R split decreases with an increasing number of indexed patterns as the data set reaches higher multiplicity and completeness (see Extended Data Fig. 4 ). The average R split values over all resolution shells were 0.07 for the dark and 0.09 for the double-pumped data sets. Structure determination The data for the dark (S 1 ) and double-flash (that is, putative S 3 ) states of PSII were handled as two completely separate data sets throughout the data analysis process. After processing the raw serial femtosecond X-ray crystallographic data, the final structure factors from the dark and double-pumped data sets were passed to the CCP4 software package 42 . Molecular replacement Molecular replacement 43 was carried out with the PhaserMR program (version 2.5.3), which is part of the CCP4 suite 42 , using the PSII X-ray structure at 1.9 Å resolution of Umena and co-workers (PDB ID: 3ARC) 6 as the search model; the model was modified by removing waters, detergents, lipids and alternative conformers of the amino acids. We used monomer 1 of the PSII dimer (in the 3ARC model, the monomer 1 subunits are labelled with capital letters and small letters are used for monomer 2) as the search model to solve the phase problem. After monomer 1 was found, we repeated the search for monomer 2. Refinement Both structures were refined with phenix.refine 44 using rigid body refinement, in which each C-alpha chain of each protein subunit was treated as a rigid body. The cofactors were also treated as rigid bodies. During the rigid body refinement, we considered only translational, not rotational, refinement of the rigid entities; consequently, the RMS bond angle was not refined.
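The half-data-set consistency check can be sketched numerically. This is a minimal illustration of the R split formula with hypothetical amplitude lists; the 1/sqrt(2) factor scales the disagreement between the two random halves down to an error estimate for the full merged data set.

```python
import math

def r_split(f_half1, f_half2):
    """Half-dataset R_split: summed amplitude differences over the mean
    amplitudes, scaled by 1/sqrt(2) so that the value estimates the
    error of the merged full data set rather than of a half data set.

    f_half1, f_half2: structure-factor amplitudes for the same (hkl)
    list, computed from the two random halves of the data.
    """
    num = sum(abs(a - b) for a, b in zip(f_half1, f_half2))
    den = 0.5 * sum(a + b for a, b in zip(f_half1, f_half2))
    return num / den / math.sqrt(2.0)
```

Identical halves give 0; the statistic falls as multiplicity grows, matching the trend described for Extended Data Fig. 4.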
We used the original B factors from the 3ARC model because B-factor refinement is not useful at the given resolution of 5.0 Å. After three refinement cycles, the R -factors for the dark state at 5.0 Å were R work = 0.260 and R free = 0.262. The R -factors for the double-flash state at 5.5 Å were R work = 0.280 and R free = 0.290. Refinement statistics are shown in Table 1 . All figures displaying structures were made using PyMOL (DeLano Scientific; ). Calculation of electron density maps The electron density maps for the dark and double-flash data sets were generated using the FFT program from the CCP4 suite. The omit maps, as defined by Bhat, were calculated using ‘Omit’ (CCP4 suite) 45 , 46 , with the OEC removed from the MR solution. These (2 F o – F c ) omit maps were calculated from the experimental data and the MR model excluding the OEC, before any refinement was applied, in order to avoid model bias. For the superposition of the two omit maps from the dark and double-flash states, the following steps were carried out. First, using the omit maps as inputs, new sets of coordinate files (PDB files) were generated from the MR solutions of the dark and double-flash states separately, so that each of the two omit maps fits its model, using the Molrep program in the CCP4 suite. Second, the modified coordinate files for the dark and double-flash states (the outputs of the Molrep program) were opened, and the superimposed coordinates were saved, in the Coot program 47 . The double-flash state coordinate file was treated as the moving object and the dark state coordinate file as the fixed object. As a result, Coot provided the Euler angles and translational coordinates ( x , y , z values) for this superposition. Third, using these Euler angles and translational coordinates as a rotational operator with opposite sign, the double-flash state omit map was rotated using the MAPMASK program in the CCP4 suite.
Because the unit cell constants of the dark and double-flash electron density maps differ, they are in different reference frames, which we had to take into account for the overlay process. The rotated double-flash state omit map (the output of the MAPMASK program) was moved over the superimposed coordinate file. The same procedure was applied to the dark state omit map, aligning it with the superimposed double-flash coordinate file using the MAPMASK program in the CCP4 suite. Examples of the electron density maps are shown in Extended Data Figs 5 , 6 , 7 . Calculation of simulated annealed omit maps The solutions from the molecular replacement for the PSII dimer for the dark and double-flash states were used for the calculation of the simulated annealed (SA) omit maps 48 . For each of the PSII coordinate files of the MR solutions, the OEC was removed and the resulting PSII coordinate file was then used for calculating the SA-omit map with a starting temperature of 5000 K using the ‘AutoBuild create omit map’ program from the Phenix suite (version 1301 dev) 49 . The SA omit maps of the OEC in the dark state (S 1 ) and the double-flash (putative S 3 ) state are shown in Fig. 3a, b ; see also Extended Data Fig. 8 for the dark state SA omit map from a different viewpoint. Unit cell increase The conformational change and associated unit cell changes may be caused by dissociation of the mobile plastoquinone PQ B from the Q B binding pocket after double-flash excitation, when PSII may reach the S 3 state (see Fig. 1a ). The structural changes leading to the difference in unit cell constants are probably most significant at the stromal side of PSII, where the quinone binding sites are located. To avoid structural heterogeneity at the acceptor side by partial re-occupation of the Q B binding site, no quinone was added to the crystals for the double-pump experiments.
We may thereby have ‘trapped’ PSII in the double-flash experiment in the putative S 3 state conformation with an empty Q B binding pocket. In order to transition from S 3 to S 4 , an electron acceptor must replenish the empty Q B binding site. Therefore, the plastoquinone derivative PQ decyl, which diffuses into the Q B pocket, was added to the crystals used for the triple-flash excitation data set. With the Q B binding site re-occupied, the change in unit cell constants is reversed. Structural changes of the OEC In light of new results on theoretical modelling of the OEC 3 , 4 , 18 , 21 , 22 , 50 , we further examined the SA-omit maps in the dark and double-flash states for differences in the metal cluster that can be detected even at low resolution and discuss them in the context of recent computational and spectroscopic studies on the metal cluster. The changes in the density of the Mn 4 CaO 5 metal cluster are suggestive of an increase of the distance between the cubane and the ‘dangler’ Mn and a distortion of the cubane in the S 3 state. The observed electron densities are compared in Fig. 3a and 3b with the recent theoretical studies of Isobe and co-workers 4 , shown in Fig. 3d , who predicted a ‘breakage’ of the dangler Mn from the cubane cluster in the S 3 state. Additionally, EXAFS data constrain the extent of the movement of the dangler Mn relative to the cubane 51 , 52 . The increase in distance could allow for the binding of the second substrate water molecule between the dangler Mn and the Mn 3 CaO x cubane. The presence of a substrate water molecule between the dangler Mn and the distorted cubane in the higher S-states has also been predicted to be essential for the catalytic mechanism in a recent DFT model of the full catalytic S-state cycle, including modelling of the substrate water exchange 21 , 53 .
In addition to this elongation, the overall dimensions of the Mn 4 CaO 5 cluster appear to condense in the double-flash data set, which may represent the putative S 3 state. This may include shrinking of the distance between the Ca 2+ and the three Mn in the distorted cubane. EXAFS studies on PSII in which the Ca was substituted with Sr showed significant changes in Mn–Mn or Mn–Sr distances in the S 3 state 24 , which were interpreted to indicate that the distance between Mn and Ca shrinks in the S 3 state. Our experimental findings suggest a shrinking of the Mn 4 CaO 5 cluster in the double-flash state, which supports the hypothesis of a condensation of the Mn 3 O x Ca cubane part of the Mn 4 CaO 5 cluster in S 3 (ref. 4 ). Models of Mn–oxygen cubane compounds show an increased distance between the Mn and O atoms in the cubane at lower oxidation states (+2 and +3) because of the Jahn–Teller (JT) effect 22 , 54 . Distances derived from recently published model Mn–O and Mn 3 O x Ca cubane structures 55 indicate that Mn–O distances depend on the oxidation states of the Mn ions: the average Mn +2 –O distance is 2.2 Å, the average Mn +3 –O distance is 2.0 Å and the average Mn +4 –O distance is 1.8 Å. Two models have been proposed on the basis of X-ray absorption and emission spectroscopy: one describes the S 3 state as Mn (+3 +4 +4 +4) and the other proposes Mn (+4 +4 +4 +4) 25 , 56 . In the model in which all Mn ions have reached the Mn +4 oxidation state, a significant shrinking of the dimensions of the cluster is expected owing to the lack of JT distortion, with the average Mn–O distance being reduced to 1.8 Å (ref. 54 ). The shrinking of the overall dimensions of the metal cluster, supported by our maps of the double-flash state, appears to be in agreement with the studies on model compounds. This indicates that the JT distortion diminishes in the putative S 3 state during progression of the S-state cycle as all Mn reach their +4 oxidation states 55 .
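As a purely illustrative back-of-the-envelope calculation (not part of the structural analysis), the model-compound Mn–O distances quoted above imply different average cluster dimensions for the two proposed S 3 oxidation-state assignments; the S 1 assignment used for comparison is the commonly cited Mn(+3 +4 +4 +3).

```python
# Average Mn-O distances (in angstroms) by Mn oxidation state, taken
# from the model compounds cited in the text (ref. 55).
MN_O_DISTANCE = {2: 2.2, 3: 2.0, 4: 1.8}

def mean_mn_o_distance(oxidation_states):
    """Mean Mn-O distance for a given assignment of the four Mn ions."""
    return sum(MN_O_DISTANCE[s] for s in oxidation_states) / len(oxidation_states)

s1_like  = mean_mn_o_distance([3, 4, 4, 3])  # commonly assigned S1 state
s3_mixed = mean_mn_o_distance([3, 4, 4, 4])  # S3 proposal: Mn(+3 +4 +4 +4)
s3_all4  = mean_mn_o_distance([4, 4, 4, 4])  # S3 proposal: Mn(+4 +4 +4 +4), no JT distortion
```

The all-Mn(+4) assignment gives the shortest average Mn–O distance, consistent with the contraction suggested by the double-flash maps.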
The SA-omit maps of the dark (S 1 ) and the double-flash (putative S 3 ) states may also be indicative of changes in the protein environment of the Mn 4 CaO 5 cluster. Although the electron density map in the dark S 1 state overall follows the protein backbone of the 1.9 Å structure, larger perturbations of the protein environment of the cluster are visible in the double-flash state. The double-flash state electron density map may suggest a movement of the loop that connects the transmembrane helices C and D (the CD loop) at the lumenal side, which includes D170, away from the metal cluster, and a movement of the AB loop (connecting the transmembrane helices A and B) into closer vicinity to the cluster, which may allow D61 to become part of the ligand sphere of the metal cluster. Although this interpretation of changes in the protein environment of the cluster is highly speculative at the given resolution, it could explain the results of mutagenesis studies on PSII. Whereas the D170A mutation (D170 coordinates the dangler Mn and the Ca in the 1.9 Å structure of PSII) has no strong effect on the oxygen evolution function 23 , 57 , less than 15% of the oxygen evolution function remains in the D61A mutant 58 , 59 . This mutagenesis result was difficult to rationalize because D61 is found only in the second ligand sphere of the OEC in the 1.9 Å structure 6 . However, our SA-omit electron density map of the metal cluster in the double-flash state shows a connection to the protein electron density in close vicinity to D61 (see Fig. 3b ). This finding provides a first indication that D61 may serve as a ligand to the dangler manganese in the higher S-states.
Although details of the conformational changes cannot be unravelled at the current resolution of 5 Å, the comparison of the dark and double-flash state SA omit maps provides an indication that the protein ligand sphere of the Mn 4 CaO 5 cluster may undergo significant changes when the OEC reaches the double-flash (putative S 3 ) state. Accession codes Primary accessions Protein Data Bank 4PBU 4Q54 Data deposits The structure factors and coordinates have been deposited in the Protein Data Bank; the accession codes for the S 1 and putative S 3 states are 4PBU and 4Q54 , respectively. Change history 10 September 2014 Minor changes were made to Fig. 3c labelling.
An international team, led by Arizona State University scientists, has published today in Nature a groundbreaking study that shows the first snapshots of photosynthesis in action as it splits water into protons, electrons and oxygen, the process that maintains Earth's oxygen atmosphere. "This study is the first step towards our ultimate goal of unraveling the secrets of water splitting and obtaining molecular movies of biomolecules," said Petra Fromme, professor of chemistry and biochemistry at ASU. Fromme is the senior author and leader of the international team, which reported their work in "Serial time-resolved crystallography of photosystem II using a femtosecond X-ray laser," in the July 9 on-line issue of Nature. Photosynthesis is one of the fundamental processes of life on Earth. The early Earth's atmosphere contained no oxygen; it was converted to the oxygen-rich atmosphere we have today 2.5 billion years ago by the "invention" of the water-splitting process in Photosystem II (PSII). All higher life on Earth depends on this process for its energy needs, and PSII produces the oxygen we breathe, which ultimately keeps us alive. Revealing the mechanism of this water-splitting process is essential for the development of artificial systems that mimic and surpass the efficiency of natural systems. The development of an "artificial leaf" is one of the major goals of the ASU Center for Bio-Inspired Solar Fuel Production, which was the main supporter of this study. "A crucial problem facing our Center for Bio-Inspired Fuel Production (Bisfuel) at ASU and similar research groups around the world is discovering an efficient, inexpensive catalyst for oxidizing water to oxygen gas, hydrogen ions and electrons," said ASU Regents' Professor and Center Director Devens Gust. "Photosynthetic organisms already know how to do this, and we need to know the details of how photosynthesis carries out the process using abundant manganese and calcium. 
"The research by Fromme and coworkers gives us, for the very first time, a look at how the catalyst changes its structure while it is working," Gust added. "Once the mechanism of photosynthetic water oxidation is understood, chemists can begin to design artificial photosynthetic catalysts that will allow them to produce useful fuels using sunlight." In photosynthesis, oxygen is produced at a special metal site containing four manganese atoms and one calcium atom connected together as a metal cluster. This oxygen-evolving cluster is bound to the protein PSII, which catalyzes the light-driven process of water splitting. It requires four light flashes to extract one molecule of oxygen from two water molecules bound to the metal cluster. Fromme states that there are two major drawbacks to obtaining structural and dynamical information on this process by traditional X-ray crystallography. First, the pictures one can obtain with standard structural determination methods are static. Second, the quality of the structural information is adversely affected by X-ray damage. "The trick is to use the world's most powerful X-ray laser, named LCLS, located at the Department of Energy's SLAC National Accelerator Laboratory," said Fromme. "Extremely fast femtosecond (10⁻¹⁵ second) laser pulses record snapshots of the PSII crystals before they explode in the X-ray beam, a principle called 'diffraction before destruction.'" In this way, snapshots of the process of water splitting are obtained damage-free. The ultimate goal of the work is to record molecular movies of water splitting. The team performed the time-resolved femtosecond crystallography experiments on Photosystem II nanocrystals, which are so small that you can hardly see them even under a microscope. The crystals are hit with two green laser flashes before the structural changes are elucidated by the femtosecond X-ray pulses. 
The researchers discovered large structural changes of the protein and the metal cluster that catalyzes the reaction. The cluster significantly elongates, thereby making room for a water molecule to move in. "This is a major step toward the goal of making a movie of the molecular machine responsible for photosynthesis, the process by which plants make the oxygen we breathe, from sunlight and water," explained John Spence, ASU Regents' Professor of physics, team member and scientific leader of the National Science Foundation funded BioXFEL Science and Technology Center, which develops methods for biology with free electron lasers. ASU recently made a large commitment to the groundbreaking work of the femtosecond crystallography team by planning to establish a new Center for Applied Structural Discovery at the Biodesign Institute at ASU. The center will be led by Petra Fromme. Student role in research An interdisciplinary team of eight ASU faculty members from the Department of Chemistry and Biochemistry (Petra Fromme, Alexandra Ros, Tom Moore and Anna Moore) and the Department of Physics (John Spence, Uwe Weierstall, Kevin Schmidt and Bruce Doak) worked together with national and international collaborators on this project. The results were made possible by the excellent work of current ASU graduate students Christopher Kupitz, Shibom Basu, Daniel James, Dingjie Wang, Chelsie Conrad, Shatabdi Roy Chowdhury, Jay-How Yang and ASU doctoral graduates and post-docs Kimberley Rendek, Mark Hunter, Jesse Bergkamp, Tzu-Chiao Chao and Richard Kirian. Two undergraduate students Danielle Cobb and Brenda Reeder supported the team and gained extensive research experience by working hand in hand with graduate students, researchers and faculty at the free electron laser at Stanford. 
Four ASU senior scientists and postdoctoral researchers (Ingo Grotjohann, Nadia Zatsepin, Haiguang Liu and Raimund Fromme) supported the faculty in the design, planning and execution of the experiments, and were instrumental in the evaluation of the data. The first authorship of the paper is jointly held by the ASU graduate students Christopher Kupitz, whose dissertation is based on the development of new techniques for the growth and biophysical characterization of nanocrystals, and Shibom Basu, who devoted three years of his doctoral work to the development of the data evaluation methods. "It is so exciting to be a part of this groundbreaking research and to have the opportunity to participate in this incredible international collaboration," said Kupitz, who will graduate this summer with a Ph.D. in biochemistry. "I joined the project because it fascinates me to work at the LCLS accelerator on this important biological project." "The most exciting aspect of the work on Photosystem II is the prospect of making molecular movies to witness the water splitting process through time-resolved crystallography," added Basu. National and international collaborators on the project include the team of Henry Chapman at DESY in Hamburg, Germany, who with the ASU team and researchers at the MPI in Heidelberg pioneered the new method of serial femtosecond crystallography. Other collaborators included a team led by Matthias Frank, an expert on laser spectroscopy and time-resolved studies with FELs at Lawrence Livermore National Laboratory, and the team of Yulia Pushkar at Purdue University, who supported the work with characterization of the crystals by electron paramagnetic resonance. "We're tantalizingly close," said Chapman of the Center for Free-Electron Laser Science at DESY and a pioneer in X-ray free-electron laser studies of crystallized proteins. "I think this shows that we really are on the right track and it will work."
10.1038/nature13453
Biology
Embracing bioinformatics in gene banks
Martin Mascher et al. Genebank genomics bridges the gap between the conservation of crop diversity and plant breeding, Nature Genetics (2019). DOI: 10.1038/s41588-019-0443-6 Journal information: Nature Genetics
http://dx.doi.org/10.1038/s41588-019-0443-6
https://phys.org/news/2019-07-embracing-bioinformatics-gene-banks.html
Abstract Genebanks have the long-term mission of preserving plant genetic resources as an agricultural legacy for future crop improvement. Operating procedures for seed storage and plant propagation have been in place for decades, but there is a lack of effective means for the discovery and transfer of beneficial alleles from landraces and wild relatives into modern varieties. Here, we review the prospects of using molecular passport data derived from genomic sequence information as a universal monitoring tool at the single-plant level within and between genebanks. Together with recent advances in breeding methodologies, the transformation of genebanks into bio-digital resource centers will facilitate the selection of useful genetic variation and its use in breeding programs, thus providing easy access to past crop diversity. We propose linking catalogs of natural genetic variation and enquiries into biological mechanisms of plant performance as a long-term joint research goal of genebanks, plant geneticists and breeders. Main The establishment and maintenance of large ex situ collections of plant genetic resources (PGRs) were motivated by the impending loss of genetic diversity in crop plants when genetically uniform cultivars developed by systematic breeding began to replace traditional landraces worldwide 1 . Seeds stored in genebanks can be considered historic documents that provide information about the history of agriculture. Recent reports 2 , 3 on the joint analysis of DNA sequences of archeobotanical remains and extant diversity panels hosted in genebanks have highlighted the potential of, and challenges facing, plant archaeogenetics. However, genebanks were never meant to serve as purely conservational archives of crop diversity but had been instated with the forward-looking goal of preserving the ‘evolutionary potential’ of crops 4 . 
Long before environmentalism dominated public discourse, the genetic diversity represented by locally adapted landraces was considered vital for safeguarding future crop productivity. Despite the express goal of exploiting past diversity for crop improvement, the deployment of alleles originating from traditional landraces in elite breeding programs has seen only a few success stories to date. Notably, the dwarfing genes of the Green Revolution were introgressed into wheat and rice breeding lines from East Asian landraces 5 . Another example is the mlo alleles, which are prevalent in Ethiopian barley landraces 6 and confer broad-spectrum resistance to powdery mildew. In 1967, Krull and Borlaug stated that “the problem at present is less a lack of genetic variation but rather of efficiency in identifying and incorporating it” 7 . Five decades later, the consensus among genebank managers, plant geneticists and breeders is still that there is an urgent need for better ways to systematically evaluate and then realize the evolutionary potential of large seed collections locked away in cold rooms. In this Perspective, we aim to highlight how genomics-driven approaches to genebank management and pre-breeding can facilitate the characterization and utilization of PGRs. We provide a vision for the transformation of germplasm collections into bio-digital resource centers. Genebank genomics: molecular passport data for every seed A total of ~7.4 million accessions are stored in more than 1,750 genebanks worldwide 8 . Major challenges are (i) tracking the identity of accessions, (ii) avoiding unnecessary duplications within and between genebanks and (iii) maintaining the genetic integrity of accessions. The first two items are related to the practicalities of managing tens of thousands of seed lots, whereas the last point refers to the primary conceptual drawback of ex situ conservation: differential survival, drift and genetic erosion in storage and regeneration 7 . 
The traditional means of defining the identities of genebank material are passport records describing the taxonomy and provenance of accessions. Moreover, during successive rounds of seed multiplication, curators score highly heritable phenotypic characters as proxies for the underlying genetic makeup to monitor the authenticity of accessions. Recent studies in maize 9 and barley 10 have shown that affordable high-throughput genotyping provides ample amounts of single-nucleotide polymorphisms as defining characteristics of an accession. Clusters in analyses of population structure, such as principal component analysis 11 and model-based ancestry estimation 12 , differentiate accessions according to domestication status and geographic origin, and recapitulate distinctions in morphological and life-history traits that define gene pools in breeding and give rise to partial reproductive isolation. Thus, dense genotypic information can serve as molecular passport data to complement, corroborate and correct traditional passport records (Fig. 1 and Box 1 ). Although the implementation of this concept has been attempted for at least two decades 13 , 14 , the enormous increases in throughput afforded by high-throughput sequencing have only recently enabled dense genome-wide genotyping at the scale of tens of thousands of samples, that is, entire genebank collections. Benefiting from the availability of reference-genome assemblies for almost all major crop species, sequence-based genotyping overcomes the reproducibility issues of earlier marker platforms such as random amplification of polymorphic DNA or amplified fragment length polymorphism. Fig. 1: Genebank genomics as a tool for collection management. Dense genotypic information for all accessions held in genebanks will facilitate access to germplasm and guide conservation decisions. Drift during multiplication cycles may be monitored through repeated genotyping. 
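A minimal sketch of the population-structure analysis described above: principal components of a mean-centered allele-dosage matrix. The genotype matrix here is hypothetical, and real pipelines add imputation of missing calls, marker filtering and variance scaling before the decomposition.

```python
import numpy as np

def genotype_pca(dosages, n_components=2):
    """PCA of an accessions x markers matrix of allele dosages (0/1/2).

    Each marker is mean-centered and the leading principal component
    scores are returned; in real collections these axes typically
    separate accessions by geographic origin and domestication status.
    """
    g = np.asarray(dosages, dtype=float)
    g -= g.mean(axis=0)                       # center each marker column
    u, s, _ = np.linalg.svd(g, full_matrices=False)
    return u[:, :n_components] * s[:n_components]
```

Accessions sharing an origin land close together in the resulting score space, which is what makes the scores usable as molecular passport data.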
Comparison of marker data between genebanks may complement traditional passport records and highlight duplicates and coverage gaps. Precision collections are panels of homozygous lines for all or a core set of the accessions of an inbred crop. Genetic data of a precision collection constitute a permanent resource underpinning genotype-to-phenotype mapping. Full size image In the context of genebank management, duplication refers to the inadvertent maintenance of two or more accessions tracing back to the same seed lot. Duplication enhances the workload of genebank managers and can misguide users, who, for example, might pick 1,000 barley accessions at random from the holdings of a major genebank and find that 20% of them are duplicates 15 . In contrast, safety backups are properly recorded in databases and, if entire collections are backed up, as in the case of the Svalbard Global seed vault, enable their recovery in case of catastrophic failure 16 . Finding duplicated accessions with genetic-marker data is beset with conceptual pitfalls arising from intra-accession diversity, which can be commensurate with inter-accession diversity in outcrossing species and is also present in non-negligible quantities in inbreeding species 10 , 17 (Fig. 2 ). Identity-by-state analyses can pinpoint duplicated genotypes. However, even in inbreeding species, genotyping a single individual per accession is not sufficient to establish the ‘identity by descent’ of seed lots but can provide only a first hint of potentially duplicated accessions. Corroborating evidence of the common origin of two accessions can be adduced from (i) examination of marker data for a larger sample of individuals from both accessions, (ii) field evaluation in which candidate duplicates are grown side by side, (iii) inspection of the original (paper) passport records and (iv) comparison of vouchers in the genebank’s herbarium. 
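The identity-by-state screen mentioned above can be sketched as follows. Accession names and dosage vectors are hypothetical, and, as the text stresses, hits are only candidate duplicates, not proof of identity by descent.

```python
import numpy as np

def ibs(g1, g2):
    """Identity-by-state similarity of two dosage vectors (0/1/2):
    1.0 means identical genotypes at every marker."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    return 1.0 - np.abs(g1 - g2).mean() / 2.0

def candidate_duplicates(collection, threshold=0.99):
    """Return accession pairs whose IBS similarity meets the threshold.

    collection: {accession_name: dosage_vector}. Flagged pairs still
    need corroboration (more individuals, field trials, paper records).
    """
    names = sorted(collection)
    return [(a, b)
            for i, a in enumerate(names) for b in names[i + 1:]
            if ibs(collection[a], collection[b]) >= threshold]
```

Intra-accession heterogeneity is why a single genotyped individual per accession only yields this kind of first hint.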
The list of duplicates, which may comprise thousands of entries 10 , must be worked through case by case. Simply dragging along duplicates through future propagation cycles may be less expensive than spending time and effort on compiling validation datasets and tracing back the ‘lines of descent’ of thousands of accessions—detective work that may often prove futile and discouraging when perusing decade-old passport records leads to dead ends. Nevertheless, we believe that duplicate cleansing should be attempted at least once to derive reliable cost estimates and then compare them to the accumulating costs of maintaining superfluous material. In addition, users of genebanks would profit, because the risk of inadvertently phenotyping duplicated material would be greatly reduced. Fig. 2: Identification of duplicates is complicated by heterogeneity within accessions. Genebank duplicates arise from the preservation of two seed lots (shown as glass jars with seeds) that trace back to the same original seed lot but are now maintained as different accessions. Information about the relationships between duplicated seed lots does not exist or is not accessible (for example, because of language barriers or a lack of digitalization). Different genotypes within accessions are represented as seeds of different sizes and colors. In the case of homogeneous accessions (for example, modern cultivars of inbred crops), duplicated accessions may be identified by comparing multiple individuals, all of which will be genetically identical. Heterogeneity is found within accessions of outcrossing species and in landraces of inbred crops that are mixtures of different (pure-breeding) genotypes. In these cases (middle), genotypic data even of many individuals per accession may not suffice to reliably detect duplicates, because diversity within an accession may be as high as that between accessions 52 . 
Pairs of closely related accessions (right), such as two landraces collected from neighboring villages or accessions of clonally propagated crops differing for somatic mutations with phenotypic effects (for example, tuber-color mutants in potato), may appear genetically highly similar but are not the results of duplications during ex situ conservation. Full size image In addition to the retrospective analysis of duplication and the correctness of passport records, we also envision genetic profiles as a tool for continuous monitoring of genetic integrity. When entering the genebank, seed bags represent a snapshot of genetic diversity at a particular time and place 1 . As seeds are stored ex situ, the sample composition changes by force of circumstance. After many decades of research into optimal storage conditions, many genebanks operate seed dryers and cold-storage facilities to extend the longevity of seeds and decrease the need for regenerating accessions in the field. However, none of these efforts prevent spontaneous mutations. Moreover, small effective population sizes amplify random genetic drift, and new selection pressures arise in the foreign environments of genebank fields (Fig. 1 ), thus necessarily altering allele frequencies and potentially resulting in an irrecoverable loss of valuable genetic variation. If cost-effective high-throughput methods were available for non-destructive DNA extraction from seeds and subsequent genotyping, entire seed lots could be genotyped to reveal mislabeling or cross-contamination from neighboring plots. The inspection of allele-frequency trajectories across multiple multiplication cycles would pinpoint haplotypes on the verge of extinction that could then be maintained under special growth regimes. Are we building castles in the air when we consider genotyping millions of seeds? The ‘time scale of concern’ of genebank managers as ‘crop evolutionists’ is on the order of centuries 4 . 
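The allele-frequency trajectory monitoring envisioned above could be sketched in a few lines; the dosage data and the 5% rescue floor are illustrative assumptions, not an operational standard.

```python
def allele_frequency(dosages):
    """Alternate-allele frequency per marker from 0/1/2 dosages
    (rows = genotyped individuals of one seed lot, columns = markers)."""
    n = len(dosages)
    return [sum(row[j] for row in dosages) / (2.0 * n)
            for j in range(len(dosages[0]))]

def at_risk_markers(freq_before, freq_after, floor=0.05):
    """Markers whose allele frequency has drifted below the floor
    between two multiplication cycles: candidates for rescue under
    special growth regimes."""
    return [j for j, (fb, fa) in enumerate(zip(freq_before, freq_after))
            if fb >= floor and fa < floor]
```

Comparing frequency vectors from successive seed lots in this way would flag haplotypes on the verge of extinction before they are lost.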
The interval between two regeneration cycles of many orthodox seeds stored in cold rooms can span the entire career of a single scientist. If single plants can now be genotyped from tens of thousands of accessions, is it unreasonable to speculate that our successors 40 years hence will be able to genotype each seed that we now commit to cold storage? Certainly, the implementation of identity monitoring at the level of an entire collection will require drastic reductions in cost and input requirements as well as increases in the throughput of DNA extraction and genotyping methods. First steps toward using high-throughput genotyping to monitor genetic integrity may be the comparison of allele frequencies in seed lots from past propagation cycles for selected accessions, possibly also including duplicates from different genebanks. Similarly, ancient-DNA methods 18 may enable genotyping of herbarium vouchers dating back to the entry of an accession into a genebank and comparison of the original genetic makeup to its current offspring. As genebanks increasingly rely on digital representations of DNA sequence data, questions about sustainable data management come to the fore. Currently, genotyping results based on the first generations of molecular markers, such as RFLPs or microsatellites 19 , are difficult to relate to digital marker matrices. Will our sequence data, marker scores and analysis results be understood by genebank managers in the year 2100? Or will relational databases be bemoaned then as paper records are bemoaned now? We believe that the findable, accessible, interoperable and reusable (FAIR) principles for open data sharing, documentation and archiving that are now becoming community standards in computational biology should be adopted in genebank management 20 . 
Bulk publication of raw data with the relevant metadata and the use of open-source software for analysis facilitate data integration between different genebanks. Open data formats adhering to precisely defined specifications, such as the Multi-crop Passport Descriptors (MCPD), are already used by many genebanks to store passport data, genetic information and field evaluation records. First steps in this direction have been taken through the design, implementation and adoption of the Breeding API (BrAPI) 21, an international standardized programmatic interface for genebank records, and the assignment of digital object identifiers to accessions. Minimum information about a plant phenotyping experiment (MIAPPE) 22 has been proposed as an interoperability standard for phenotypic data. Owing to the large variety of genotyping platforms, a community standard for exchanging genotypic data for PGRs has not yet been agreed upon. As genebanks transform into bio-digital resource centers, management of PGRs will become an interdisciplinary effort. Expertise in agronomy, plant genetics and seed biology will need to be supplemented by computational skills to generate, analyze and share molecular passport data. Data curation will be central to the implementation of genomics-assisted collection management. Even if automated standard operating procedures should be the goal, manual curation by experts in PGR management will be required, at least initially, to define data density, thresholds and corroborating evidence to make management decisions, and to go back to original passport records to validate them (Box 1). Digital cross-referencing of genebank accessions with molecular passport data at the global level has the potential to reduce unnecessary duplications, help close coverage gaps in collections and guide new collecting missions.
Although the merits of open data and international collaborations are universally recognized in the scientific community, an unanticipated threat to the vision of data sharing between genebanks has surfaced in the proposed amendments to the Nagoya protocol to cover digital sequence information. In the worst case, these amendments would ban the public release of sequence information for genebank accessions 23. Such a ban would probably not disrupt genebank management operations. However, without access to integrated information systems combining passport records, genotypic data and field observations, potential users of PGRs (such as plant breeders and research geneticists) would face great obstacles in accessing genebank material. Schemes for benefit sharing between donors and users of PGRs must not stifle genomics-assisted conservation efforts in their infancy but must ensure that data sharing policies are adapted to scientific practices and enable technological advances in genebank management. Box 1 Correcting passport records on the basis of genetic clustering We give one example of how genetic analyses based on molecular passport data can help to improve collection management: in their survey of the genetic diversity of barley accessions maintained in the German Federal ex situ genebank, Milner et al. 53 came across many purportedly wild barley accessions originating from Ethiopia. In a principal component analysis based on more than 100,000 genetic markers, these accessions clustered with domesticated barley, in agreement with the notion that wild barley is not native to Ethiopia 54. Manual inspection of the passport records for these accessions indicated that the material had been introduced in 2003. As seed material was transferred, the corresponding passport records were imported into the genebank information system (GBIS) 55.
The biological status of accessions (for example, modern cultivar or traditional landrace) is encoded by numeric values in the Multi-crop Passport Descriptors. The entries of the putatively mis-assigned Ethiopian accessions were ‘99’ (other type, with additional remarks). As passport records of the newly introduced accessions were imported into the GBIS, this value was erroneously converted to ‘100’ (wild). Based on the available genetic data, this mistake was readily identified and fixed. GBIS now lists the correct domestication status for the Ethiopian accessions. Speed pre-breeding The preceding section focused on how genomics can assist genebank managers in improving the accessibility and documentation of their holdings. Next, we focus on methodological advances in pre-breeding and genetics that facilitate the identification of useful genetic variation and expedite its deployment in breeding programs. In contrast to genebank management, the timescale of concern here is approximately two decades, spanning the interval between obtaining a seed from cold storage and the registration of a new variety 4. These efforts would not be made by genebanks alone but instead would be carried out together with quantitative geneticists as well as public and private breeders. The first step toward the utilization of PGRs is the informed selection of the most promising genotypes for pre-breeding (Fig. 3). Such an educated choice would require phenotypic characterization of accessions, which profits from the use of fixed genotypes that can be regenerated to enable repeated measurements. In inbreeding crops, an effective means of obtaining ample amounts of genetically identical plants is single-seed descent and subsequent seed increase. Part of this work can be carried out by genebanks.
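The kind of consistency check described in Box 1 can be sketched as a comparison of passport status codes against genetic cluster assignments (for example, from a PCA of SNP data). The accession IDs below are invented, and the SAMPSTAT subset is a simplified rendering of the Multi-crop Passport Descriptors' biological-status codes:

```python
# Simplified subset of MCPD SAMPSTAT biological-status codes.
SAMPSTAT = {100: "wild", 300: "landrace", 500: "cultivar", 999: "other"}

def flag_status_conflicts(accessions):
    """Return IDs of accessions whose passport record says 'wild' but
    whose genetic cluster assignment says 'domesticated', as with the
    mis-converted Ethiopian barley accessions in Box 1."""
    conflicts = []
    for acc in accessions:
        status = SAMPSTAT.get(acc["sampstat"], "unknown")
        if status == "wild" and acc["genetic_cluster"] == "domesticated":
            conflicts.append(acc["id"])
    return conflicts

# Hypothetical records: HOR_1 carries the erroneous 'wild' code.
records = [
    {"id": "HOR_1", "sampstat": 100, "genetic_cluster": "domesticated"},
    {"id": "HOR_2", "sampstat": 100, "genetic_cluster": "wild"},
]
print(flag_status_conflicts(records))  # ['HOR_1']
```

In practice such flags would trigger manual inspection of the original passport records, as the curators did for the Ethiopian accessions.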
For example, pure-breeding seed stocks may be derived by single-seed descent from a single plant picked at random from each accession and maintained as ‘precision collections’ of homozygous genotypes in addition to existing accessions capturing intra-accession diversity. Users might be charged fees to access precision collections, thus compensating for the extra costs of seed multiplication, storage and distribution. For outbreeding species, fixing genotypes through inbreeding is challenging because of the heavy genetic load 24. As an alternative, Romero Navarro et al. 25 have used a single heterozygous individual per maize accession and crossed it to a defined inbred line tester. The resultant F1 families are mosaics of the heterozygous genotypes and were used for evaluation in multi-environment field trials. The disadvantage of this approach is the loss of the donor genotype by segregation. In contrast, Böhm et al. 26 have derived doubled-haploid lines from several highly heterozygous individuals of open-pollinated maize accessions, thus purging lethal or detrimental alleles in one sharp selection step and rapidly producing fixed genotypes ready for large-scale phenotypic evaluation. Doubled-haploid production could be carried out by genebanks, and cost recovery could be achieved through fees, as with precision collections of inbreeders. Fig. 3: Selection of the most promising genotypes for pre-breeding. Genotypic and phenotypic information enables the informed selection of the most promising genetic resource to enter pre-breeding programs. Genetic studies and breeding require the fixation of genotypes for repeated phenotypic measurement (for example, in replicated multi-environment field trials). Fixation strategies differ between inbreeding and outcrossing species. Depending on the genetic architectures of the traits, entire collections can be phenotyped, or core collections maximizing genetic diversity may need to be defined first.
Then core selections can be iteratively refined through either direct observation or genome-wide prediction of phenotypes. Genotypes with the highest breeding values may then enter pre-breeding programs or be used in genetic studies to elucidate the molecular basis of beneficial traits of PGRs. DH, doubled haploid. The choice of a pre-breeding strategy depends mainly on whether entire collections can be phenotyped or whether this task would require too many resources. In cases of resistance to some diseases 27, phenotyping entire collections is now feasible because of innovations in high-throughput screening technologies 28 (Fig. 3). Molecular passport data can reduce the workload by excluding duplicates. Phenotypic data for entire collections allow trait-customized core collections to be defined. These collections can underpin genome-wide association mapping after whole-genome sequencing. In the best case, these approaches directly lead to the discovery of candidate genes 29. After causal genes have been identified, allele mining is an effective means to search for further useful variation 30 and to ultimately validate the novelty of haplotypes, whose deployment could broaden the genetic variation in the elite pool. In the near future, this technique may be complemented by genome editing 31. The costs and workload required for phenotyping complex traits, such as grain yield, drought tolerance or quality, mean that phenotyping must be restricted to core sets. Other obstacles are genotype-by-environment interactions and maladaptation of landraces to current agricultural practices. Crossing PGRs to adapted elite testers is an effective approach to evaluate the performance of exotic genotypes in modern agriculture that has been used for both outcrossing and inbreeding species 32, 33 to partially overcome the phenotyping bottleneck.
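One common way to define a diversity-maximizing core collection, as mentioned above, is greedy max-min selection on a genetic distance matrix: repeatedly add the accession farthest from those already chosen. This is only one of several possible strategies, sketched here with a tiny hypothetical distance matrix:

```python
def greedy_core_set(dist, k):
    """Greedy max-min core-set selection.

    dist -- symmetric genetic distances as a dict of dicts,
            e.g. dist["A"]["B"] = 1.0 (hypothetical values)
    k    -- desired core-set size
    """
    ids = list(dist)
    core = [ids[0]]  # seed with an arbitrary first accession
    while len(core) < k:
        # Pick the accession whose nearest already-chosen neighbor
        # is farthest away, maximizing the diversity of the core.
        best = max((i for i in ids if i not in core),
                   key=lambda i: min(dist[i][c] for c in core))
        core.append(best)
    return core

dist = {"A": {"A": 0, "B": 1, "C": 5},
        "B": {"A": 1, "B": 0, "C": 4},
        "C": {"A": 5, "B": 4, "C": 0}}
print(greedy_core_set(dist, 2))  # ['A', 'C']
```

Such a core set would then be phenotyped and, as described next, iteratively refined by direct observation or genome-wide prediction.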
Dominant alleles at major domestication and adaption loci inherited from the elite testers may reduce the effects of deleterious alleles, thus decreasing biases in breeding-value estimation of PGRs. Initial core sets for phenotyping are chosen to maximize estimates of neutral diversity derived from molecular passport data (Fig. 3 ). Yu et al. 34 have proposed the application of genome-wide prediction, which operates under the assumption that genetic variation for agronomic traits is controlled not by a few large-effect loci but by a large number of small-effect loci 35 , 36 . Trained on phenotypes of representative core sets, genome-wide prediction models use genetic marker data to infer phenotypes. Thus, molecular passport data together with genome-wide prediction enable the imputation of phenotypes for many PGRs and can aid in iterative refinement of core collections. Moreover, prediction models can be used to impute phenotypic values across genebanks: if, for instance, accessions from two major genebanks in Europe and Central America were genotyped with a common marker platform, and phenotypic data for one target environment (for example, Europe) were available for the European genebank’s accessions, the breeding values for accessions from the Central American genebank (which are usually grown in an environment very different from European conditions) could be predicted from the marker data. Given the availability of molecular passport data and phenotypic observations to train prediction models, this approach would enable users to screen the world’s genebanks for all accessions likely to have beneficial traits (for example, pathogen resistance) or those well adapted to target environments of interest. An important caveat is that valuable novel diversity is most probably rare. Thus, training populations must be compiled with utmost caution 37 . 
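Genome-wide prediction as described above can be sketched as ridge regression of phenotypes on marker dosages, a simplified stand-in for the GBLUP-type models used in practice. The simulated data below stand in for a phenotyped training core set; nothing here reflects real genebank data:

```python
import numpy as np

def ridge_gblup_predict(X_train, y_train, X_new, lam=1.0):
    """Ridge regression of phenotypes on marker dosages (0/1/2):
    all marker effects are shrunk jointly, matching the
    many-small-effects assumption, then used to impute breeding
    values for accessions that were genotyped but never phenotyped."""
    m = X_train.shape[1]
    # Closed-form ridge solution: beta = (X'X + lam*I)^-1 X'y
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(m),
                           X_train.T @ y_train)
    return X_new @ beta

# Simulated stand-in: 200 accessions, 50 markers; the first 150 are
# 'phenotyped' (training), the remaining 50 are prediction targets.
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 50)).astype(float)
y = X @ rng.normal(0, 1, 50) + rng.normal(0, 0.5, 200)
predicted = ridge_gblup_predict(X[:150], y[:150], X[150:])
```

With a common marker platform, the same trained model could score accessions from another genebank, which is the cross-genebank imputation scenario sketched in the text.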
In the long term, genebanks should aim at phenotyping entire collections even for complex traits to also capture valuable low-frequency alleles. After promising genetic resources and genomic regions associated with plant performance have been narrowed down, the next steps are crossing selected genotypes to modern breeding lines, validating breeding-value estimates and subsequently developing commercial varieties. New developments in breeding methodology could expedite this process considerably. These include speed breeding 38 , that is, the use of highly artificial growth conditions that dramatically accelerate plant development to enable rapid generation cycles. Simultaneously, quantitative genetics methods continue to evolve. Recent advances include reciprocal genomic selection 39 to increase genetic gain per time unit, and strategies for designing heterotic groups to establish hybrid breeding programs in inbreeding crops 40 . These techniques follow the traditional paradigm of plant breeding: recurring cycles of crosses and selections. Targeted genetic modifications have long been considered a shortcut for transferring beneficial alleles from exotic donors into elite material. Genetic engineering would enable the rapid transfer of beneficial traits between crops and their wild relatives in both directions. Wulff and Dhugga 41 have proposed resistance-gene cassettes as a means of introducing durable resistance from diploid relatives into hexaploid bread wheat. In contrast, several groups have very recently reported on the (re-)domestication of solanaceous crops through CRISPR–Cas9-mediated gene knockout 42 , 43 , 44 . However, consumer acceptance of these methods remains low in many countries, thus indicating a need for better science communication 45 . A scientific objection, which may ultimately be of greater consequence, is that gene editing is based on the premise of having editable targets at hand. 
Such availability may be the case for qualitative resistance or the developmental ‘defects’ of the domestication syndrome; however, the difficulty of engineering complex agronomic traits may be commensurate with that of understanding the underlying biological mechanisms. The genebank-to-phenotype map The logical extension of the concepts touched upon in the preceding section is a complete and accurate genotype-to-phenotype map 46 for all individuals stored in the genebank. With this map, simple database queries would lead to precise estimates of the genetic value of any given seed stored in a cold room or any plant growing in the genebank’s field. This map would not only predict quantitative outputs but also describe the mechanisms governing the degree to which the alleles, genes and haplotypes expressed in a given plant contribute to plant performance, and how each of these factors would perform when introgressed into a new background. This process would amount to adding the dimension of intraspecific diversity to the gene-regulatory network in the sense of Davidson 47. As plant geneticists, we should not be satisfied with the commercial successes of black-box predictions from genomic selection models 48, but we should strive for a deeper understanding of how genomic information stored in seeds is translated into plant performance. The diversity in the developmental genetics of edible organs 49, 50 and in the biosynthesis of secondary metabolites makes it unlikely that research in model plants such as Arabidopsis thaliana or Brachypodium distachyon alone will suffice to unravel the molecular basis of yield formation, pest resistance and resource efficiency in crops. Implementing the vision of a genebank-to-phenotype map is an endeavor matching the timescale of concern of Frankel’s ‘crop evolutionist’ 4, but it has the potential for multiplying the benefits derived from ‘the preservation of variability in an exploitable form’ 51.
The preservation of plant biodiversity is the task of the roughly 1,750 gene banks distributed around the world. They store plant samples, around 7.4 million accessions of plant species in total, sometimes together with additional phenotypic or genetic information. It is expected that with facilitated access to improved, quicker and cheaper sequencing and other omics technologies, the number of well-characterised accessions and the amount of detailed information that needs to be stored along with the biological material will grow rapidly and continuously. A team of scientists from the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) in Gatersleben has now looked ahead at the upcoming challenges and possibilities of the future of gene banks by publishing a perspective paper in Nature Genetics. In the early to mid-20th century, it became increasingly apparent that crop landraces were slowly being replaced by modern crop varieties and were in danger of disappearing. In order to prevent loss of genetic diversity and biodiversity, the first gene banks were established with the mission to preserve these plant genetic resources. Today, gene banks function as biorepositories and safeguards of plant biodiversity, but most importantly as libraries that turn the genetic plant information and plant material into a freely accessible and valuable resource. As such, scientists, plant breeders and anyone around the world can request and use the data stored within more than 1,750 global gene banks for research or plant breeding purposes. The Gene Bank of the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) in Gatersleben currently holds one of the world's most comprehensive collections of crop plants and their wild relatives, collating a total of 151,002 accessions from 2,933 species and 776 genera.
The majority of the plant germplasm samples are stored as dry seed at -18°C, while accessions propagated vegetatively are permanently cultivated in the field (ex situ) or preserved in liquid nitrogen at -196°C. The online portal of the IPK gene bank allows users to view and sift through the stored plant accessions and their corresponding passport data, as well as to request plant material on a non-commercial scale. A new perspective paper authored by Dr. Martin Mascher and colleagues of the IPK now examines the current and upcoming challenges for gene banks, and the opportunities for their further advancement. The scientists identified three major challenges for gene banks that need attention. Two are caused by the basic demands of managing tens of thousands of seed lots, namely, the tracking of the identity of accessions, and the need to avoid unnecessary duplications within and between gene banks. The third challenge is that of maintaining the genetic integrity of accessions, due to the inherent drawbacks of using ex situ conservation such as differential survival, drift and genetic erosion in storage and regeneration. However, the authors suggest that a stronger genomic-driven approach towards gene banks might help when taking on these challenges. For example, traditionally, the "passport data" of the gene bank material describe the taxonomy and provenance of accessions. By adding single-nucleotide polymorphisms (SNPs) as defining characteristics of an accession, this genotypic information could serve as molecular passport data to complement and correct traditional passport records, as well as assist with the cleansing and prevention of duplicates and improve the quality and integrity of the collections. 
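Using SNP profiles as molecular passport data to detect putative duplicates, as suggested above, amounts to computing pairwise identity-by-state (IBS) and flagging near-identical pairs. A minimal sketch with dosage-coded genotypes and an arbitrary similarity threshold:

```python
def ibs_similarity(geno_a, geno_b):
    """Identity-by-state between two accessions' SNP profiles coded
    0/1/2 (alternate-allele dosage); missing calls (None) are skipped."""
    shared, compared = 0.0, 0
    for a, b in zip(geno_a, geno_b):
        if a is None or b is None:
            continue
        compared += 1
        shared += 1 - abs(a - b) / 2  # contributes 0, 0.5 or 1 per locus
    return shared / compared if compared else float("nan")

def candidate_duplicates(profiles, threshold=0.99):
    """All pairs of accessions whose IBS exceeds a (hypothetical)
    threshold; such pairs would be queued for curator review, since
    landraces from neighboring villages can also be near-identical."""
    ids = sorted(profiles)
    pairs = []
    for i, x in enumerate(ids):
        for y in ids[i + 1:]:
            if ibs_similarity(profiles[x], profiles[y]) >= threshold:
                pairs.append((x, y))
    return pairs
```

The threshold matters: as the perspective notes, high genetic similarity alone does not prove a conservation duplicate, so flagged pairs need corroborating passport evidence.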
By implementing the shift towards bioinformatics and big data analytics in plant sciences, traditional gene banks, which focus on the preservation of germplasm collections, will be able to transform into bio-digital resource centres, which combine the storage and valorisation of plant materials with their genomic and molecular characterisation. Current funding scenarios of gene banks do not yet allow for the systematic generation of molecular passport data for each plant sample submitted to gene banks. However, first steps in the direction of high-throughput genotyping of entire collections have already been taken. This was previously showcased by an international research consortium led by the IPK, which characterised a world collection of more than 22,000 barley varieties on a molecular level through genotyping-by-sequencing. Some of the authors of the perspective paper were also involved in this case study and contributed to the creation of the web information portal BRIDGE as a result. BRIDGE, short for "Biodiversity informatics to bridge the gap from genome information to educated utilization of genetic diversity hosted in gene banks," is a data repository for the genomic barley information obtained, which links to the phenotypic information collated at the IPK-hosted Federal Ex situ Gene Bank for Agricultural and Horticultural Crop Species. Whilst BRIDGE is already paving the way towards evolving the Gatersleben Gene Bank into a "one stop shop for facilitated and informed utilisation of crop plant biodiversity," international collaborations, such as the organisation DivSeek, are building the international framework for enabling gene banks, plant breeders and researchers globally to more efficiently process and mobilise plant genetic diversity, thus starting to bridge the gaps between bioinformaticians, geneticists and gene bank curators.
Hence, a worldwide network of bio-digital resource centres, sharing data freely and thus helping to foster research progress in plant science and plant breeding, may become a reality in the near future.
10.1038/s41588-019-0443-6
Medicine
New benchmark could improve detection of genetic variants linked to spinal muscular atrophy, other diseases
Chen-Shan Chin, Curated variation benchmarks for challenging medically relevant autosomal genes, Nature Biotechnology (2022). DOI: 10.1038/s41587-021-01158-1. www.nature.com/articles/s41587-021-01158-1 Journal information: Nature Biotechnology
http://dx.doi.org/10.1038/s41587-021-01158-1
https://medicalxpress.com/news/2022-02-benchmark-genetic-variants-linked-spinal.html
Abstract The repetitive nature and complexity of some medically relevant genes pose a challenge for their accurate analysis in a clinical setting. The Genome in a Bottle Consortium has provided variant benchmark sets, but these exclude nearly 400 medically relevant genes due to their repetitiveness or polymorphic complexity. Here, we characterize 273 of these 395 challenging autosomal genes using a haplotype-resolved whole-genome assembly. This curated benchmark reports over 17,000 single-nucleotide variations, 3,600 insertions and deletions and 200 structural variations each for human genome references GRCh37 and GRCh38 across HG002. We show that false duplications in either GRCh37 or GRCh38 result in reference-specific, missed variants for short- and long-read technologies in medically relevant genes, including CBS, CRYAA and KCNE1. When masking these false duplications, variant recall can improve from 8% to 100%. Forming benchmarks from a haplotype-resolved whole-genome assembly may become a prototype for future benchmarks covering the whole genome. Main Authoritative benchmark samples are driving the development of technologies and the discovery of new variants, enabling highly accurate clinical genome sequencing and advancing our detection and understanding of the impact of many genomic variations on human disease at scale. With recent improvements in sequencing technologies 1, assembly algorithms 2, 3, 4 and variant-calling methods 5, genomics offers more insights into challenging genes associated with human diseases across a higher number of patients 6. Still, challenges remain for medically relevant genes that are often repetitive or highly polymorphic 7, 8. In fact, a recent study found that 13.8% (17,561) of pathogenic variants identified by a high-throughput clinical laboratory were challenging to detect with short-read sequencing 9.
These included challenging variants such as variants 15–49 bp in size, small copy-number variations (CNVs), complex variants and variants in low-complexity or segmentally duplicated regions. The Genome in a Bottle (GIAB) consortium develops benchmarks to advance accurate human genomic research and clinical applications of sequencing. GIAB provides highly curated benchmark sets for single-nucleotide variant (SNV) 10 , small insertion and deletion (INDEL) 10 and structural variant (SV) calling 11 . Here, we define SNVs as single base substitutions, while INDELs are defined as insertions and deletions smaller than 50 bp, in contrast to insertions and deletions larger than 50 bp, which we refer to as SVs. Furthermore, GIAB and the Food and Drug Administration (FDA) host periodic precisionFDA challenges providing a snapshot and recommendations for small variant calling enabling the high precision and sensitivity required for clinical research, with a recent challenge demonstrating the importance of including more difficult genomic regions 12 . Recently, GIAB focused primarily on a read mapping-based genome-wide approach integrating short-, linked- and long-read sequencing to characterize up to 92% and 86% of the autosomal bases for small variants and SVs, respectively 11 , 13 . GIAB also released a targeted assembly-based benchmark for the major histocompatibility complex (MHC) region, a highly diverse and repetitive region of the human genome that includes the human leukocyte antigen (HLA) genes 14 . Still, multiple regions of the genome are not fully resolved in existing benchmarks due to repetitive sequence, segmental duplications and complex variants (i.e., multiple nearby SNVs, INDELs and/or SVs) 15 . Many clinically relevant genes are in the remaining hard-to-assess regions. The clinical tests for these genes often require locus-specific targeted designs and/or employ multiple technologies and are only applied when suspicion of a specific disorder is high. 
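The size conventions defined in this passage (SNV, INDEL, SV) can be captured in a small helper. Two points are our additions, not categories from the paper: the boundary case of exactly 50 bp is counted as an SV here, following the common convention, and same-length multi-base substitutions are given their own 'MNV' bucket:

```python
def classify_variant(ref, alt):
    """Bucket a variant by REF/ALT allele length, using the size
    conventions stated in the text: SNV = single-base substitution,
    INDEL = length change under 50 bp, SV = length change of 50 bp
    or more (counting the 50-bp boundary itself as an SV is the
    common convention; the paper's wording leaves it ambiguous)."""
    if len(ref) == 1 and len(alt) == 1:
        return "SNV"
    size = abs(len(alt) - len(ref))
    if size == 0:
        return "MNV"  # same-length substitution; our own extra bucket
    return "INDEL" if size < 50 else "SV"
```

Such a classifier is what separates the "challenging" 15-49 bp variants mentioned above from the SVs handled by the dedicated SV benchmark.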
Mandelker et al. categorized genes based on their repetitive content and identified 193 genes that cannot be fully characterized by short-read sequencing 7 . This gene set was constructed by identifying genes with low mapping quality in the clinical databases OMIM, HGMD and ClinVar. Subsequently, Wenger et al. showed that while short reads could not accurately map the full length of these genes, highly accurate long reads could fully map 152 (78.76%) of them 1 . The latest v4.2.1 GIAB small variant benchmark regions included at least 90% of the gene body for 110 of the 159 difficult genes on autosomes 13 . In contrast, the previous v3.3.2 GIAB small variant benchmark regions included at least 90% of the gene body for only 19 of 159 difficult genes 10 . Although v4.2.1 includes substantially more difficult genes, variant calls in the remaining most difficult genes still need to be assessed, and challenges remain with typical mapping-based approaches in some genes, even when using highly accurate long reads. To support ongoing advancements in clinical genome sequencing and bioinformatics, we present a more comprehensive benchmark of challenging, medically relevant genes (CMRGs) focusing on HG002, which has a broad consent from the Personal Genome Project for open genomic data and commercial redistribution 16 (Fig. 1 ). With the advent of highly accurate long reads, new approaches for haplotype-resolved (diploid) assembly have advanced rapidly 2 , 3 . Here, we focus on generating a benchmark for as many of these genes as possible using a whole-genome haplotype-resolved assembly. We curated a set of 273 medically relevant genes with ≤90% of bases included in previous GIAB benchmarks but fully covered by both haplotypes of a trio-based hifiasm assembly. The assembly included all phased small variants and SVs across these genes. 
Then, we delineated regions where we can provide reliable small variant and SV benchmarks, developing a prototype process for future whole-genome assembly-based benchmarks. Fig. 1: GIAB developed a process to create new phased small variant and SV benchmarks for 273 CMRGs. a , We developed a list of 4,701 autosomal potentially medically relevant genes. We generated a new benchmark for 273 of the 4,701 genes that were completely resolved by our hifiasm haplotype-resolved diploid assembly and ≤90% included in the v4.2.1 GIAB small variant benchmark for HG002 (v4.2.1 regions). b , We required that the entire gene region (pink) and the 20-kb flanking sequence on each side (blue) be completely resolved by both haplotypes in the assembly (Hifiasm Hap1 and Hifiasm Hap2), indicated with the Hifiasm dipcall bed track. In addition, we required that any segmental duplications overlapping the gene be completely resolved. From the small variant benchmark regions (CMRG SV, blue bars), we excluded SVs and any tandem repeats or homopolymers overlapping SVs (right: tandem repeat and homopolymer (TR and homopol.) region in brown). The left tandem repeat and homopolymer region (in brown) is excluded from the small variant benchmark regions because the larger tandem repeat contains an imperfect homopolymer longer than 20 bp, which we exclude because long homopolymers have a higher error rate in the assembly. All regions of this gene were included in the SV benchmark regions (CMRG SV, blue bar). The vertical red lines in CMRG small variant and CMRG SV indicate locations of benchmark small variants and SVs, respectively. Finally, we evaluated the small variant and SV benchmarks with manual curation and long-range PCR and also ensured they accurately identify false positives and false negatives after excluding errors found during curation. 
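Excluding long homopolymers from the small-variant benchmark regions, as described in the figure caption above, requires locating runs longer than 20 bp. The sketch below finds only exact runs of a single base; the imperfect homopolymers mentioned in the text would need a fuzzier scan that tolerates interrupting bases:

```python
def long_homopolymer_runs(seq, min_len=21):
    """Return (start, end) half-open intervals of single-base runs
    longer than 20 bp, the length above which the CMRG small-variant
    benchmark excludes homopolymers due to elevated assembly error
    rates."""
    runs, i, n = [], 0, len(seq)
    while i < n:
        j = i
        while j < n and seq[j] == seq[i]:
            j += 1  # extend the current run
        if j - i >= min_len:
            runs.append((i, j))
        i = j
    return runs
```

The resulting intervals would be subtracted from the benchmark bed regions alongside tandem repeats overlapping SVs.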
Results Identification of CMRGs To prioritize genome regions for the expanded benchmark, we identified several lists of potentially medically relevant genes. The first was a list of 4,773 potentially medically relevant genes from the databases OMIM, HGMD and ClinVar previously compiled in 2012, which includes both commonly tested and rarely tested genes (Supplementary Table 13 in ref. 7). The second was a list from the COSMIC gene census, which contains 723 gene symbols found in tumors 17. We also developed a focused list of high-priority clinical genes that are commonly tested for clinical inherited diseases (Supplementary Data 1). There are 5,175 gene symbols in the union of these sets, of which 5,027 have unique coordinates on the primary assembly of GRCh38 and valid ENSEMBL annotations, and 4,697 are autosomal; 70% of these genes are specific to the list from OMIM, HGMD and ClinVar, which includes genes associated with disease in a small number of studies and thus are currently tested more frequently in research studies than in high-throughput clinical laboratories (Fig. 1a). Supplementary Note 1 and Supplementary Data 2 show our analysis of the fraction of these 4,697 autosomal medically relevant genes included in our previous benchmark (v4.2.1), resulting in 395 genes that were ≤90% included in GRCh37 or GRCh38 and are the focus of this manuscript. Assembly enables phased small variant and SV benchmarks Many of the 395 medically relevant genes were not covered well by the v4.2.1 small variant benchmark due to SVs, complex variants and segmental duplications (Fig. 2). Thus, here we resolve many of these regions using a haplotype-resolved assembly of HG002 constructed by hifiasm 2.
Hifiasm can resolve both haplotypes with high base-level quality (quality value > 50), including many segmental duplications, and produces variant calls and genotypes that are highly concordant with the v4.2.1 small variant benchmark, with both recall and precision >99.7% for SNVs and >97% for INDELs in regions covered by the assembly (Supplementary Data 3). Fig. 2: The new CMRG benchmark contains more challenging variants and regions than previous benchmarks. a, Fraction of each gene region (blue) and exonic regions (red) included in the new CMRG small variant or SV benchmark regions. b, Comparison of the fraction of challenging sequences and variants for genes included in the new CMRG benchmark versus the previous v4.2.1 HG002 benchmark versus genes excluded from both benchmarks. 99% of all CMRG benchmark genes have challenging sequences or variants in at least 15% of the gene region. The catalog of repetitive challenging sequences comes from GIAB and the Global Alliance for Genomics and Health (see ‘difficult context’ in Table 1). Challenging variants for HG002 are defined as complex variants (i.e., more than one variant within 10 bp) as well as putative SVs and putative duplications excluded from the HG002 v4.2.1 benchmark regions. c, Size distribution of INDELs in the small variant benchmark, which includes some larger INDELs in introns (light blue) and exons (dark blue). d, Size distribution of large insertions and deletions in the SV benchmark in introns (light blue) and exons (dark blue). We generated a benchmark (Methods) for 273 of the 395 genes that were fully resolved by this assembly. To be included in the CMRG benchmark, the entire gene, including the 20-kb flanking sequence (the longest reads used for the assembly) on each side and any overlapping segmental duplications, needed to have exactly one fully aligned contig from each haplotype with no breaks on GRCh37 and GRCh38 (Supplementary Data 2).
We required the alignments to completely resolve any overlapping segmental duplications to minimize ambiguity or errors in the assembly–assembly alignment. These 273 genes are substantially more challenging than genes previously covered by GIAB’s v4.2.1 benchmark; for example, for 99% of the new genes, at least 15% of the gene region is either challenging to sequence or contains challenging variants in HG002 (Fig. 2b). Here, we use the definition of challenging sequences from GIAB and the Global Alliance for Genomics and Health v2.0 stratifications 12, 18. Furthermore, when comparing variants in regions of the CMRG gene bodies included by the v4.2.1 benchmark, the CMRG benchmark or both benchmarks, 11% of the CMRG benchmark INDELs are >15 bp (Fig. 2c) compared with 3.5% in v4.2.1. The CMRG INDELs >15 bp are also substantially more challenging than the v4.2.1 INDELs >15 bp: the recall of HiFi DeepVariant decreases from 99.5% (v4.2.1) to 84.9% (CMRG), and its precision decreases from 99.9% (v4.2.1) to 94.2% (CMRG) (Supplementary Data 4). We created separate CMRG benchmark bed files for small variants and SVs, which both rely on the same benchmark variant calls from hifiasm. The CMRG benchmark extends beyond the v4.2.1 benchmark across the 273 challenging gene regions, adding many phased SNVs, INDELs and large insertions and deletions at least 50 bp in length overlapping these genes (Table 1).

Table 1 Number of bases and variants in different HG002 GIAB benchmark sets included in the 273 genes in the CMRG benchmark

Resolving CMRGs

Beyond previous GIAB benchmarks, this CMRG benchmark includes 273 more challenging genes. These include (1) genes that are duplicated in the reference but not in HG002, as described above; (2) highly homologous genes such as SMN1 and SMN2 or NCF1, NCF1B and NCF1C; and (3) genes with SVs and complex variants like RHCE.
The gene SMN1 resides within a large segmental duplication on chromosome 5 containing both SMN1 and SMN2 . Biallelic pathogenic variants in SMN1 result in spinal muscular atrophy (SMA), a progressive disorder characterized by muscle weakness and atrophy due to loss of neuronal cells in the spinal cord 19 . While the 28-kb sequences of SMN1 and SMN2 generally differ by only five intronic and three exonic nucleotides 20 , the identification and characterization of pathogenic variants in SMN1 and the copy-number state of SMN2 are relevant for guiding newly developed therapies and counseling families regarding recurrence risk of this disease. Some individuals have copy-number polymorphisms of these genes, but HG002 appears to contain one copy each of SMN1 and SMN2 on each haplotype based on the presence of two haplotypes for each gene in Oxford Nanopore Technologies (ONT) and 10x Genomics data. However, the genes are surrounded by complex repeats and are thus not fully resolved by our assembly (Fig. 3b ). The maternal assembly has a single contig passing through the SMA region but misses SMN2 and some of the surrounding repeats (dot plot in Supplementary Fig. 1 ). The paternal assembly contains both SMN1 and SMN2 , but the assembly is broken into three contigs in the SMA region (dot plot in Supplementary Fig. 1 ). Upon curation of the data from Pacific Biosciences (PacBio) HiFi, ultralong ONT and 10x Genomics in Fig. 3a , the variants called from the assembly of SMN1 were supported by ONT and 10x Genomics across the full gene and by PacBio HiFi across the part of the gene covered by reads. Because we manually confirmed the assembly accuracy in this gene, we included SMN1 in our benchmark even though the assemblies did not cover the segmental duplications within the entire SMA region. We excluded SMN2 because only one haplotype was resolved by hifiasm v0.11. 
Another challenging example is NCF1, which is associated with 20% of cases of chronic granulomatous disease, a primary immunodeficiency 21, 22. The gene lies within a large segmental duplication, which may make molecular diagnosis of some cases of chronic granulomatous disease challenging. Our benchmark covers the first two exons that were missing from the v4.2.1 benchmark (Supplementary Fig. 2).

Fig. 3: The new benchmark covers the gene SMN1, which was previously excluded due to mapping challenges for all technologies in the highly identical segmental duplication. a, Dotplot of GRCh38 against GRCh38 in the SMA region, showing a complex set of inverted repeats that make it challenging to assemble. b, Integrated Genomics Viewer view showing that only a small portion of SMN1 was included in v4.2.1 and that all technologies have challenges mapping in the region, but 10x Genomics and ultralong ONT reads support the variants called in the new CMRG benchmark. For the CMRG and v4.2.1 benchmarks, thick blue bars indicate regions included by each benchmark, and orange and light blue lines indicate positions of homozygous and heterozygous benchmark variants, respectively. CMRG variants were called from the trio-based hifiasm assembly of paternal and maternal haplotypes (Hifiasm-pat and Hifiasm-mat, respectively). Coverage tracks are shown for 60× PCR-free Illumina 2 × 150-bp reads (Illumina-60×), 10x Genomics-linked reads (10x Genomics), 50× PacBio HiFi 15- and 20-kbp reads (PB HiFi-50×) and 60× ONT ultralong reads (ONT-UL-60×).

Our benchmark provides a way to measure the accuracy of variant calls in gene conversion-like events and SVs. For example, there is a 4.5-kb gene conversion-like event between RHCE (Supplementary Fig. 3) and RHD (Supplementary Fig. 4; ref. 22) and a similar event between SIGLEC16 and SIGLEC11 (ref. 23).
The benchmark includes other substantially more challenging SVs as well, including a 16,946-bp insertion in a variable number tandem repeat 24 in an intron of the gene GPI (Supplementary Fig. 5a) and two insertions in the segmentally duplicated gene GTF2IRD2 (Supplementary Fig. 5b; more in Supplementary Note 2).

Resolving false gene duplications in the reference

The CMRG benchmark identified variant-calling errors due to false duplications in GRCh37 or GRCh38 in several medically relevant genes. Previous work described true highly homologous genes inside segmental duplications in GRCh37 and GRCh38 that give rise to read mapping issues 7, 8; our CMRG benchmark, however, shows that several of these highly homologous genes are in fact false duplications in the reference. For example, PacBio HiFi and Illumina short-read coverage is low and missing one or both haplotypes for CBS, CRYAA and KCNE1 on GRCh38, because reads incorrectly align to distant incorrect copies of these genes (CBSL, CRYAA2 and KCNE1B, respectively; Fig. 4 and Supplementary Figs. 6 and 7). Clarification of these regions is important, such as for CBS, deficiency of which is associated with homocystinuria (a disorder associated with thromboembolic events), skeletal abnormalities and intellectual disability. Most cases of homocystinuria are detected by newborn screening, and subsequent molecular evaluation can help confirm the diagnosis and provide recurrence risk information for families of affected individuals. H19, a noncoding gene on chromosome 11 that is frequently evaluated in cases of Beckwith–Wiedemann syndrome 25, is similarly affected by a false duplication on GRCh38. The additional copies of CBS, U2AF1, CRYAA and KCNE1 in GRCh38 do not occur in HG002, and the Genome Reference Consortium and Telomere-to-Telomere Consortium recently determined that several regions on the p arm of chromosome 21, as well as several other regions in GRCh38, were incorrectly duplicated 26, 27.
In support of this, the gnomAD v2 database has normal coverage and variants called in these genes for GRCh37, but gnomAD v3 has very low coverage and few variants for GRCh38. A companion manuscript from the Telomere-to-Telomere Consortium demonstrates that the new T2T-CHM13 reference corrects these and additional false duplications affecting 1.2 Mbp and 74 genes 27.

Fig. 4: The benchmark resolves the gene CBS, which has a highly homologous gene (CBSL) due to a false duplication in GRCh38 that is not in HG002 or GRCh37. a, The duplication in GRCh38 causes Illumina and PacBio HiFi reads from one haplotype to mismap to CBSL instead of CBS. The ultralong ONT reads, 10x Genomics-linked reads and assembled PacBio HiFi contigs map properly to this region for both haplotypes, because they contain sufficient flanking sequence. When the falsely duplicated sequence is masked using our new version of GRCh38, variant calls from a standard Illumina–GATK pipeline (ILMN-GATK w/Mask VCF) are completely concordant with the new benchmark. Pink shaded box indicates CMRG benchmark regions; only variants within the benchmark regions are included in the benchmark. b, Comparison of variant accuracy for GRCh38 before and after masking false duplications on chromosome 21 using variant callsets from three technologies: HiFi-minimap2-DeepVariant (HiFi-minimap2-DV), Illumina-BWA-MEM-GATK, and ONT-minimap2-clair2. The new benchmark demonstrates decreases in false-negative and false-positive errors for three callsets in the falsely duplicated genes CBS, CRYAA and KCNE1 when mapping to the masked GRCh38.

We worked with the Genome Reference Consortium to produce a new masking file that changes the sequence in the falsely duplicated regions of chromosome 21 on GRCh38 to N’s. Masking in this way maintains the same coordinates but dramatically improves variant calling in the genes.
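The effect of such coordinate-preserving masking can be sketched in a few lines of Python (toy sequence and regions, not the actual GRCh38 masking file):

```python
# Sketch: replace falsely duplicated regions with N's so reads can no longer
# mismap there, while every base keeps its original coordinate.
def mask_regions(sequence, regions):
    """Replace each 0-based half-open (start, end) region with N's."""
    seq = list(sequence)
    for start, end in regions:
        seq[start:end] = "N" * (end - start)
    return "".join(seq)

chrom = "ACGTACGTACGTACGT"
masked = mask_regions(chrom, [(4, 8), (12, 16)])
print(masked)   # sequence length, and thus every coordinate, is unchanged
```

Because the masked reference has exactly the same length and coordinates, existing annotations, BED files and pipelines continue to work unmodified.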
Previous work demonstrated that variant calls could be recovered even from short reads by masking additional copies of highly homologous ‘camouflaged’ gene sequences, although this approach does not determine in which gene copy the variants occurred 8 , 28 . In our case, we are masking additional gene copies that are incorrect in the reference, enabling unambiguous variant calling in the correct genes. We show that masking the false duplications substantially improves recall and precision of variant calls in these genes for Illumina, PacBio HiFi and ONT mapping-based methods, increasing sensitivity of a common pipeline using Illumina, Burrows–Wheeler Aligner maximal exact match (BWA-MEM) and the Genome Analysis Toolkit (GATK) from 8% to 100% (Fig. 4 ), without increasing errors in other regions (Supplementary Fig. 8 ). Our benchmark also identified some falsely duplicated genes in GRCh37, specifically the medically relevant genes MRC1 and CNR2 . Both short and long reads map correctly to MRC1 in GRCh38, but many reads incorrectly align to a false additional copy of the gene in GRCh37. Similarly, CNR2 is annotated on GRCh37 to include a large region downstream that has an erroneous additional unplaced contig on chromosome 1 (chr1_gl000191_random) that interferes with mapping to a 106-kb region that includes part of CNR2 as well as other genes ( PNRC2 and SRSF10 ) not included in our medical gene list. Our benchmark correctly resolves all of these genes on both GRCh37 and GRCh38, because the assembled contigs align correctly for each haplotype. The CMRG benchmark identified false positives that were eliminated by adding hs37d5 decoy sequence to GRCh37, but it also identified false negatives caused by the decoy. The hs37d5 decoy was created from assembled sequences not in the GRCh37 reference and was used in phase 2 of the 1000 Genomes Project to remove some false positives due to mismapped reads from these sequences 29 . 
To evaluate the impact of the decoy on variant call accuracy in our CMRGs, we benchmarked HG002 Illumina–BWA-MEM–GATK calls against our benchmark with and without adding the hs37d5 decoy sequence to the GRCh37 reference. Using the decoy eliminated 1,272 false-positive SNVs and INDELs in the medical gene benchmark, including 1,191 in KMT2C, 15 in MUC5B and the remainder in clusters of false positives in long interspersed nuclear elements, short interspersed nuclear elements and long terminal repeats in other genes. However, using the decoy sequence also caused 78 SNV and INDEL false negatives, notably 52 in CYP4F12 and 18 in LMF1 due to falsely duplicating parts of these genes. Therefore, while the hs37d5 decoy improves overall performance of variant calling, it can cause some false negatives in medically relevant genes similar to the false duplications in the primary assemblies discussed above. A potential solution may be to mask the falsely duplicated portions of the hs37d5 decoy similar to the masking of false duplications in GRCh38.

Benchmark reliably identifies variant calling errors

We evaluated the CMRG small variant benchmark by comparing seven variant callsets from short- and long-read technologies and a variety of mapping and assembly-based variant calling methods. The goal of this curation process is to verify that the CMRG benchmark reliably identifies false positives and false negatives across sequencing technologies and variant calling methods. Manual curation of a random subset of 20 false positives, 20 false negatives and 20 genotyping errors from each callset (split evenly between GRCh37 and GRCh38 and between SNVs and INDELs) demonstrated that most types of discrepancies were errors in each callset (Supplementary Fig. 9). However, the majority of INDEL differences were identified as errors in the benchmark for two callsets, and curation identified 215 small regions with errors in the benchmark.
These errors included missing haplotypes (particularly heterozygous INDELs in otherwise homozygous regions) and errors due to noise in the HiFi data in very long homopolymers, as detailed in Supplementary Note 3. We also excluded 33 errors found in manual curation of complex small variants in tandem repeats (Supplementary Note 3), such as MUC5B in Supplementary Fig. 10. To more completely exclude these errors in the CMRG benchmark, we also curated all of the false positives, false negatives and genotyping errors that were in at least half of the callsets on GRCh37 or GRCh38. We found that 44 of 50 and 59 of 63 errors identified by the evaluation on GRCh37 and GRCh38, respectively, were excluded by curation of the common false positives and false negatives. After excluding these errors, v1.00 accurately identifies errors for both SNVs and INDELs. We have included our full curation results in Supplementary Data 5, which gives coordinates of common errors on both GRCh37 and GRCh38. This table can be used as a resource for investigating false positives and false negatives identified in a user’s query callset, as we provide notes about the evidence for the benchmark at each common false-positive or false-negative site. We evaluated the CMRG SV benchmark by comparing four short- and long-read-based callsets, finding that the benchmark reliably identified false positives and false negatives across all four callsets. Upon manual curation, only two sites were identified as problematic due to different representations that current benchmarking tools could not reconcile. We also found that the benchmarking statistics were sensitive to benchmarking tool parameters, particularly for duplications (Supplementary Note 3). We further confirmed that the 50 SVs ≥500 bp were all supported by Bionano Genomics (Bionano) optical mapping-based SV calling.
From the manual curation of common false positives, false negatives and genotyping errors, we also identified some categories of variants in which the benchmark correctly identified errors in the majority of callsets: (1) clusters of false negatives and genotyping errors in the genes that are falsely duplicated in GRCh37 (MRC1 and part of CNR2) and GRCh38 (CBS, CRYAA, KCNE1 and H19); and (2) clusters of false positives and genotyping errors due to mismapped reads in the parts of KMT2C that are duplicated in HG002 relative to GRCh37 and GRCh38, which are responsible for 277 of the 386 false positives in the HiFi DeepVariant callset (Supplementary Fig. 11). We also determined that the benchmark correctly identified false negatives across technologies, but particularly short read-based methods, in segmental duplications like SMN1 and NCF1, and in gene conversion-like events in RHCE, SIGLEC16 and GTF2IRD2. In addition to previously developed stratifications for difficult regions, we developed new stratifications for falsely duplicated genes, genes with large duplications and complex variants in tandem repeats, which we have made available in the GIAB v3.0 stratifications (Supplementary Note 4). We further confirmed 225 of 226 variants across 10 genes in segmental duplications that were covered confidently by an orthogonal long-range PCR and Sanger sequencing method (Supplementary Table 1 and Supplementary Data 6). A total of 127 other variants that we attempted to confirm did not have coverage or had noisy sequencing, and only one variant (a homozygous SNV at GRCh38 chr16:2113578 in PKD1) was contradicted by long-range PCR but clearly supported by Illumina, 10x Genomics, PacBio HiFi and ONT (Supplementary Fig. 12). To demonstrate how the CMRG benchmark can identify new types of errors relative to v4.2.1, we benchmarked a stringently filtered Illumina–BWA-MEM–GATK callset versus both the v4.2.1 benchmark and the medically relevant gene benchmark.
Figure 5 shows that the fraction not assessed decreases and the false-negative rate increases substantially overall, but particularly for difficult variants. For SNVs, these difficult variants fall primarily in segmental duplications and low-mappability regions, while for INDELs, the CMRG benchmark also identifies additional false negatives in other regions excluded from the ‘not in all difficult’ stratification, such as tandem repeats and homopolymers. Fig. 5: The new CMRG small variant benchmark includes more challenging variants and identifies more false negatives in a standard short-read callset (Illumina–BWA-MEM–GATK) than the previous v4.2.1 benchmark in these challenging genes. While the false-negative rate (circles) is similar in easier regions (purple ‘Not in all difficult’ points), the false-negative rate is much higher overall (green ‘All CMRG benchmark regions’ points). The fraction of variants excluded from the benchmark regions (triangles) is much higher for the v4.2.1 benchmark in all stratifications. Challenging regions from the v3.0 GIAB stratifications shown here include complex variants in tandem repeats (TR) longer than 100 bp, segmental duplications (SegDup), and regions difficult to map with 100 bp reads (LowMap). This information is also presented in ‘summary stats NYGC’ in Supplementary Data 4 . 
Remaining challenges across medically relevant genes

While the CMRG benchmark covers many new, challenging genes, 122 autosomal genes covered <90% by v4.2.1 are still excluded from the CMRG benchmark (110 on GRCh37 and 100 on GRCh38) for multiple reasons detailed in Supplementary Data 7; when progressively categorizing excluded genes on GRCh38, (1) 20 genes were affected by gaps in the reference; (2) 38 genes had evidence of duplications in HG002 relative to GRCh38; (3) six genes were resolved but excluded due to being in the MHC region 14; (4) three genes were resolved on GRCh38 but not GRCh37, as we required genes to be resolved on both references; (5) 19 genes were >90% included by the dip.bed but had multiple contigs or a break in the assembly–assembly alignment; (6) seven genes had a large deletion of part or all of the gene on one haplotype; (7) four genes had breaks or false duplications in the hifiasm assembly; (8) two genes were in the structurally variable immunoglobulin locus; and (9) one gene (TNNT3) had a structural error in GRCh38 (described in ref. 27). As examples, LPA and CR1 were not included in the benchmark due to very large insertions and deletions, respectively, that cause a break in contig alignments, although the hifiasm assembly resolved both haplotypes (Supplementary Figs. 13 and 14). LPA contains multiple tandemly duplicated copies of the same region (i.e., kringle IV repeats with a unit length of ~5,550 bp) that are associated with cardiovascular disease 30. The HG002 hifiasm assembly resolved the entire LPA region, and the 44.1-kb and 99.9-kb expansions of the kringle IV repeats for the maternal and paternal haplotypes, respectively, were consistent with the insertions predicted by an independent trio-phased Bionano optical mapping assembly (45.0 kb and 101.2 kb).
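One reason such repeat expansions are hard to benchmark is that the same event admits several equivalent VCF representations. A toy Python example (hypothetical 4-bp repeat unit, not the actual LPA sequence) shows two differently anchored insertion records that produce the identical haplotype:

```python
# Two VCF-style representations of the same tandem-repeat expansion:
# a hypothetical reference with unit "ACGT" gaining one extra copy.
ref_genome = "TTACGTGG"   # unit "ACGT" occupies 1-based positions 3-6

def apply_variant(seq, pos, ref_allele, alt_allele):
    """Apply a VCF-style REF->ALT replacement at 1-based `pos`."""
    i = pos - 1
    assert seq[i:i + len(ref_allele)] == ref_allele
    return seq[:i] + alt_allele + seq[i + len(ref_allele):]

# Representation 1: insertion anchored at the base before the repeat.
hap1 = apply_variant(ref_genome, 2, "T", "TACGT")
# Representation 2: insertion anchored at the last base of the repeat.
hap2 = apply_variant(ref_genome, 6, "T", "TACGT")
print(hap1 == hap2, hap1)   # both yield the same expanded haplotype
```

Naive record-by-record comparison would call these two representations discordant even though the resulting sequences are identical, which is why sequence-level comparison against the hifiasm contigs is more robust for loci like LPA.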
This complex, large expansion of the kringle IV repeats can be represented in many different ways in a variant call format (VCF) file with different levels of precision (e.g., as a large insertion, a tandem duplication or a CNV, and the copies may differ or include small variants). Existing benchmarking tools cannot compare these different representations robustly, partly owing to limitations of the VCF 31. To benchmark assemblies of this gene in HG002, the sequences could be compared directly to the hifiasm contigs, which we have annotated for LPA and other genes using LiftOff 32. CR1, a gene implicated in Alzheimer’s disease 8, is similarly resolved by hifiasm and contains an 18.5-kb homozygous deletion consistent with Bionano, but this deletion causes a break in the dipcall/minimap2 alignment (Supplementary Fig. 14). Other genes are excluded from the benchmark because they have additional copies in HG002, but not in GRCh38. For example, KCNJ18 is excluded because GRCh37 and GRCh38 are missing a copy of this gene (KCNJ17), so additional contigs from KCNJ17 align to KCNJ18 (ref. 27). Also, genes in the KIR region are highly variable, and CNVs are observed frequently in the population, with 35 alternate loci and 15 novel patches in GRCh38.p13. Hifiasm resolves the paternal allele in a single contig, but the maternal allele is split into three contigs in the KIR region, including a tandem duplication of the gene KIR2DL1 (Supplementary Fig. 15). There is no standard way to represent or benchmark small variants within duplicated regions, so we excluded KCNJ18, KIR2DL1 and other duplicated genes such as PRSS1 and DUX4 from our benchmarks (Supplementary Figs. 16 and 17). More information about these complex genes is in Supplementary Note 5.

Discussion

In this work, we provide highly curated benchmarks for both phased small variants and SVs covering 273 medically relevant and challenging genes.
Parts or all of these genes are often excluded from standard targeted, exon or whole-genome sequencing or analysis. Still, the impact of these genes is well documented across multiple diseases and studies. Our benchmark will pave the way to obtain comprehensive insights into these highly relevant genes to further expand medical diagnoses and potentially improve understanding of the heritability for multiple diseases 33 . We give specific examples of challenges with calling variants in these genes, including mapping challenges for different technologies and identifying genes for which GRCh37 or GRCh38 is a better reference. This benchmark was designed to be complementary to previous mapping-based benchmarks. Some difficult genes, such as PMS2 , are resolved well by v4.2.1 in HG002 but not by the assembly. Some difficult genes, such as the HLA family 14 or GBA1 / GBA2 , are resolved well by the assembly but not included in the benchmark because they were well resolved previously. Still, a few challenging regions remain excluded from our benchmark or are not resolvable despite the availability of highly accurate long-read data. Some genes include variable long tandem repeats (e.g., LPA and CR1 ), which are resolved in our assembly, but the large >20-kb changes in length of the alleles are currently too complex for standard benchmarking methodologies. This clearly shows the need for more advanced methods, such as graph representations of haplotypes or alleles. In addition, a few genes (e.g., SMN2 ) escaped a comprehensive and accurate assessment even with current long-read-based assembly methods, highlighting the need for further development of sequencing and bioinformatics methods. 
Furthermore, our extensive curation of the benchmark helped identify limitations of the current haplotype-resolved whole-genome assembly methods, paving the way for future whole-genome assembly-based benchmarks: (1) the assembly often misses one allele for heterozygous INDELs in highly homozygous regions; (2) some consensus errors exist, causing errors in a single read to be called as variants; and (3) if both haplotypes of the assembly do not completely traverse segmental duplications, then the assembly is less reliable (e.g., SMN2 in HG002), although it can sometimes be correct (e.g., SMN1 in HG002). Some genes also may be resolvable in HG002 but not in other genomes, or vice versa, due to structural or copy-number variability in the population, so benchmarks for additional samples will be needed. By basing this benchmark on a haplotype-resolved whole-genome assembly, we were able to identify biases in mapping-based methods due to errors in the GRCh37 and GRCh38 references. While previous studies concluded that variant calling performance is generally better on GRCh38 (refs. 34 , 35 ), our benchmark demonstrates that variant calls in some genes are less accurate on GRCh38 than GRCh37. Another group recently independently identified the importance of masking the additional copy of one gene ( U2AF1/U2AF1L5 ) for cancer research 36 . Our results identify that false duplications cause many of the discrepancies found recently between exome variant calls on GRCh37 and GRCh38 (ref. 37 ). We produced similar benchmarks for both versions of the reference so that scientists can better understand the strengths and weaknesses of each reference and test modifications to the reference, such as the hs37d5 decoy for GRCh37 or the masked GRCh38 we propose here. During this process, we also identified and resolved variant calling errors due to several false duplications in these medically relevant genes in GRCh38 on chromosome 21. 
Overall, 11 genes are impacted by these false duplications, including three medically relevant genes from our list (CBS, KCNE1 and CRYAA). As a solution to this problem, we provide a GRCh38 reference that masks the erroneous copy of the duplicated genes. We use our benchmark to show that this reference dramatically improves read mapping and variant calling in these genes across almost all sequencing technologies. These false duplications exist only in GRCh38 and not in other human reference genome versions or in the broader population. A new telomere-to-telomere reference genome eliminates these false duplications and fixes collapsed duplications that prevented us from creating a benchmark for medically relevant genes like KCNJ18 and MAP2K3, and a similar CMRG benchmark for HG002 is now available on the new reference 27. Future work will include using haplotype-resolved assemblies to form benchmarks for more genic and nongenic regions of the genome, eventually using genomes that are assembled telomere to telomere. Our approach to form benchmarks from a haplotype-resolved whole-genome assembly is a prototype for future comprehensive benchmarks covering the whole genome combining different types of small variants and SVs. Overall, this benchmark enables a more comprehensive assessment of sequencing strategies, analytical methodologies and other developments for challenging genomic variants and regions relevant to medical research 5, 38, paving the way for improved clinical diagnoses.

Methods

Sample availability

For the 10x Genomics and ONT sequencing and Bionano mapping, the GM24385 (RRID:CVCL_1C78) cell line was obtained from the Coriell Institute for Medical Research National Institute for General Medical Sciences cell line repository. For the Illumina and PacBio sequencing, National Institute of Standards and Technology (NIST) RM 8391 DNA was used, which was prepared from a large batch of GM24385 to control for differences arising during cell growth.
For binning reads into paternal and maternal haplotypes, Illumina sequencing of DNA from NIST RM 8392 (HG002-HG004) was used. DNA was extracted from cell lines publicly available as GM24149 (RRID:CVCL_1C54) and GM24143 (RRID:CVCL_1C48) at the Coriell Institute for Medical Research National Institute for General Medical Sciences cell line repository.

Medical genes

We used genes from a variety of databases and sources to compile a list of medically relevant genes. The largest set of genes we use is from Supplementary Table 13 of Mandelker et al., which was a capture of the OMIM, HGMD and ClinVar databases gathered in 2012. Further, we used the COSMIC cancer gene census, which is a list of 723 genes. Supplementary Data 1 also contains additional details about the higher-priority list of 942 genes in the union of ClinGen genes with ‘definitive’, ‘strong’ or ‘moderate’ evidence (719 genes), National Comprehensive Cancer Network/European Society for Medical Oncology (hereditary cancer syndromes) (49 genes), American College of Medical Genetics Secondary Findings 2.0 (commonly referred to as the ACMG59, for which reporting of secondary or incidental findings is recommended) (59 genes), Clinical Pharmacogenetics Implementation Consortium pharmacogenetics genes (127 genes), and the Counsyl expanded carrier screening list (235 genes), which includes recommended reproductive medicine genes as a small subset.

Medical gene coordinate discovery

We used coordinates from ENSEMBL ( ) and then downloaded ‘chromosome’, ‘start’, ‘end’, ‘gene_name’ and ‘stable_ID’ using bioMart for GRCh38 and GRCh37. We looked up the collection of medical genes and found the coordinates for each in GRCh38 and GRCh37, with the full lists (GRCh3x_mrg_full_gene.bed) and 273 genes included in the CMRG benchmark (GRCh3x_mrg_bench_gene.bed) available under .
Calculating overlap with GIAB HG002 v4.2.1 small variant benchmark

We used bedtools 39 to intersect the ENSEMBL coordinates for each gene with the v4.2.1 small variant benchmark regions browser extensible data (BED) files. We calculated the number of bases in the intersection and compared that to the total number of bases in each gene. We chose 90% as the threshold for the purpose of keeping manual curation tractable over the set of the genes.

Haplotype-resolved assembly using PacBio HiFi reads with hifiasm using trio-binning

We used the haplotype-resolved assembly produced by hifiasm v0.11 using 34× coverage (two 15-kb and two 20-kb libraries) by PacBio HiFi Sequel II System with Chemistry 2.0 reads ( ) using k-mer information from parental Illumina short reads (30× 2 × 150-bp reads at ), described recently 2.

Calling variants relative to GRCh37 and GRCh38 using dipcall

We aligned the haplotype-resolved assembly of HG002 to GRCh37 and GRCh38 using minimap2 (ref. 40) through dipcall ( ) as is done in the NIST assembly benchmarking pipeline ( ). Dipcall generates variant calls using any nonreference support in regions that are ≥50 kb, with contigs having mapping quality ≥5. Dipcall also produces a BED that denotes confident regions that are covered by an alignment ≥50 kb, with contigs having mapping quality ≥5 and no other >10-kb alignments.

Benchmark development

We selected genes that had continuous haplotype coverage of the gene body, including the 20 kb on each side to account for robust alignments. In addition, each haplotype had to fully cover any segmental duplications in close proximity to or overlapping the extended gene regions. This also included complex SVs inside of the segmental duplications to be able to robustly identify SNVs and SVs subsequently.
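The dip.bed confident-region criterion described in the dipcall step above can be paraphrased as a filter over contig alignments. The following sketch (hypothetical alignment records, not dipcall's actual implementation) keeps alignments ≥50 kb with mapping quality ≥5 that have no competing >10-kb alignment overlapping them:

```python
# Sketch of the dip.bed rule: >=50 kb, MAPQ >=5 alignments are confident
# only where no other >10 kb alignment overlaps them.
def overlaps(a, b):
    return a["start"] < b["end"] and b["start"] < a["end"]

def confident(alignments):
    primary = [a for a in alignments
               if a["len"] >= 50_000 and a["mapq"] >= 5]
    competing = [a for a in alignments if a["len"] > 10_000]
    return [a for a in primary
            if not any(b is not a and overlaps(a, b) for b in competing)]

alns = [
    {"start": 0,       "end": 60_000,  "len": 60_000, "mapq": 60},
    {"start": 100_000, "end": 160_000, "len": 60_000, "mapq": 60},
    {"start": 150_000, "end": 162_000, "len": 12_000, "mapq": 3},  # competing
]
print([(a["start"], a["end"]) for a in confident(alns)])
```

Here the second long alignment is dropped because a competing 12-kb alignment overlaps it, leaving only the first region as confident.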
We considered a gene to be fully resolved by the haplotype-resolved assembly if the dip.bed covered the gene along with 20 kb of flanking sequence to consider the PacBio HiFi read length as well as any overlapping segmental duplications. We chose these criteria to ensure that genes were resolved in regions with high-quality assembly. We then performed manual curation of the resolved genes and flanking sequence to understand overall characteristics of the candidate benchmark. We began initial evaluation against mapping-based callsets to understand the performance of the benchmark in these genes. We found that perfect homopolymers of >20 bp and imperfect homopolymers >20 bp accounted for a majority of false negatives and false positives for both SNVs and INDELs. Imperfect homopolymers are defined as stretches of one base that are interrupted by one different base in one or more locations, and each of the stretches of exact homopolymer bases has to be at least 4 bp (e.g., AAAAGAAAAAGAAAATAAAA). Manual curation of a random subset of these sites showed that in most instances, it was unclear whether the mapping-based callset or the assembly-based benchmark was correct. BED files for these homopolymers are available under . We excluded the following regions from the v0.02.03 small variant and v0.01 SV benchmark regions (the benchmark versions used in the evaluation): (1) one region identified manually as an erroneous insertion resulting from an issue with the method hifiasm v0.11 used to generate the consensus sequence; (2) genes in the MHC, as these were previously resolved by diploid assembly in the v4.2.1 benchmark 14 ; and (3) regions around variants identified as errors or unclear upon manual curation, as described below. 
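Under the definition above, an imperfect homopolymer can be detected with a small regular expression. This sketch (our paraphrase, not the GIAB implementation) flags runs longer than 20 bp built from exact stretches of at least four identical bases separated by single interrupting bases; the example string extends the document's example by one base so that it crosses the >20 bp threshold:

```python
import re

# Sketch: imperfect homopolymers are runs of one base, with each exact
# stretch >= 4 bp, interrupted by single different bases.
def imperfect_homopolymers(seq, min_len=21):
    """Yield (start, end) spans of imperfect homopolymers > 20 bp."""
    hits = []
    for base in "ACGT":
        # stretches of >=4 `base`, joined by exactly one non-`base` character
        pattern = f"{base}{{4,}}(?:[^{base}]{base}{{4,}})+"
        for m in re.finditer(pattern, seq):
            if m.end() - m.start() >= min_len:
                hits.append((m.start(), m.end()))
    return sorted(hits)

# 21-bp imperfect A-homopolymer embedded in flanking sequence.
print(imperfect_homopolymers("CG" + "AAAAGAAAAAGAAAATAAAAA" + "GC"))
```

The 20-bp example from the text (AAAAGAAAAAGAAAATAAAA) would not be flagged, since the exclusion applies only to homopolymers longer than 20 bp.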
For the small variant benchmark, we additionally excluded (1) SVs at least 50 bp in size and overlapping tandem repeats, because these cannot be compared robustly with small variant comparison tools; and (2) perfect and imperfect homopolymers of >20 bp plus 5 bp on each side. For the SV benchmark, we additionally excluded (1) tandem repeats that contain more than one variant at least 10 bp in size, because these complex variants can cause inaccurate comparisons with current benchmarking tools; and (2) INDELs 35–49 bp in size.

Benchmark evaluation

We used hap.py 41 with vcfeval to compare VCFs from a variety of sequencing technologies and variant calling methods to the GRCh37 and GRCh38 difficult medical gene small variant benchmark, with v3.0 GIAB/GA4GH stratifications under . We randomly selected 60 total sites for curation, with 30 selected from GRCh37 and 30 selected from GRCh38. Five SNVs and five INDELs were selected from each of these three categories: (1) false positives (variants in the comparison VCF but not the benchmark), (2) false negatives (variants in the benchmark but not the comparison VCF) and (3) genotype errors (variants appearing as both a false positive and a false negative using hap.py with vcfeval). This curation process will also help us to make further refinements, if needed, to the GIAB benchmark. For the small variant benchmark evaluation, we used seven VCFs 12 from short- and long-read technologies and a variety of mapping- and assembly-based variant calling methods: (1) Illumina–DRAGEN, (2) Illumina–NovaSeq–GATK4 (ref. 42 ), (3) Illumina–xAtlas 43 , (4) PacBio HiFi–GATK4, (5) an assembly based on ONT reads called with dipcall, (6) a union of three callsets (Illumina called with modified GATK, PacBio HiFi called with Longshot 44 v0.4.1 and ONT called with PEPPER–DeepVariant 45 ) and (7) Illumina, PacBio and ONT combined called with NeuSomatic 46 . We excluded errors identified upon curation, as described in Supplementary Note 3.
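The three curation categories reduce to a simple comparison of benchmark and query genotypes at a site. A schematic version (assuming variants have already been normalized and matched by position, which hap.py/vcfeval handle in practice) is:

```python
def classify_site(benchmark_gt, query_gt):
    """Label a matched site using the curation categories above. A genotype
    error is counted by hap.py with vcfeval as both a false positive and a
    false negative."""
    if benchmark_gt is None:
        return "FP" if query_gt is not None else "absent"
    if query_gt is None:
        return "FN"
    return "TP" if benchmark_gt == query_gt else "genotype_error"

print(classify_site("0/1", "1/1"))  # genotype_error
print(classify_site(None, "0/1"))   # FP
print(classify_site("0/1", None))   # FN
```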
In Supplementary Note 4, we also include performance metrics for the 53× 15-kb + 20-kb HiFi–DeepVariant v0.9 callset under .

Variant callsets used for evaluation

National Center for Biotechnology Information (NCBI) de novo assembly

The de novo assembly of HG002 was initially generated using NextDenovo2.2-beta.0 with ONT PromethION data ( ) and then polished with PacBio 15-kb and 20-kb circular consensus sequencing (or HiFi) reads ( ), followed by scaffolding with Hi-C data ( ). The scaffolded assembly was further polished with Illumina short reads ( ) twice using Pilon 47 and then phased with WhatsHap 48 . Finally, two VCF files (HG002_grch37_dipcall.vcf.gz and HG002_grch38_dipcall.vcf.gz) were generated based on the phased HG002 genome using dipcall with the GRCh37 and GRCh38 reference genomes, respectively ( ).

Illumina–DRAGEN

HG002 DNA was prepared using the Illumina DNA PCR-free library preparation kit. The library was sequenced on the NovaSeq 6000 platform with 151-bp paired-end reads. Illumina–DRAGEN 3.6.3 was used to align sequencing reads and call variants. SNPs and INDELs were filtered using the following hard filters: DRAGENHardSNP:snp: MQ < 30.0 || MQRankSum < −12.5 || ReadPosRankSum < −8.0; DRAGENHardINDEL:indel: ReadPosRankSum < −20.0.

Illumina, PacBio and ONT combined called with NeuSomatic

The predictions are based on the adaptation of the deep learning-based framework in NeuSomatic for germline variant calling. We used the network model trained for NeuSomatic’s submission for the PrecisionFDA Truth Challenge v2 (ref. 12 ). The model is trained on HG002 using GIAB benchmark set v4.2. For this callset, separate input channels were used for PacBio, Illumina and ONT reads.

DNAnexus: union of short read callsets from four callers

We downloaded HG002 WGS FASTQ reads from NIST’s FTP 49 , followed by downsampling of the reads to 35× coverage (47.52%) from an original coverage of 73.65× using seqtk.
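The seqtk downsampling fraction quoted above is simply the target coverage divided by the original coverage; a quick check that 35× from 73.65× matches the stated 47.52%:

```python
# fraction of reads seqtk needs to keep to reach the target coverage
target_cov, original_cov = 35.0, 73.65
fraction = target_cov / original_cov
print(f"{fraction:.2%}")  # 47.52%
```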
We called variants against both GRCh38 (GRCh38 primary contigs and decoy contigs, but no alternate contigs or HLA genes) and hs37d5 builds. We ran four different germline variant callers (see below) with their suggested default parameters, collected the union of all variants using a customized script where we recorded which caller(s) called the variant and their filter statuses in the INFO and FILTER fields, and generated the union VCF file for HG002. The customized script excludes variants (with the same chromosome, position, reference base and alternates) that have conflicting genotypes reported by different callers and only keeps variants that are reported with exactly the same genotype when more than one caller calls them. The variant calling pipelines used were (1) BWA-MEM and GATK4 (BWA-MEM 50 version 0.7.17-r1188 ( ) and GATK version gatk-4.1.4.1 ( )); (2) Parabricks_DeepVariant (Parabricks Pipelines DeepVariant v3.0.0_2 ( )); (3) Sentieon_DNAscope (Sentieon (DNAscope) version sentieon_release_201911 ( )); and (4) BWA-MEM and Strelka2 (BWA-MEM version 0.7.17-r1188 ( ) and Strelka2 version 2.9.10 ( )).

Illumina NovaSeq 2 × 250-bp data

The sample HG002 was sequenced on an Illumina NovaSeq 6000 instrument with 2 × 250-bp paired-end reads at the New York Genome Center. The libraries were prepped using a TruSeq DNA PCR-free library preparation kit. The raw reads were aligned to both the GRCh37 and GRCh38 human references. Alignment to the GRCh38 reference, marking duplicates and base quality recalibration were performed as outlined in the Centers for Common Disease Genomics functional equivalence paper 51 . Alignment to GRCh37 was performed using BWA-MEM 50 (v0.7.8), marking duplicates was performed using Picard (v1.83) and local INDEL realignment and base quality recalibration were performed using GATK 52 (v3.4-0). Variant calling was performed using GATK (v3.5), adhering to the best-practices recommendations from the GATK team.
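The genotype-concordance rule for the union callset can be sketched as follows. This is a simplified model of the customized script (which additionally tracks per-caller FILTER status), using toy calls keyed by (chrom, pos, ref, alt):

```python
from collections import defaultdict

def union_callsets(callsets):
    """callsets: {caller: {(chrom, pos, ref, alt): genotype}}.
    Returns {site: (genotype, supporting_callers)}, dropping any site where
    callers report conflicting genotypes, per the union rule above."""
    by_site = defaultdict(dict)
    for caller, calls in callsets.items():
        for site, gt in calls.items():
            by_site[site][caller] = gt
    union = {}
    for site, votes in by_site.items():
        genotypes = set(votes.values())
        if len(genotypes) == 1:            # all supporting callers agree
            union[site] = (genotypes.pop(), sorted(votes))
    return union

calls = {
    "gatk":    {("1", 100, "A", "G"): "0/1", ("1", 200, "C", "T"): "1/1"},
    "strelka": {("1", 100, "A", "G"): "0/1", ("1", 200, "C", "T"): "0/1"},
}
# Only the concordant site at 1:100 survives; 1:200 has conflicting genotypes.
print(sorted(union_callsets(calls)))
```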
Variant calling constituted generating a gVCF using HaplotypeCaller, genotyping using the GenotypeGVCFs subcommand and variant filtering using the VariantRecalibrator and ApplyRecalibration steps. A tranche cutoff of 99.8 was applied to SNP calls and 99.0 to INDELs to determine PASS variants (i.e., variants that are not filtered). The raw reads are available for download at the Sequence Read Archive at .

Small variants from Illumina, PacBio and ONT

For this submission, we combined data from three sequencing technologies to obtain a more sensitive VCF file. We used our in-house variant calling pipeline for the Illumina dataset. In short, BWA-MEM v0.7.15-r1140 was used to align reads to the GRCh37 or GRCh38 reference genome, and BAM files were processed with SAMtools 53 v1.3 and Picard v2.10.10. SNVs and INDELs were identified with HaplotypeCaller following the best-practices workflow recommendations for germline variant calling in GATK v3.8 (ref. 50 ). For both the PacBio and ONT datasets, we ran another pipeline using NanoPlot v1.27.0 for quality control, Filtlong v0.2.0 for filtering reads and minimap2 v2.17-r941 for alignment. Longshot v0.4.1 was used for variant calling for PacBio data and PEPPER–DeepVariant was used for ONT data. On the variant callsets, we filtered out variants by applying the following criteria: FILTER = PASS, QD (quality by depth) ≥2.0 and MQ (mapping quality) ≥50 for Illumina data; and FILTER = PASS and QUAL (quality) ≥150 for PacBio data. No filters were applied to ONT calls. Finally, we created a consensus VCF file by merging the single VCF files obtained from each of these three pipelines using the GATK CombineVariants tool.

SVs from Illumina (intersection callsets from five callers)

We called SVs on short-read Illumina data using five different SV callers: DELLY 54 v0.8.5, GRIDSS 55 v2.9.4, LUMPY 56 v0.3.1, Manta 57 v1.6.0 and Wham 58 v1.7.0.
The HG002 BAM file aligned to the GRCh37 or GRCh38 reference genome by BWA-MEM v0.7.15-r1140, with duplicates marked using Picard v2.10.10 and base quality scores recalibrated by GATK v3.8, was used to feed these SV callers, which were executed with recommended default parameters. LUMPY and Wham SV calls were genotyped using SVTyper v0.7.1. GRIDSS SV types were assigned with the simple-event-annotation R script included in the GRIDSS package. The resulting SV callsets were filtered based on the authors’ recommendations for each caller as follows: Manta (FILTER = PASS, INFO/PRECISE, FORMAT/PR ≥ 10), LUMPY (INFO/PRECISE, remove genotypes 0/0, QUAL ≥ 100, FORMAT/AO ≥ 7), DELLY (FILTER = PASS, INFO/PRECISE), GRIDSS (FILTER = PASS, INFO/PRECISE, QUAL ≥1,000, INFO/SVLEN <1 kb, remove DAC ENCODE regions) and Wham (INFO/SVLEN <2 kb, INFO/A >5, remove genotypes 0/0, INFO/CW[bnd] >0.2). The resulting VCF files from each caller were merged to create the intersection of variants using SURVIVOR 59 v1.0.7, containing variants >50 bp in size, with 1,000 bp as the distance parameter and without requiring any type specificity (all variant types are merged). In the intersection set, we retained calls supported by two or more callers.

SVs from ONT (merge callsets from two callers)

For these submissions, we built a custom pipeline to process the ONT HG002 dataset using NanoPlot 60 v1.27.0 for quality control, Filtlong v0.2.0 for filtering reads and minimap2 (refs. 40 , 60 ) v2.17-r941 for alignment to the GRCh37 or GRCh38 reference genome. SVs were called on the resulting BAM file using cuteSV v1.0.8 and Sniffles 61 v1.0.12. The resulting VCF files were filtered based on the default values suggested by each tool’s authors: cuteSV 62 (minimum read support of 10 reads, INFO/RE ≥10; this caller intrinsically filters by FILTER = PASS and INFO/PRECISE) and Sniffles (FILTER = PASS, INFO/PRECISE and minimum read support of 10 reads).
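The "supported by two or more callers" rule can be illustrated with a much-simplified, hypothetical stand-in for SURVIVOR's merge. Real SV merging also compares variant type and length; here calls are only clustered by position on the same chromosome within the 1,000-bp distance parameter:

```python
def merge_sv_calls(calls, max_dist=1000, min_support=2):
    """calls: list of (caller, chrom, pos). Clusters position-sorted calls on
    the same chromosome within max_dist bp of the previous call, then keeps
    clusters supported by at least min_support distinct callers."""
    calls = sorted(calls, key=lambda c: (c[1], c[2]))
    clusters, current = [], []
    for call in calls:
        if current and call[1] == current[-1][1] and call[2] - current[-1][2] <= max_dist:
            current.append(call)
        else:
            if current:
                clusters.append(current)
            current = [call]
    if current:
        clusters.append(current)
    return [c for c in clusters if len({caller for caller, *_ in c}) >= min_support]

calls = [("manta", "1", 5000), ("delly", "1", 5400), ("wham", "2", 9000)]
# The chr1 cluster has two supporting callers; the lone chr2 call is dropped.
print(len(merge_sv_calls(calls)))  # 1
```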
Finally, we created the VCF files by merging the single filtered VCF files using SURVIVOR 59 v1.0.7, containing variants >50 bp in size, with 1,000 bp as the distance parameter and without requiring any type specificity (all variant types are merged).

Remapping variants between GRCh38 and GRCh37

To remap curated variant locations between GRCh38 and GRCh37, we used the NCBI Remap tool. For variants that remapped in the first pass, we used the first-pass location. For variants that did not remap in the first pass, all remapped in the second pass, and we used the second-pass location.

Masking false duplications on chromosome 21 of GRCh38

We worked with the Genome Reference Consortium (GRC) to develop a list of regions in GRCh38 that could be masked without changing coordinates or harming variant calling, because they were erroneously duplicated sequences or contaminations. The BED file with these regions can be found at . To create the masked reference, we started with the GRCh38 reference with no alternate loci or decoy from . To generate the masked GRCh38 (i.e., replacing the duplicated and contaminated reference sequence with N’s), we used the following Bedtools ( ) command: maskFastaFromBed -fi GCA_000001405.15_GRCh38_no_alt_analysis_set.fasta -bed GCA_000001405.15_GRCh38_GRC_exclusions.bed -fo GCA_000001405.15_GRCh38_no_alt_analysis_set_maskedGRC_exclusions.fasta. To generate the v2 masked GRCh38, we ran the following Bedtools ( ) command: maskFastaFromBed -fi GCA_000001405.15_GRCh38_no_alt_analysis_set.fasta -bed GCA_000001405.15_GRCh38_GRC_exclusions_T2Tv2.bed -fo GCA_000001405.15_GRCh38_no_alt_analysis_set_maskedGRC_exclusions.fasta. This uses the BED file GCA_000001405.15_GRCh38_GRC_exclusionsv2.bed generated by the Telomere-to-Telomere Consortium variants team to mask false duplications, located under , which also contains the new masked references and other references used in this work.
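The masking step itself is straightforward: every BED interval (0-based, half-open, as bedtools uses) is overwritten with N's so that all downstream coordinates are unchanged. A toy equivalent of the maskFastaFromBed commands above, applied to a single short sequence:

```python
def mask_fasta(seq, bed_regions):
    """Replace each (start, end) BED region of seq with N's, preserving
    length and therefore coordinates, as maskFastaFromBed does."""
    seq = list(seq)
    for start, end in bed_regions:
        seq[start:end] = "N" * (end - start)
    return "".join(seq)

print(mask_fasta("ACGTACGTAC", [(2, 5)]))  # ACNNNCGTAC
```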
Evaluation of GRCh38 masked genome improvement

For short reads, a common whole-genome resequencing analysis pipeline was used to produce variant call files for the HG002 sample in VCF and gVCF formats. The applications and parameters used in the analysis pipeline were derived from best practices for Illumina short-read whole-genome resequencing analysis developed for the Centers for Common Disease Genomics project 51 . The analysis pipeline consists of the following high-level steps: sequence alignment to the reference genome using BWA-MEM, duplicate read marking using Picard Tools MarkDuplicates, base quality score recalibration using GATK BaseRecalibrator and variant calling using GATK HaplotypeCaller. This analysis pipeline was run twice on a set of paired-end HG002 FASTQs with 35× coverage as input, with the pipeline runs differing only by the reference genome used during the alignment step. The first run used a version of the GRCh38 reference genome prepared without decoy or alternate haplotype contigs. The second run used a version of the GRCh38 reference genome identical to that used in the first run, except that five regions in chromosome 21 and the entire contig chrUn_KI270752v1 were masked with N’s, as described above. The commands executed by the analysis pipeline runs are in Supplementary Data 8 and Data 9 , which correspond to the runs using the unmasked and masked GRCh38 reference genomes, respectively. The following versions of applications and resources were used in the analysis pipeline: BWA v0.7.15, GATK v3.6, Java v1.8.0_74 (OpenJDK), Picard Tools v2.6.0, Sambamba 63 v0.6.7, Samblaster 64 v0.1.24, Samtools v1.9, dbSNP Build 138 on GRCh38 and known INDELs from Mills and the 1000 Genomes Project on GRCh38. For PacBio long reads, we used a 35× 15-kb + 20-kb HiFi dataset from the PrecisionFDA Truth Challenge v2 (ref. 12 ) aligned to the standard and masked GRCh38 references with pbmm2, and called variants with DeepVariant v1.0 (ref. 65 ).
HG002 haplotype-resolved assembly annotation

Liftoff 32 v1.4.0 was used with default parameters to lift over Ensembl v100 annotations from GRCh38 onto each haplotype assembly separately. The resulting GFF files are available at .

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The PacBio HiFi reads used to generate the hifiasm assembly for the benchmark are in the NCBI Sequence Read Archive with accession numbers SRR10382245, SRR10382244, SRR10382249, SRR10382248, SRR10382247 and SRR10382246. The v1.00 benchmark VCF and BED files, as well as Liftoff gene annotations, assembly–assembly alignments and variant calls, are available at , and as a DOI at . This is released as a separate benchmark from v4.2.1, because it includes a small fraction of the genome, it has different characteristics from the mapping-based v4.2.1 and v4.2.1 only includes small variants. Using v4.2.1 and the CMRG benchmarks as two separate benchmarks enables users to obtain broader performance metrics for most of the genome and for a small set of particularly challenging genes, respectively. The masked GRCh38 reference, recently updated to v2 with additional false duplications from the Telomere-to-Telomere Consortium, is under . We recommend using the v3.0 GA4GH/GIAB stratification BED files intended for use with hap.py when benchmarking, which are available at . These stratifications include BED files corresponding to false duplications and collapsed duplications in GRCh38. All data have no restrictions, as the HG002 sample has an open consent from the Personal Genome Project.

Code availability

Scripts used to develop the CMRG benchmark and generate figures and tables for the manuscript are available at . The previously developed assembly, which was used as the basis of this benchmark, was from hifiasm v0.11.
A variety of open source software was used for variant calling for the evaluations of the benchmark, including NextDenovo2.2-beta.0, DRAGEN 3.6.3, NeuSomatic’s submission for the PrecisionFDA Truth Challenge v2 (ref. 12 ), BWA-MEM 50 version 0.7.17-r1188 ( ) and GATK version gatk-4.1.4.1 ( ), Parabricks_DeepVariant (Parabricks Pipelines DeepVariant v3.0.0_2 ( )), Sentieon (DNAscope) version sentieon_release_201911 ( ), BWA-MEM and Strelka2 (BWA-MEM version 0.7.17-r1188 ( ) and Strelka2 version 2.9.10 ( )), BWA-MEM 50 (v0.7.8), Picard Tools ( ) (v1.83), GATK 52 (v3.4-0), GATK (v3.5), BWA-MEM v0.7.15-r1140, SAMtools 53 v1.3, Picard v2.10.10, GATK v3.8, DELLY 54 v0.8.5, GRIDSS 55 v2.9.4, LUMPY 56 v0.3.1, Manta 57 v1.6.0, Wham 58 v1.7.0, NanoPlot 60 v1.27.0, Filtlong v0.2.0, minimap2 (refs. 40 , 60 ) v2.17-r941, cuteSV v1.0.8, Sniffles 61 v1.0.12, SURVIVOR 59 v1.0.7, BWA v0.7.15, GATK v3.6, Java v1.8.0_74 (OpenJDK), Picard Tools v2.6.0, Sambamba 63 v0.6.7, Samblaster 64 v0.1.24, Samtools v1.9, DeepVariant v1.0 and Liftoff 32 v1.4.0.
The stretches of DNA that differ from person to person, called variants, are a major part of what makes us unique, but they can also put us at greater risk of disease. Although we can currently spell out between 80% and 90% of the millions of variants in the human genome, the remaining variants may hold clues for treating an array of diseases. Today the list of variants yet to be decoded has shrunk sizably. A team led by researchers at the National Institute of Standards and Technology (NIST), Baylor College of Medicine and DNAnexus has characterized over 20,000 variants in 273 genes of medical importance. In a study published in the journal Nature Biotechnology, the researchers applied both cutting-edge and long-standing DNA sequencing methods to decipher the genetic codes of the variants with a high degree of certainty. Using their results, they formulated benchmarks that will help labs and clinics sequence the genes more accurately, which is critical for gaining a better understanding of a host of diseases and eventually developing treatments. "Some of these genes, which have previously been very difficult to access, are suspected to have some connection to disease. Others have very clear clinical importance," said NIST biomedical engineer Justin Zook, a co-author of the study. "SMN1, for example, is a gene we characterized that is directly associated with spinal muscular atrophy, a rare but severe condition." The new benchmark is the latest produced by the Genome in a Bottle (GIAB) consortium, a NIST-hosted collaborative effort aimed at improving DNA sequencing technologies and making them practical for clinical application. These benchmarks are highly accurate sequences of DNA that clinics and research labs can use as a kind of answer key when testing their own sequencing methods. By sequencing the same genome used to develop a benchmark and then comparing their result to the benchmark itself, they can learn how well they can detect certain variants. 
Over the years, producing benchmarks for some regions of the genome has proved much more difficult than others. There are several reasons, many of which are tied to the general approach people use to sequence DNA. Rather than sequencing entire genomes in one go, DNA sequencing technologies read out sequences of small fractions of DNA first, and then attempt to place them together correctly, similar to a puzzle set. Reference genomes, the first of which was completed by the Human Genome Project, are nearly full genomes, stitched together from several people's DNA, that serve as guides for where to place the puzzle pieces. Since we share close to 99.9% of our genetic makeup as a species, any human genome will have mostly the same code as the reference genome. This means putting together a genome is a matter of laying out the pieces based on where they match up with the reference. Most variants fall in line using this process. Certain types throw a wrench into it. In particular, a type called a structural variant can create large differences between a genome and a reference genome. They range from 50 up to thousands of letters, or bases, and take many forms, including inserted, deleted or rearranged code. The more distinct a genome is from the reference, the harder it is to use the reference as a guide, Zook said. Structural variants could cause labs to unintentionally misplace chunks of DNA, and, in a clinical setting, that sort of error may cause a disease-linked variant to evade detection or a harmless variant to create alarm. On top of the human costs, treatments prescribed needlessly or too late due to these mismeasurements could establish the need for more expensive or invasive treatments for patients down the road, driving up health care costs drastically. However, recent advances in sequencing technology have cleared some of these obstacles. 
In the new study, the GIAB consortium applied the latest technology to decode some of the most elusive regions of the human genome with either a known or suspected connection to diseases. A key player in the effort was high fidelity, or HiFi, sequencing, which can sequence longer stretches of DNA. Common DNA sequencing methods can read about a hundred bases, but with HiFi sequencing, you can accurately read tens of thousands at a time, Zook said. "Instead of having a thousand-piece puzzle, where you have these little, tiny pieces that you have to put together, it's more like having a hundred-piece puzzle where you have bigger pieces that you can put together," Zook said. The team specifically employed HiFi with hifiasm, a state-of-the-art software tool that simultaneously solves another issue that has hampered DNA sequencing. Rather than reading both copies of an individual's chromosomes (one from mother, the other from father), previous methods sequenced an amalgamation of both, causing them to create errors and miss important details unique to each copy. With hifiasm, the researchers could independently spell out the separate copies of a person's genome. In the case of this study, the genome was from a single person, designated HG002, who had consented to publicizing their genetic code through the Personal Genome Project. The authors used these technologies in addition to previously established methods, leveraging the strengths of each at once. In the end, their approach allowed them to unearth the sequences of more than 20,000 variants—including dozens of the difficult-to-assess structural variants—across 273 genes, and did so with higher accuracy than could be achieved just using a single method. In addition to spinal muscular atrophy, the researchers characterized variants in genes connected to heart disease, diabetes, celiac disease and many other conditions. The team also unexpectedly encountered errors in the two reference genomes they were using. 
Some could cause sequencing methods to misread genes that cause serious conditions, including homocystinuria, which is associated with skeletal, cardiovascular and nervous system disorders and is usually detected through newborn screening, Zook said. With their newly benchmarked variants, the authors proposed corrections to the reference genomes they used. The benchmarks themselves are now publicly available for labs to put to good use. To do so, interested researchers or clinicians would first need to sequence HG002 samples, which can be accessed through the NIST Office of Reference Materials, and then check their results against the benchmarks. The study marks a significant step in the GIAB consortium's ongoing journey to improve the accuracy of DNA sequencing. But with thousands of important genes left to characterize containing variants that are difficult to pin down, the researchers aim to trudge on, applying the latest and greatest technologies as they become available.
10.1038/s41587-021-01158-1
Medicine
New opioid speeds up recovery without increasing pain sensitivity or risk of chronic pain
Amy K. Feehan et al. Morphine immunomodulation prolongs inflammatory and postoperative pain while the novel analgesic ZH853 accelerates recovery and protects against latent sensitization, Journal of Neuroinflammation (2019). DOI: 10.1186/s12974-019-1480-x Journal information: Journal of Neuroinflammation
http://dx.doi.org/10.1186/s12974-019-1480-x
https://medicalxpress.com/news/2019-05-opioid-recovery-pain-sensitivity-chronic.html
Abstract Background Numerous studies have identified the proinflammatory, pronociceptive effects of morphine which ultimately exacerbate pain. Our novel endomorphin analog ZH853 does not produce proinflammatory effects on its own and gives potent, long-lasting analgesia. This study investigates whether ZH853’s lack of interaction with the neuroimmune system reduces the risk of prolonged pain. Methods Adult male Sprague-Dawley rats were subjected to one of two treatment paradigms. Either (1) chronic pain followed by chronic treatment with morphine, ZH853 or vehicle, or (2) chronic drug administered prior to pain induction. Complete Freund’s adjuvant (CFA) was injected or paw incision surgery was performed on the left hind plantar foot pad. Drugs were administered through Alzet osmotic minipumps at a rate of 1 μl/h for 5 days at appropriate doses based on prior experiments. Animals were tested for mechanical allodynia and thermal hyperalgesia using von Frey filaments and the Hargreaves apparatus, respectively. Additionally, several gait parameters were measured using the CatWalk XT. When all animals had recovered from pain, 1 mg/kg of naltrexone was administered to test for development of latent sensitization (LS). A second set of animals was used to investigate dorsal horn inflammation following CFA and drug treatment. ANOVAs were used to assess differences between drug treatment groups. Results As expected, morphine increased and prolonged pain in all experiments compared to vehicle treatment. However, ZH853 treatment reduced the overall time spent in pain and the severity of pain scores compared to morphine. ZH853 not only reduced inflammation versus morphine treatment but also, in some instances, acted as an anti-inflammatory drug compared to vehicle treatment. Finally, ZH853 prevented the development of LS while vehicle- and morphine-treated animals showed robust relapse to pain. 
Conclusions ZH853 has a favorable side effect profile versus morphine and provides superior analgesia in a number of pain states. We now know that chronic use of this compound reduces time spent in a chronic pain state, the opposite of common opioids like morphine, and reduces the risk of LS, making ZH853 an excellent candidate for clinical development in humans for inflammatory and postoperative pain. Background In addition to the well-known negative side effects of currently used opioids, including abuse liability, respiratory depression, tolerance, and others, several recent studies have shown a lesser-known side effect: chronic exposure to morphine and other opioids can paradoxically exacerbate and prolong pain hypersensitivity. Known as the “two hit” hypothesis, it has been proposed that injury causes pro-inflammatory signaling in the central nervous system (CNS) which is exacerbated by morphine such that pain is ultimately more intense and longer-lasting [ 1 , 2 ]. This exacerbation can occur with either order of stimulus (injury then drug or vice versa) and can contribute to the transition from acute to chronic pain. The transition to chronic pain can also occur through another recently described mechanism known as “latent sensitization” (LS). LS is pathological pain following injury or inflammation that is “masked” during apparent recovery by endogenous opioid receptor function and “unmasked” by treatment with an opioid inverse agonist, such as naltrexone, or by stress [ 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 ]. In this study, we compare the effects of morphine and a novel opioid on both the paradoxical exacerbation of pain and on LS, with the goal of assessing the new analog for preventing the transition from acute to chronic pain. 
We recently characterized a novel endomorphin [ 11 ] analog (ZH853) that shows antinociceptive effects equivalent to or greater than morphine, but with reduced abuse liability, respiratory depression, tolerance, hyperalgesia, impairment of motor coordination, and glial activation [ 11 ]. We also characterized the effectiveness of acute administration of the analog in alleviating several forms of chronic pain, including neuropathic, inflammatory, postoperative, and visceral pain [ 12 ]. This lead compound has been selected for development for clinical application and will be tested here for the effects of short term (3–5 days) chronic administration on the recovery from inflammatory and postoperative pain and the presence of LS. In this study, drugs were chronically infused either before or after pain. Recovery from pain was measured to determine whether ZH853 causes detrimental effects on pain similar to those reported with morphine use in humans and animals [ 1 , 2 , 13 ]. The drug-then-pain paradigm has been studied by several groups in different pain states. Some chronic pain patients who use opioids therapeutically are known to experience worse pain following surgery than non-opioid users, and that pain cannot always be controlled satisfactorily [ 14 , 15 ]. In rodent studies, Horvath et al. [ 16 ] showed that 7 or 14 days of chronic morphine infusion increased and prolonged both allodynia and thermal hyperalgesia, and the Watkins lab found that animals treated with morphine prior to chronic constriction injury or complete Freund’s adjuvant (CFA) had more intense and prolonged pain that correlated with a number of upregulated proinflammatory markers [ 2 ]. The reverse paradigm (pain then drug) is perhaps more clinically relevant but has been explored to a lesser extent. Recently, Grace et al. 
[ 1 ] found that morphine given after the induction of neuropathic pain prolonged hypersensitivity versus vehicle treatment, and this result was consistent with the effects of morphine treatment on postoperative pain [ 17 ]. Both studies found that morphine plus pain increased proinflammatory signaling in the spinal dorsal horn and that blocking inflammation pharmacologically blocks the effect of morphine on prolonged pain. Comprehensively described by Taylor [ 8 ], Corder [ 4 ], and Marvizon [ 3 , 6 ], LS and endogenous dependence are best thought of as two sides of the same coin; the phenomena are inextricably linked and cause a susceptibility to unmasking pain through either stress or chemical inactivation of Mu-opioid receptor (MOR) constitutive activity. The initial pain insult activates pain pathways (“accelerator”) and descending pathways induce constitutive activity at the MOR (MOR CA ) to counteract pain (“brake”). Over a long period of recovery from the injury, the body becomes dependent on MOR CA , which causes increased sensitization of pain pathways (increasing the “brake” increases the “accelerator”). This interplay between increasing pain sensitization and increasing endogenous analgesia continues in a feed-forward cycle, establishing chronic, relapsing pain that is susceptible to stress [ 5 , 9 ] or inverse agonists at the MOR [ 4 , 6 ]. In fact, after an injury, descending pathways can be inhibited at the cervical spinal cord to induce a relapse to pain at the level of the lumbar spinal cord [ 3 ], indicating that descending modulation of opioid receptors and, to some extent, α2A adrenergic receptors is necessary to continually suppress pain [ 6 ]. LS has been demonstrated in humans [ 7 ], a subset of whom appear to be more prone to developing LS. As with inflammation, LS plays a role in the transition from acute to chronic pain. 
In the current study, we examined the effects of morphine and ZH853 on two pain models (CFA and paw incision) in two long-term paradigms (drug then pain, pain then drug) using both traditional test methods (e.g., von Frey, Hargreaves, Randall-Selitto) as well as a functional assay (CatWalk XT), which examine different aspects of pain [ 18 ]. To assess inflammation, spinal cords were collected from a subgroup of animals and processed for immunohistochemistry when behavioral differences were most profound. Finally, we tested the effects of morphine and ZH853 on LS induced by inflammatory (CFA) and postoperative (paw incision) pain. To do this, we used a standard 1 mg/kg injection of the MOR inverse agonist naltrexone (NTX) to shut off MOR constitutive activity, then measured mechanical allodynia, thermal hyperalgesia, and several CatWalk variables. The results show that morphine exacerbated and prolonged chronic pain and had no effect on LS, while ZH853 accelerated time to recovery and blocked LS. These two findings in different paradigms indicate that ZH853 has a superior profile for reducing the transition from acute to chronic pain. Methods Animals Male Sprague-Dawley rats (~ 59–67 days old and 250–300 g at the beginning of the experiments, Charles River, Wilmington, MA) were group housed in a 12-h light/dark cycle (6 am/6 pm) in a temperature- (68–72 °F) and humidity-controlled room with food and water provided ad libitum. All experiments were approved by the Tulane Institutional Animal Care and Use Committee and conducted according to the NIH Guide for the Care and Use of Laboratory Animals. All efforts were made to minimize animal suffering, and to reduce the number of animals used. No alternatives to in vivo techniques are available. Drugs ZH853 was synthesized as described previously [ 11 , 12 ] by Bachem (Torrance, CA). Morphine sulfate was supplied by NIDA. All drugs were dissolved in 20% polyethylene glycol (PEG) in sterile saline. 
Drug injections for rats were given as described previously through intrathecal (i.t.) catheters [ 11 , 12 , 19 ]. Before drug dosing, rats with i.t. catheters underwent lidocaine testing to confirm successful placement of the catheter. After a 10 μl injection of lidocaine followed by 12 μl of streptokinase (to maintain catheter patency), rats with properly placed catheters develop rapid but transient bilateral paralysis and recover in fewer than 5 min. Rats that did not respond or did not recover were immediately euthanized. Osmotic minipumps (Alzet model 2001, Durect Corp, Cupertino, CA) filled with vehicle, morphine, or ZH853 and primed in 0.9% saline at 37 °C for 16 h were implanted subcutaneously (s.c.) and connected to a PE-8 (0.008″ I.D.) i.t. catheter. In the pain-then-drug experiment, bolus doses producing 80% of the E max for CFA pain, interpolated from dose–response curves in [ 12 ], were given (4.657 μg morphine, 0.06 μg ZH853), followed by infusion at 2.3285 μg/h of morphine and 0.02 μg/h of ZH853. The hourly dose was calculated as the bolus dose divided by the number of hours its effect lasted: 0.06 μg of ZH853 lasted 3 h, giving 0.02 μg/h, and 4.657 μg of morphine lasted 2 h, giving 2.3285 μg/h. Doses for postoperative pain were a bolus dose (1.932 μg morphine, 0.176 μg ZH853) followed by infusion at 1.932 μg/h of morphine and 0.07 μg/h of ZH853 for 3 days to comply with CDC guidelines for postoperative opioid use [ 20 ]. Calculated the same way, 0.176 μg of ZH853 lasted 2.5 h, giving 0.07 μg/h, and 1.932 μg of morphine lasted 1 h, so the bolus dose was also used as the hourly dose. For the drug-then-pain experiment, the pumps delivered 8× the ED50/h (2 μg/h morphine [ 21 ], and 0.056 μg/h ZH853) for 5 days as determined previously to be equianalgesic in the tail flick assay [ 11 ].
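The infusion-rate arithmetic above (hourly rate equals the bolus dose divided by the number of hours it lasted) can be sketched in a few lines. The doses are taken from the text, but the helper function itself is illustrative only, not part of the study protocol:

```python
def hourly_rate(bolus_ug, duration_h):
    """Continuous infusion rate (ug/h) from a bolus dose (ug) and its duration of effect (h)."""
    return bolus_ug / duration_h

# CFA (pain-then-drug) doses reported in the text
morphine_cfa = hourly_rate(4.657, 2)   # 2.3285 ug/h
zh853_cfa = hourly_rate(0.06, 3)       # 0.02 ug/h

# Paw-incision dose: 0.176 ug of ZH853 lasted 2.5 h (0.0704, reported as 0.07 ug/h)
zh853_pi = hourly_rate(0.176, 2.5)
```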
Drug solutions were coded and the experimenter was blind to treatment.

Adjuvant-induced inflammation

Hind paws were swabbed with a sterile 70% alcohol pad. As previously described [ 12 , 22 ], CFA (100 μl, s.c., Sigma, St. Louis, MO) was injected into the left hindpaw using a 27-gauge needle. As an internal control, the rat’s right hind paw was injected with 100 μl of sterile saline. Testing started 24 h after injection. Before testing, swelling in the left paw was measured with a Plethysmometer Paw Volume Meter (IITC Life Science Inc., Woodland Hills, CA). Animals were monitored for signs of infection or axotomy at the injection site.

Paw incision surgery

Hindpaws were swabbed with a sterile 70% alcohol pad. As previously described [ 12 , 23 ], a 1-cm longitudinal incision was made with a No. 11 blade through the skin and fascia on the plantar aspect of the left hindpaw beginning 0.5 cm from the end of the heel. The flexor muscle was elevated with forceps and incised longitudinally several times. The skin was closed with two 5–0 surgical sutures (Ethicon, Somerville, NJ). Behavioral testing started 1 h after wound closure.

Pain assessments

All animals were monitored for signs of axotomy, infection, or porphyrin staining, which would indicate extreme stress and exclude them from behavioral testing. All tests were started in the morning, beginning with the least noxious (CatWalk and von Frey) and followed by Hargreaves and Randall-Selitto testing with at least 20 min of acclimation between tests. Animals were acclimated for at least 30 min prior to von Frey testing at the beginning of the test period. Baseline measurements were conducted after the i.t. catheter was implanted. Animals were randomized into drug groups by an experimenter blinded to drug treatment such that average baselines for each group were as similar as possible across all baseline tests.
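The baseline-balanced randomization described above was done by hand, but the idea can be illustrated with a simple greedy assignment. This helper is hypothetical, not the study's actual procedure: animals are taken in descending order of baseline score and each is placed into the not-yet-full group with the lowest running total, which keeps group means close.

```python
def balanced_groups(baselines, n_groups):
    """Greedy assignment of animals (indexed by position in `baselines`)
    into n_groups equal-sized groups with similar mean baselines."""
    assert len(baselines) % n_groups == 0
    size = len(baselines) // n_groups
    groups = [[] for _ in range(n_groups)]
    # place the largest remaining baseline into the eligible group with the smallest sum
    for i in sorted(range(len(baselines)), key=lambda i: -baselines[i]):
        target = min((g for g in groups if len(g) < size),
                     key=lambda g: sum(baselines[j] for j in g))
        target.append(i)
    return groups
```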
All animals were used only once to prevent drug or testing experience from confounding the study and were drug- and test-naïve when the study started.

Mechanical allodynia

Mechanical allodynia was assessed using nylon von Frey filaments (Stoelting, Wood Dale, IL) according to the “up-down” algorithm described by Chaplan [ 24 ]. The apparatus and procedure are described elsewhere [ 12 ]. Briefly, rats were placed on wire mesh platforms in plexiglass boxes for 30 min of acclimation. Fibers of increasing stiffness were applied dorsally just lateral to the paw incision, pressed upward to cause a slight bend in the fiber and left in place for 8 s [ 25 ]. The experimenter was careful to avoid the toes and hairline of the paw. Withdrawal of the hind paw from the fiber was scored as a response. When no response was obtained, the next stiffest fiber in the series was applied to the same paw; if a response was obtained, a less stiff fiber was applied. Testing proceeded in this manner until four fibers had been applied after the first one causing a withdrawal response, allowing the estimation of the mechanical withdrawal threshold. Sensory thresholds were estimated as described previously [ 24 ]. This assay can detect mechanical thresholds as low as 0.02 g.

Thermal hyperalgesia

Withdrawal latency to heat was evaluated using the IITC Plantar Analgesia Meter (IITC Life Science, Inc.) to assess thermal hyperalgesia [ 12 , 26 ]. In this procedure, a radiant heat source was directed at the hind paws (intensity 70, cutoff 15 s) and latency to withdraw was recorded. A high intensity projector bulb (Osram 58–8007 8 V, 50 W) positioned 40 mm under a glass floor was projected through a 5 × 10 aperture in the top of a movable case. Once the rat withdrew the hindpaw, the heat source was turned off; latency to withdraw was recorded three times for each paw and averaged.
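The up-down testing rule described above can be simulated as follows. This is a simplified illustration: the filament series, response function, and stopping summary are hypothetical, and the study estimated thresholds with the Dixon/Chaplan formula rather than this sketch.

```python
def up_down(filaments, responds, start_index):
    """Simulate the von Frey up-down rule: a withdrawal response moves testing
    to the next weaker filament, no response to the next stiffer one, and
    testing stops four presentations after the first response.

    filaments: ascending list of filament forces (g)
    responds: callable force -> bool (True = paw withdrawal)
    Returns the list of (force, response) presentations.
    """
    i = start_index
    sequence = []
    after_first_response = 0
    seen_response = False
    while after_first_response < 4:
        force = filaments[i]
        r = responds(force)
        sequence.append((force, r))
        if seen_response:
            after_first_response += 1
        if r:
            seen_response = True
            i = max(i - 1, 0)          # step down to a weaker filament
        else:
            i = min(i + 1, len(filaments) - 1)  # step up to a stiffer one
    return sequence
```

With a deterministic "animal" that withdraws at 2 g or above and testing started at 1 g, the sequence oscillates around the threshold, which is the bracketing behavior the Dixon formula exploits.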
CatWalk XT

To measure functional impairment and recovery from inflammation, we used the CatWalk XT version 10.6 gait analysis system (Noldus Information Technology, Wageningen, The Netherlands), which has been described in detail elsewhere [ 27 , 28 , 29 , 30 , 31 ]. Briefly, paw prints are captured by a high-speed video camera positioned underneath a long narrow plexiglass-bottomed chamber. The rat is put in one end of the chamber and crosses it to reach a dark chamber that leads to the animal’s home cage. Contact with the plexiglass causes a distortion of light that the system then interprets to track and analyze paw prints. Three compliant runs were collected at each time point with a maximum run time of 5 s (average = 1.52 ± 0.03 s) and run variation of less than 60% (average = 28 ± 0.79%). These parameters were sufficient to produce smooth, consistent runs across the experiment. The CatWalk XT also has an automatic classification system to define all four paws. A researcher not involved in data analysis went through each compliant run to filter out false signals or errant classifications and exclude partial paw prints at the beginning and ending of each run. We investigated the following parameters:

a) Swing time is the duration of no contact with the glass plate per step cycle and has been shown to increase in pain conditions.

b) Swing speed is the rate in meters/second that a paw is not in contact with the glass plate.

c) Stand time is the duration of ground contact for a single paw. This variable will decrease with pain and is the counterpart to the swing phase.

d) Duty cycle expresses the stand as a percentage of a step cycle. Duty cycle = stand/(stand + swing) × 100%.

e) Single stance measures the amount of time that the left hind paw is contacting the glass in the absence of the right hind also touching the glass. This will decrease in pain conditions.

f) Paw print length, which is the length from the center toe to the farthest central point of the print.
Print length decreases in pain conditions.

g) Maximum contact intensity of a paw, which is an indirect measure of how much weight the animal bears on that paw. Weight bearing, and therefore contact intensity, decrease in pain states.

Naltrexone-precipitated unmasking of pain

Regardless of the order of intervention or pain type, all animals recovered to baseline, but dysregulation of the endogenous opioid system by morphine, a pain state, or both has been documented to “mask” the actual pain state [ 5 , 6 , 7 , 9 , 32 , 33 ]. By using naltrexone 1 mg/kg s.c., we probed whether drug, pain, or drug plus pain groups showed an unmasking of latent sensitization. Animals were given naltrexone and testing occurred 30–60 min after injection, at a time when naltrexone activity is stable. All testing techniques described in Marvizon 2015 [ 10 ] were followed as closely as possible except that animals were not tested over a time course.

Immunohistochemistry

When behavioral scores were most different, 25 days after CFA injection in the first paradigm (pain then drug) and 21 days after CFA injection in paradigm 2 (drug then pain), a subset of animals were deeply anesthetized with ketamine/xylazine (85/10 mg/kg) and transcardially perfused with 0.1 M phosphate-buffered saline (PBS) followed by 4% paraformaldehyde. Spinal cords were collected and post-fixed overnight at 4 °C, cryoprotected in 30% sucrose/0.1 M PBS for 2 days, and sectioned in 50 μm slices using a cryostat. Spinal cord sections from L4-L5 were collected and every 6th section was developed with a different antibody stain or combination of two stains.
After two washes in PBS and blocking with 5% normal horse serum/0.3% Triton X-100, sections were incubated in the following primary antibodies: calcitonin gene-related peptide (CGRP) (rabbit, T-4032, Peninsula Labs, San Carlos, CA), glial fibrillary acidic protein monoclonal (GFAP) (mouse, Astro6 MA5-12023, ThermoFisher, Carlsbad, CA), Anti-CD11b/c (OX42) (rabbit, CBL1512-100UG, Millipore Sigma, St. Louis, MO), purinergic receptor 7 (P2X7R) (rabbit, #APR-008, Alomone Labs, Jerusalem, Israel), phosphorylated-p38 MAP kinase (pp38) (rabbit, #4511, Cell Signaling Technology, Danvers, MA), or interleukin-1beta (IL-1β) (rabbit, ab9787, Abcam, Cambridge, MA) and were incubated for 24 h at 4 °C on a slow rocker. All tissue was washed twice, re-blocked with serum, and incubated in donkey anti-mouse secondary antibody conjugated to Alexa488 (A21202, ThermoFisher) or donkey anti-rabbit conjugated to Alexa594 (A21207, ThermoFisher) for 2 h at room temperature, washed, and slide mounted with VECTASHIELD Antifade mounting medium with DAPI (H-1200, Vector Laboratories, Burlingame, CA). For all negative controls, the same procedure was performed but the primary antibody was omitted to check for non-specific staining.

Imaging and analysis

All images were captured with a Nikon Ni-E microscope and Hamamatsu camera. ImageJ software was used to assess integrated density as previously described [ 11 ]. Briefly, a blinded observer set thresholds on images with the default ImageJ algorithm and calculated the integrated density (area times mean gray value) in a specified region of interest. The gray value includes both the intensity and number of pixels above the set threshold in the region of interest, which controls for background staining variability. When counting stain-positive cells, a blinded observer used ImageJ to count cells from five randomly selected viewing fields within the region of interest, and a second blinded observer confirmed these counts.
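The integrated-density measure used above (area times mean gray value of the above-threshold pixels in a region of interest) can be reproduced with NumPy. This sketch illustrates the quantity; it is not the ImageJ code the study ran:

```python
import numpy as np

def integrated_density(roi, threshold):
    """Integrated density of a region of interest: area of above-threshold
    pixels times their mean gray value (equivalently, their intensity sum).
    Thresholding controls for background staining, as described in the text."""
    roi = np.asarray(roi, dtype=float)
    mask = roi > threshold
    if not mask.any():
        return 0.0
    area = mask.sum()             # number of above-threshold pixels
    mean_gray = roi[mask].mean()  # mean intensity of those pixels
    return float(area * mean_gray)
```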
At least 4–6 rats per group, per endpoint were used for immunohistochemistry (IHC) experiments, and 4–6 slices per rat (right and left dorsal horns) were quantified for each experiment. There were no differences between right and left dorsal horns, so data were collapsed into one column.

Statistical analysis

Data sets were analyzed with Prism (GraphPad Software, La Jolla, CA) and are expressed as mean ± standard error of the mean (SEM). Group sizes of 4–7 animals were chosen, based on similar experiments in our lab and others’, to allow appropriate statistical analysis. For all time course data, two-way repeated-measures analyses of variance (ANOVAs) followed by Newman-Keuls post-hoc testing were used to determine differences between drug groups over time. In the recovery analysis, the same analysis was used to compare each point to the baseline; the time point at which test scores were no longer significantly different from baseline was considered “recovery.” Areas under the curve (AUC) were calculated for each time course for each animal. Differences between group means were determined by one-way ANOVA with Newman-Keuls post-hoc test.

Results

Paw volumes for the CFA experiment, reflecting inflammation-induced edema, were determined by plethysmometer, and left hind (LH)/right hind (RH) values were calculated for vehicle, morphine, and ZH853 groups prior to CFA and after injection. There were no significant differences between groups (Fig. 1 ). Despite the consistency of injury, morphine caused increased sensitivity in all tests compared to vehicle treatment while ZH853 did not.

Fig. 1 Plethysmometer data. Left hind (LH) paw over right hind (RH) paw values for foot size.
Significant differences ( p < 0.01) were observed among the three time points, but not among the drugs or the interaction, indicating a consistent injury across groups.

CFA then drug

Mechanical allodynia and naltrexone unmasking

After treating CFA-induced mechanical allodynia with vehicle or equi-antinociceptive doses of morphine or ZH853, animals treated with morphine experienced greater allodynia than vehicle-treated animals at 25, 39, and 46 days after CFA injection (Fig. 2 a). ZH853-treated animals, however, recovered to baseline by day 18 and experienced less allodynia than vehicle from day 18 to 32 and less than morphine-treated animals from day 18 to 46 (interaction F 28, 266 = 4.681, p < 0.0001; time F 14, 266 = 35.29, p < 0.0001; drug F 2, 19 = 14.09; p = 0.0002). The AUC between day 11 and 60 shows that morphine-treated animals had significantly more allodynia than vehicle- and ZH853-treated groups, and that vehicle-treated animals had more overall pain than ZH853 animals ( F 2, 19 = 15.61, p < 0.0001). At day 53 and 60, all groups had returned to baseline allodynia, and a naltrexone injection was used to probe whether LS had developed. In both vehicle ( p < 0.0001) and morphine ( p < 0.0001) groups, significant allodynia developed after 30 min, but ZH853-treated animals showed no change in allodynia. Additionally, on the last day of drug dosing (D8, green box), ZH853 still produced some anti-allodynia, which was not the case for morphine, indicating that ZH853 produced less tolerance.

Fig. 2 CFA then drug. Drug dosing is indicated with green boxes (5 days of drug (D3–8), 2 days wash out (D9–10)) and CFA injection is indicated by the red dashed line. “Days” indicate days after CFA. a Mechanical allodynia was increased by morphine treatment and decreased by ZH853 treatment relative to vehicle treatment. b Thermal hyperalgesia was increased by morphine treatment.
ZH853 and morphine reversed hypersensitivity in both von Frey (green box in a ) and Hargreaves (green box in b ) tests 24 h after implantation of the minipumps, and both drugs were less effective by day 5 of infusion (day 8). c In the CatWalk test, all variables are a proportion of left hind paw (LH) values over right hind paw (RH). Drug treatment did not alter the functional impairments caused by CFA and all animals recovered from gait disturbances by the end of drug treatment. n indicated in bar graphs. Dashed lines indicate difference versus morphine while solid lines indicate difference versus vehicle. Two-way, repeated measures ANOVAs were used in all cases. Error bars indicate SEM. + = veh vs morphine, # = veh vs ZH853, * = morphine vs 853. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001

Thermal hyperalgesia

After the development of thermal hyperalgesia, drug dosing with either morphine or ZH853 reversed pain on the first day of drug dosing (D4, green box) (Fig. 2 b). This effect diminished by day 8, but to a lesser extent in the ZH853 group. Although no specific time point was significantly different between groups during recovery, there were significant differences in the time by drug interaction (interaction F 26, 247 = 2.223, p = 0.0009; time F 13, 247 = 38.99, p < 0.0001; drug F 2, 19 = 2.624, p = 0.0986). Additionally, the AUC indicates that morphine-treated animals experienced more sensitivity overall ( F 2, 19 = 4.763; p = 0.0211) (Fig. 2 b).

Functional impairment

In the analysis of seven CatWalk variables, changes in gait function were apparent following CFA injection but, because drug dosing occurred when most of these variables had nearly returned to baseline, there were no differences between groups due to drug treatment (swing time interaction F 16, 152 = 0.3986, p = 0.9813; time F 8, 152 = 19.04, p < 0.0001; drug F 2, 19 = 0.7010, p = 0.5085) (Fig. 2 c).
Additionally, LH/RH values for each variable were stable around 1 from day 18 onward and did not change following naltrexone injections (data not shown).

Prolonged pain

By comparing the means at each time point with the group baseline using a two-way ANOVA, we determined the average length of time in pain by drug treatment group from day 11 to the end of the study. Significance is listed in Table 1 . Morphine significantly prolonged allodynia and thermal hyperalgesia versus both vehicle and ZH853. ZH853, on the other hand, shortened allodynia and thermal hyperalgesia versus both morphine and vehicle. Drug administration did not affect CatWalk variables, likely because animals had nearly returned to baseline by the onset of drug treatment.

Table 1 Time to recovery analysis: CFA then drug

Drug then CFA

Mechanical allodynia and naltrexone unmasking

After chronic drug dosing, the subsequent CFA injection induced mechanical allodynia over the first 7 days. Animals treated with morphine experienced greater allodynia than vehicle-treated animals at 21, 28, and 35 days after CFA injection (Fig. 3 a). ZH853-treated animals, however, experienced less pain than the morphine group from day 14 to 35 (interaction F 26, 234 = 2.399, p = 0.0003; time F 13, 234 = 21.10, p < 0.0001; drug F 2, 18 = 18.87, p < 0.0001). The AUC between day 1 and 49 shows that morphine animals had significantly greater allodynia than vehicle and ZH853 groups, and that vehicle animals had similar pain to ZH853 animals ( F 2, 18 = 12.22, p = 0.0004). At day 49, all groups had returned to baseline allodynia, and a naltrexone injection (1 mg/kg) was used to probe whether latent sensitization had developed. In both vehicle- and morphine-treated animals, significant allodynia developed after 30 min, but ZH853-treated animals showed no significant change in allodynia after either 1 or 5 mg/kg of naltrexone.
After the 1 mg/kg injection of naltrexone, allodynia was slightly increased in morphine versus vehicle-treated animals, but at 5 mg/kg a floor effect made them equally sensitive (see graph for statistics).

Fig. 3 Drug then CFA. Drug dosing is indicated with green boxes (5 days of drug, 2 days wash out) and CFA injection is indicated by the red line. “Days” indicate days after CFA. a Mechanical allodynia and b thermal hyperalgesia were prolonged by prior exposure to morphine but not ZH853, despite the fact that ZH853 produced greater antinociception in the paw pressure test 24 h after implantation of the minipumps ( d ). In the CatWalk test ( c ), morphine-treated animals showed exaggerated functional impairment (guarding the left paw) within the first few days after CFA injection versus vehicle- and ZH853-treated animals. n = 7 for all groups. Error bars indicate SEM. + = veh vs morphine, # = veh vs ZH853, * = morphine vs 853. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001

Thermal hyperalgesia

After pre-treatment with drug, thermal hyperalgesia developed. Animals treated with morphine experienced greater hyperalgesia than vehicle-treated animals at 14, 21, and 49 days after CFA injection (Fig. 3 b). ZH853-treated animals, however, experienced less pain than the morphine group from day 7–21 and on day 35 (interaction F 22, 198 = 1.027, p = 0.4331; time F 11, 198 = 53.68, p < 0.0001; drug F 2, 18 = 10.75, p = 0.0008). The AUC between day 1 and 56 shows that morphine animals had significantly more pain than vehicle and ZH853 groups, and that vehicle animals had similar pain to ZH853 animals ( F 2, 18 = 10.42, p = 0.0010). Notably, morphine-treated animals were significantly more sensitive than vehicle and ZH853 groups when tested just prior to CFA injection, 2 days after cessation of drug dosing, which is not unusual given morphine’s proinflammatory and pain-sensitizing effects after chronic dosing (see figure for statistics).
Functional impairment

In the analysis of seven CatWalk variables, changes in gait were apparent following CFA injection, and made worse by pre-treatment with morphine for the first few days (Fig. 3 c). Morphine animals had greatly increased swing time, the amount of time that the paw is held in the air, for the first 3 days after CFA injection (interaction F 16, 144 = 2.889, p < 0.0004; time F 8, 144 = 40.6, p < 0.0001; drug F 2, 18 = 4.061; p = 0.0350). Print length was shortened in the first few days after injury, even more so in morphine-treated animals (interaction F 16, 144 = 2.426, p = 0.0029; time F 8, 144 = 33.75; p < 0.0001; drug F 2, 18 = 1.693, p = 0.2119). All animals had a decrease in swing speed after injury, the rate in meters/second that a paw is not in contact with the glass plate (interaction F 16, 144 = 1.590, p = 0.0782; time F 8, 144 = 81.14; p < 0.0001; drug F 2, 18 = 1.099, p = 0.3545). Stand time, or duration of ground contact, was also decreased by injury (interaction F 16, 144 = 1.253, p = 0.2354; time F 8, 144 = 46.16; p < 0.0001; drug F 2, 18 = 2.817, p = 0.0863). Duty cycle (interaction F 16, 144 = 1.601, p = 0.0753; time F 8, 144 = 64.26; p < 0.0001; drug F 2, 18 = 2.277, p = 0.1314), single stance (interaction F 16, 144 = 1.258, p = 0.2324; time F 8, 144 = 76.66; p < 0.0001; drug F 2, 18 = 1.383, p = 0.2762), and maximum contact area (interaction F 16, 152 = 0.6678, p = 0.8220; time F 8, 152 = 47.76; p < 0.0001; drug F 2, 19 = 0.2218, p = 0.8031) were also decreased within the first few days. LH/RH values for each variable were back to baseline from day 14 onward and did not change following naltrexone injections of either 1 mg/kg or 5 mg/kg (data not shown). Data are summarized in Table 2 .
Table 2 Drug then CFA CatWalk Statistics

Duration of allodynia and hyperalgesia

Means were compared between each time point and the group baseline using a two-way ANOVA to determine the average length of time in pain by drug treatment group from day 1 to the end of the study. Significance is listed in Table 3 . Morphine significantly prolonged allodynia versus both vehicle and ZH853. ZH853 shortened allodynia and thermal hyperalgesia versus both morphine and vehicle.

Table 3 Time to recovery analysis: drug then CFA

Paw incision then drug

Mechanical allodynia

After treating paw incision-induced mechanical allodynia with vehicle or equi-antinociceptive doses of morphine or ZH853, animals treated with morphine experienced greater allodynia overall than vehicle-treated animals (Fig. 4 a). ZH853-treated animals, however, experienced less pain than vehicle on days 14 and 28, and less pain than morphine-treated animals from day 14 to 28 (interaction F 20, 160 = 2.976, p < 0.0001; time F 10, 160 = 32.82, p < 0.0001; drug F 2, 16 = 12.64, p = 0.0005). The AUC between day 7 and 42 shows that morphine-treated animals had significantly more pain than vehicle- and ZH853-treated groups, and that vehicle-treated animals had more overall pain than ZH853-treated animals ( F 2, 16 = 9.293, p = 0.0021). At day 42, all groups had returned to baseline allodynia, and a naltrexone injection was used to probe whether latent sensitization had developed. In both vehicle- ( p < 0.001) and morphine- ( p < 0.001) treated animals, significant allodynia developed after 30 min, but ZH853-treated animals showed no change in allodynia. Additionally, on the last day of drug dosing (D3, green box), ZH853 still produced some anti-allodynia versus vehicle, which was not the case for morphine.

Fig. 4 Paw incision (PI) then drug. Immediately after closing the incision, a bolus injection of drug was given and pumps were connected for 3 days.
a Mechanical allodynia was greatly reduced by both morphine and ZH853 on the first day of drug, but by days 2 and 3 only ZH853 gave analgesia versus vehicle treatment. b ZH853 prevented thermal hyperalgesia for longer than morphine, but ultimately animals became tolerant to both drugs by D3. However, ZH853-treated animals recovered more quickly than vehicle or morphine groups. Upon treatment with naltrexone, relapse to thermal hyperalgesia was apparent in vehicle- and morphine-treated animals but not ZH853-treated animals. c CatWalk data were highly variable and there were no differences between drug treatment groups. n indicated in bar graphs. Dashed lines indicate difference versus morphine while solid lines indicate difference versus vehicle. Error bars indicate SEM. + = veh vs morphine, # = veh vs ZH853, * = morphine vs 853. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001

Thermal hyperalgesia

Morphine and ZH853 were antinociceptive in the first hour of drug dosing (Fig. 4 b: H1, green box), but morphine antinociception was greatly reduced at 3 and 5 h. By D1, morphine-treated animals had pain similar to vehicle-treated animals, but ZH853 was still providing some pain relief. By D3, all groups were similarly hyperalgesic. On days 21–28, morphine-treated animals were in more pain than ZH853-treated animals, and on day 28, vehicle-treated animals were also in more pain than ZH853-treated animals (interaction F 26, 208 = 14.56, p < 0.0001; time F 13, 208 = 47.27, p < 0.0001; drug F 2, 16 = 60.74, p < 0.0001). After all groups had recovered at D42, naltrexone was administered and thermal hyperalgesia developed in vehicle- and morphine-treated animals but not ZH853-treated animals ( p = 0.0001). Additionally, the AUC indicates differences in sensitivity over the recovery time course ( F 2, 16 = 5.970, p = 0.0116), with morphine-treated animals experiencing the most sensitivity followed by vehicle- then ZH853-treated animals.
Functional impairment

In the analysis of seven CatWalk variables, changes in gait function were apparent following paw incision, but there were no differences between drug treatment groups (swing time, interaction F 10, 75 = 0.9907, p = 0.4591; time F 5, 75 = 4.082, p = 0.0025; drug F 2, 15 = 2.062, p = 0.1617) (Fig. 4 c). It should be noted that the CatWalk only captures subtle changes in gait following paw incision surgery, unlike the dramatic changes seen with CFA. The small dynamic range in these gait changes may make it more difficult to see differences between drug treatment groups.

Prolonged allodynia and hyperalgesia

By comparing the means at each time point with the group baseline using a two-way ANOVA, we determined the average length of time in pain by drug treatment group from day 1 to the end of the study. Significance at each time point is listed in Table 4 . Morphine significantly prolonged thermal hyperalgesia versus both vehicle and ZH853. ZH853, on the other hand, shortened allodynia and thermal hyperalgesia versus both morphine and vehicle. Drug administration did not affect CatWalk variables, likely because they had nearly returned to baseline by the onset of drug treatment.

Table 4 Time to recovery analysis: paw incision (PI) then drug

Drug then paw incision

Mechanical allodynia

After exposure to tolerance-inducing doses of morphine or ZH853, animals treated with morphine experienced greater allodynia than vehicle-treated animals at 14 and 21 days after paw incision (Fig. 5 a). ZH853-treated animals, however, showed improved pain sensitivity versus vehicle- and morphine-treated animals as early as D7, and experienced less pain than morphine-treated animals from D7 to 21 (interaction F 18, 144 = 1.957, p = 0.0158; time F 9, 144 = 57.06, p < 0.0001; drug F 2, 16 = 2.540, p = 0.1102). The AUC between D1 and 42 shows that morphine animals had significantly greater allodynia than ZH853-treated animals ( F 2, 16 = 6.509, p = 0.0085).
At day 28, all groups had returned to baseline, and a naltrexone injection was used to probe whether latent sensitization had developed. Unexpectedly, no allodynia was observed in any group at this time point.

Fig. 5 Drug then paw incision (PI). “Days” indicate days after paw incision. Drug was given for 5 days and stopped for 2 days prior to incision. a Mechanical allodynia developed in all groups following paw incision, but ZH853-treated animals recovered faster than vehicle- or morphine-treated animals. b Thermal hyperalgesia was severe and long-lasting in all drug treatment groups, but ZH853 prevented LS as indicated by lack of unmasking by naltrexone. c CatWalk variables changed in response to paw incision but not by drug group. n indicated in bar graphs. Dashed lines indicate difference versus morphine. Error bars indicate SEM. + = veh vs morphine, # = veh vs ZH853, * = morphine vs 853. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001

Thermal hyperalgesia

After drug treatment, paw incision induced a strong thermal hypersensitivity in all animals. There were significant differences by the interaction of time and drug group (interaction F 28, 224 = 2.697, p < 0.0001; time F 14, 224 = 79.95, p < 0.0001; drug F 2, 16 = 1.433, p = 0.2676), and a naltrexone challenge caused thermal hyperalgesia in vehicle- and morphine-treated groups but not ZH853-treated groups ( p < 0.0001). The AUC indicates a lack of overall differences in sensitivity ( F 2, 16 = 1.324, p = 0.2937) (Fig. 5 b).

Functional impairment

In the analysis of seven CatWalk variables, changes in gait function were apparent following paw incision, but there were no differences between groups due to drug treatment (swing time: interaction F 14, 112 = 0.7318, p = 0.7385; time F 7, 112 = 19.82, p < 0.0001; drug F 2, 16 = 0.5549, p = 0.5848) (Fig. 5 c).
Prolonged allodynia and hyperalgesia

By comparing the means at each time point with the group baseline using a two-way ANOVA, we determined the average length of time in pain by drug treatment group from D1 to the end of the study. Significance at each time point is listed in Table 5 . Morphine significantly prolonged allodynia versus both vehicle and ZH853. ZH853, on the other hand, shortened allodynia versus morphine and thermal hyperalgesia versus both morphine and vehicle.

Table 5 Time to recovery analysis: drug then paw incision (PI)

Inflammation

In order to determine whether morphine-exacerbated CFA pain is associated with potentiated pro-inflammatory signaling in the spinal dorsal horn, spinal cords were collected from animals perfused at 25 days (CFA then drug) and 21 days (drug then CFA) for IHC. These time points were chosen because of the profound differences in allodynia between drug treatment groups. Several measures of inflammation were assessed at the level of L4–L6 of the spinal cord. Overall, inflammatory markers were upregulated consistently among morphine-treated animals. In almost every instance, ZH853-treated animals had reduced inflammation when compared with morphine-treated animals, and vehicle-treated animals had variable inflammation in relation to morphine or ZH853. In the CFA-then-drug paradigm (Fig. 6 a), drug treatment impacted pp38 ( F 2,25 = 8.013; p = 0.0020), IL-1β ( F 2,24 = 6.358; p = 0.0061), CGRP ( F 2,25 = 11.12; p = 0.0004), astrocyte activation (GFAP) ( F 2,23 = 4.285; p = 0.0262), and P2X7Rs ( F 2,25 = 3.713; p = 0.0401). Microglial (OX42) differences were not apparent at this time point ( F 2,25 = 1.029; p = 0.3719).

Fig. 6 Immunohistochemistry following CFA and drug. Tissue was taken 25 days after CFA in the CFA-then-drug paradigm and 21 days after CFA in the drug-then-CFA paradigm. a CFA then drug.
Staining at this time point did not show higher microglial activation in any group, but pp38 was decreased by ZH853, and IL-1β and CGRP were increased by morphine relative to vehicle and ZH853. Astrocyte activation and P2X7Rs were increased in morphine-treated animals compared to ZH853-treated animals. b Drug then CFA. Microglial activation was greater in animals pretreated with morphine- versus vehicle- and ZH853-treated animals. pp38 was increased by morphine and decreased by ZH853, producing a significant difference between drugs. IL-1β was significantly decreased by ZH853 relative to morphine and vehicle. CGRPR staining was increased by morphine relative to vehicle- or ZH853-treated animals. Astrocyte activation was decreased by ZH853 relative to vehicle and morphine, and P2X7R was increased by morphine relative to vehicle and ZH853. n indicated in bar graphs. Error bars indicate SEM. One-way ANOVAs were performed for each marker. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001 Full size image In the drug-then-CFA paradigm (Fig. 6 b), drug greatly impacted microglial activation (OX42) ( F 2,19 = 6.891; p < 0.0001), pp38 ( F 2,21 = 9.140; p = 0.0014), IL-1β ( F 2,21 = 10.86; p = 0.0006), CGRP ( F 2,21 = 10.35; p = 0.0007), astrocyte activation (GFAP) ( F 2,21 = 12.57; p = 0.0003), and P2X7Rs ( F 2,20 = 9.328; p = 0.0014). Representative images for this paradigm for all markers are included in Fig. 7 . Fig. 7 Representative images of IHC from the Drug-then-CFA paradigm. Each analysis was done on images acquired with the same exposure, gain, and aperture across treatment groups, but exposure and gain were adjusted for each antibody. All images were taken on a × 10 objective and microglia were imaged at × 20 for the inset of OX42. 
OX42 labels microglia, pp38 labels phosphorylated p38 Map Kinase, IL-1β labels interleukin-1beta, CGRP labels α-calcitonin gene-related peptide, Astro6 labels glial fibrillary acidic protein (GFAP), and P2X7R labels the P2X purinoceptor 7 Full size image The effect of CFA alone is represented by the vehicle-treated group. Except for OX42 in the CFA-then-drug model (Fig. 6 a), morphine increased all inflammatory markers relative to ZH853. pp38 was reduced by ZH853 relative to vehicle; otherwise, ZH853 and vehicle groups were not significantly different. Morphine increased IL-1β and CGRP relative to vehicle. In the drug-then-CFA model, morphine increased all inflammatory markers relative to ZH853. ZH853 reduced IL-1β and GFAP versus vehicle, and morphine increased OX42, CGRP, and P2X7Rs versus vehicle. A simple linear regression was calculated to determine whether a correlation existed between inflammatory markers and von Frey scores regardless of drug administered or paradigm (Fig. 8 ). A significant positive correlation with mechanical allodynia was found for pp38 ( F 1,24 = 12.87, p = 0.0015), with an R 2 of 0.3490, and for P2X7R ( F 1,21 = 4.930, p = 0.0375), with an R 2 of 0.1901. There were no significant correlations between allodynia and OX42, IL-1β, CGRP, or GFAP at these time points. Fig. 8 Regression analyses of IHC markers versus von Frey scores. Individual scores for both von Frey at the time point just prior to perfusion and integrated density (or cell count for pp38) from both paradigms of CFA and drug treatment were plotted for all drug groups. pp38 and P2X7R are consistently correlated with pain. P values are listed on each graph Full size image Latent sensitization While we expected that morphine-treated animals would experience greater LS compared to vehicle-treated animals, there appears to have been a “bottoming out” effect after naltrexone administration, causing both groups to appear equally affected.
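The simple linear regressions above (Fig. 8) can be reproduced with scipy's linregress, which reports the slope, the correlation coefficient (squared to give R2), and the regression p-value. Toy, exactly linear data is used here so the fit is easy to check; these are not the study's values:

```python
from scipy.stats import linregress

# Hypothetical pp38 cell counts and paired von Frey scores (toy, exactly linear)
pp38 = [10, 12, 16, 20, 24, 28]
von_frey = [15, 14, 12, 10, 8, 6]  # von_frey = 20 - 0.5 * pp38

fit = linregress(pp38, von_frey)
r_squared = fit.rvalue ** 2       # coefficient of determination, R^2
print(round(fit.slope, 3), round(r_squared, 3))  # prints -0.5 1.0
```

With real data, fit.pvalue gives the significance of the fit, corresponding to the F-test p-values quoted for pp38 and P2X7R above.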
However, ZH853-treated animals consistently did not experience LS unmasking regardless of type of pain or order of drug treatment. Corder et al. [ 4 ] described the unmasking of LS following recovery from CFA in mice and showed that both mechanical allodynia (von Frey) and thermal hyperalgesia (Hargreaves) scores indicated a relapse to pain. We were surprised to find that in both paradigms of the CFA model, LS unmasking was apparent in von Frey testing (Figs. 2 a and 3 a) but not in the Hargreaves test, even with a 5 mg/kg naltrexone dose (Figs. 2 b and 3 b). After paw incision then drug treatment, allodynia was apparent in vehicle- and morphine-treated groups, but there was no relapse to allodynia in the reverse paradigm (Fig. 4 b). However, relapse to thermal hyperalgesia was clearly present in both paradigms of paw incision (Fig. 5 b). To see if there was any connection between the lengths of time in pain and whether or not LS developed, we plotted a summary of the total number of days in pain, charted according to pain type and sequence of drug and pain timing (Fig. 9 ). The total number of days in pain were calculated by two-way ANOVA as described in Tables 1 , 2 , 3 , and 4 . In addition, asterisks mark the instances where vehicle- or morphine-treated animals showed LS unmasking after naltrexone was administered. It is visually clear that LS occurred in instances where there was prolonged pain, but not if pain was under 21 days for allodynia (Fig. 9 a) or under 11 days of thermal hyperalgesia (Fig. 9 b). ZH853-treated animals consistently experienced less time in a hypersensitive state than other groups, which could be a major contributing factor to ZH853’s protective effect against LS. Fig. 9 Total number of days in pain. a The number of days that rats experienced allodynia by pain type and drug/pain sequence. b The number of days that rats experienced thermal hyperalgesia by pain type and drug/pain sequence. 
Asterisks indicate which scenarios produced LS in the morphine and vehicle groups Full size image Discussion The results of this study show that chronic use of ZH853 does not exacerbate the hyperalgesia and allodynia induced by inflammatory or post-operative pain while morphine does. Consistent with prior literature [ 1 , 2 , 16 , 17 ], morphine exposure at moderately antinociceptive doses caused increased pain sensitivity. ZH853, which produced antinociception with less tolerance than morphine, caused no such increase in pain sensitivity across all pain tests. In fact, ZH853 diminished the amount of time in pain versus morphine in all tests and even versus vehicle treatment in several instances. This was an unexpected and unprecedented finding considering that opioids are known to increase and prolong many types of pain [ 1 , 2 , 16 , 17 ]. Finally, ZH853 protected against the development or susceptibility to LS while morphine did not. In addition to classical pain tests, we assessed operant behavior using the CatWalk and found that injury-induced deficits were transient and detectable at earlier time points than in the classical von Frey and Hargreaves tests. Significant impairment of gait by morphine, but not ZH853, was observed in several parameters during the drug-then-CFA paradigm. Gait analysis for pain monitoring is becoming increasingly developed for use in humans [ 34 , 35 ] and, similarly to the animals in this study, pain-inducing inflammatory conditions of the human foot result in a loss of heel strike [ 35 ]. Antalgic (pain-avoidance) gait is defined by a reduced stance phase relative to swing phase, which is observed in humans with pain as well as animals [ 28 , 35 , 36 ] and observed in both paradigms of this study following CFA (Figs. 2 c and 3 c). In the pain-then-drug paradigm (Fig. 2 c), drug dosing began as functional impairments were nearly back to baseline, which was likely the reason that no differences were apparent between drug groups. 
In the drug-then-pain paradigm (Fig. 3 c), animals exposed to morphine showed greater gait impairments across several parameters up to the 7th day after CFA injection, indicating a decreased use of the left hind foot (guarding) versus vehicle- and ZH853-treated animals. The drug-then-CFA paradigm was the only test that captured a drug-induced difference in sensitivity within the first few days. For the drug-then-paw incision test, a less robust hypersensitivity, and therefore smaller dynamic range of responses, may have masked differences. For both the CFA and paw incision then drug tests, recovery had occurred by the end of dosing. The fact that functional impairments were observed from day 1 after CFA until day 3 or day 7 but not after that suggests that this test is not measuring mechanical allodynia as some groups have proposed [ 31 ]. The recovery timelines observed in this study mimicked those previously described in mice [ 18 ] and rats [ 37 ], showing short-term functional impairments (guarding, spontaneous pain) and long-term increases in allodynia and thermal hyperalgesia. In a comprehensive study by Djouhri et al. [ 38 ], spontaneous foot lifting (SFL) was used to measure spontaneous pain in comparison to von Frey and Hargreaves testing after CFA injections. In the first 2 days after injury, rats exhibited significant SFL which did not correlate with von Frey scores. However, they found that spontaneous C-fiber activity had greatly increased in these early time points. Although not directly compared in this study, it is not unreasonable to suggest that gait impairments are indicative of spontaneous pain, but many investigators have commented on the inability of painkillers to reverse these gait impairments acutely. Interestingly, some aspect of prior morphine, but not ZH853, managed to make these gait impairments worse, possibly through an increase in spontaneous activity at Aδ and/or C-fibers [ 39 ], which was not directly tested here. 
Finally, naltrexone was unable to provoke antalgic gait changes, which may differentiate this phase of pain from allodynia or thermal hyperalgesia that are susceptible to relapse (LS). Rats treated with morphine generally showed signs of increased spinal inflammatory markers in immunohistochemical tests when compared to vehicle- and ZH853-treated animals. This supports findings from several other studies [ 1 , 2 , 11 , 16 , 17 , 40 ]. While vehicle-treated animals usually had inflammatory expression somewhere between ZH853- and morphine-treated animals, ZH853-treated animals consistently showed the least amount of inflammation. Previous reports from our lab and others have shown pro-inflammatory effects of morphine [ 11 ], and additive inflammatory effects of morphine in an animal in pain [ 1 , 2 , 16 , 17 ]. Anti-inflammatory drugs like minocycline can block the development of tolerance, addiction, and dependence [ 41 , 42 , 43 , 44 ], but in humans it is not always possible or beneficial to treat with these drugs long-term. Several mechanisms by which morphine activates glia or enhances trauma-induced activation have been proposed (Fig. 10 ), including (1) induction of neuronal CGRP that activates microglial CGRP receptors [ 45 ], (2) binding at toll-like receptors (TLRs) alone or in concert with damage-associated molecular patterns (DAMPs) [ 46 ], and (3) activation of purinergic P2X receptors including by upregulation through MOR [ 47 ] (but see [ 48 ]). Each of these mechanisms causes phosphorylation of the mitogen-activated protein kinase (MAPK) p38, which activates nuclear factor kappa-light-chain-enhancer of activated B cells (NFκB) to increase transcription and translation of cytokines, including interleukin-1β (IL-1β [ 49 ], example shown), as well as IL-6, IL-18, and tumor necrosis factor-α (TNF-α). 
P2X7R activation also induces NOD-like receptor family, pyrin domain containing 3 (NLRP3) to become a complex (inflammasome) [ 50 ] that contains and releases active caspase-1 which cleaves pro-IL-1β into the active cytokine IL-1β [ 51 ]. IL-1β activates IL-1 receptors on astrocytes and neurons, further increasing inflammation. Ultimately, this proinflammatory pathway causes dorsal horn plasticity that leads to central sensitization and increased pain [ 52 ]. Pre- and/or postoperative inhibition of TLR4, phosphorylated p38 (pp38), NFκB, and IL-1β has been shown to reduce postoperative mechanical hyperalgesia [ 53 , 54 , 55 , 56 ]. Fig. 10 Schematic of postulated mechanisms of microglial activation by morphine but not by ZH853. Proposed mechanisms by which morphine activates glia or enhances trauma-induced activation include (1) induction of neuronal CGRP that activates microglial CGRP receptors [ 45 ], (2) binding at toll-like receptors (TLRs) alone or in concert with damage- or pathogen-associated molecular patterns (DAMPs, PAMPs) [ 46 ], and (3) activation of purinergic P2X receptors including by upregulation through MOR [ 47 ] (but see [ 48 ]). Each of these mechanisms causes phosphorylation of the mitogen-activated protein kinase (MAPK) p38 to increase transcription and translation of cytokines, including interleukin-1β (IL-1β [ 49 ], example shown). P2X7R activation also induces NOD-like receptor family, pyrin domain containing 3 (NLRP3) to become a complex (inflammasome) [ 50 ] that contains and releases active caspase-1 which cleaves pro-IL-1β into the active cytokine IL-1β [ 51 ]. IL-1β activates IL-1 receptors on astrocytes and neurons, further increasing inflammation.
Ultimately, this proinflammatory pathway causes dorsal horn plasticity that leads to central sensitization and increased pain [ 52 ] Full size image While chronic morphine also activates all of these pathways in naïve rats, we recently showed that ZH853 activates none of them in a pain-naïve state [ 11 ]. In pain states, preemptive use of EM-1 and EM-2 reduces inflammatory (CFA) pain for days, even though the analgesic actions of the EMs only last ~30 min [ 57 ]. Through a number of pharmacological manipulations, Zhang et al. determined that p38 activation was significantly increased by pain but dramatically reduced by the EMs through a MOR-dependent mechanism [ 57 ]. Additionally, they found EM-induced reductions in IL-1β, TNF-α, and C-C motif chemokine ligands 2 and 3 (CCL2/3), with an increase in IL-10 mRNA. Others have shown that chronic morphine upregulates CGRP [ 45 , 58 ], but this is the first time that long-term upregulation of CGRP following morphine and pain has been observed. In both CFA paradigms, vehicle- and ZH853-treated animals had similar CGRP expression, but weeks after the cessation of drug administration and CFA, morphine-treated animals had significantly higher levels of CGRP. Although not a direct component of the immune system, CGRP has been identified as a key player in the neuro-immune axis [ 59 ] with a range of functions at T cells, dendritic cells, mast cells, and macrophages that modulate pro- or anti-inflammatory factors. In particular, CGRPRs on microglia are likely a site of immune cell priming by pain in the spinal cord [ 45 ], which is then exacerbated by morphine or vice versa. Astrocytes have largely been neglected in the long-term pain and opioid studies mentioned here, but one study found that astrocyte activation, not microglia, is best correlated with allodynia over time after induction of neuropathic pain [ 60 ].
After postoperative CFA plus morphine, neither astrocytes nor microglia appeared to be activated in staining, but other markers (pp38) indicated that they were activated [ 16 ]. In the current study, reduced immunofluorescence of GFAP and other markers indicates an anti-inflammatory effect of ZH853 versus the morphine-treated group (Fig. 6 a). In the drug-then-CFA paradigm, ZH853 is anti-inflammatory versus the vehicle-treated group (GFAP, IL-1β) and, to a greater degree, versus the morphine-treated group (all markers) (Fig. 6 b). However, astrocyte activation does not correlate with CFA-induced pain scores (Fig. 8 ). Despite the seemingly clear relationship between increased inflammation and prolonged pain, pp38 and P2X7R are the only markers whose upregulation actually correlates directly with pain scores (Fig. 8 ). These findings underscore the utility of the pp38 marker in assessing pain pathways after complex manipulations; it is a unifying factor regardless of whether microglia are activated by CGRP, TLR4, or P2XRs. In addition, the results indicate that P2X7R may be of particular importance in CFA-induced pain. Given that ZH853 reduced all inflammatory factors versus morphine, it is reasonable to use the novel peptide in place of morphine to avoid production of additional pro-inflammatory, pronociceptive agitation in the CNS. LS has been demonstrated in a number of pain types and animals [ 6 , 7 , 32 , 61 ]. LS develops as a homeostatic mechanism to combat ongoing pain, allowing an organism to function effectively in the face of pain that has outlasted utility. Chen et al. discovered that top-down initiation of MOR CA is blocked by spinal lidocaine administration [ 3 ], indicating that initiation comes from the brain in response to long-lasting pain. It appears that ZH853, either prior to or after pain induction, acts to quiet pain signaling in a manner distinct from morphine, alleviating the need for development of a compensatory mechanism (MOR CA ). 
When the pain insult finally resolves, there is no underlying pathology in place that is susceptible to unmasking by naltrexone. Morphine, on the other hand, continues to exacerbate pro-inflammatory, pro-nociceptive signaling that ramps up the pain signal. This in turn ramps up the compensatory MOR CA , allowing LS to manifest. Additionally, opioids like morphine induce LS in the absence of pain in an N -methyl- d -aspartate receptor (NMDA)-dependent manner [ 5 ], and one group has found that N 2 O can prevent the development of opioid-induced LS by acting as an NMDA antagonist [ 62 ]. While some unpublished work in the lab has indicated that ZH853 does not bind significantly to NMDARs, no studies have been done to determine whether ZH853 acts as an NMDA allosteric modulator, but this is a possible explanation for the results described in this section. As shown in Fig. 9 , LS was produced in five of the eight paradigms tested. Those not producing LS were conditions in which the days in pain were generally shorter. This suggests that perhaps the severity and length of pain signaling has an influence on the development of LS and ZH853 might be protective against LS simply because ZH853-treated animals spent less time in pain overall. However, postoperative thermal hyperalgesia (Fig. 9 b 3rd column) was the exception to this with ZH853-treated animals experiencing 21 days in pain with no sign of LS, indicating that factors other than time in pain contribute to both LS and its blockade by ZH853. A drug that prevents the transition from acute to chronic relapsing pain would represent a true breakthrough in drug development for pain management. Not only have the mechanisms behind the shift from acute to chronic pain been elusive, but efforts to thwart this transition have been unsuccessful thus far. 
Two recognized markers of this transition are sustained glial activation [ 1 , 2 , 16 , 43 , 46 , 63 , 64 , 65 , 66 , 67 , 68 ] and development of LS [ 4 , 6 , 8 , 10 , 62 ], both of which were reduced by ZH853. Conclusion Chronic pain patients and their doctors often face the dilemma of whether to risk serious side effects to achieve needed pain relief with opioids. Here, we show that the novel opioid ZH853 shortens recovery time from inflammatory and postoperative pain while morphine prolongs it. Further, morphine fails to prevent latent sensitization, while ZH853 blocks it. These results indicate two mechanisms by which ZH853 reduces the transition from acute to chronic pain. In addition, previous studies showed significant reductions, relative to morphine, in respiratory depression, abuse liability, tolerance, inflammation, and impairment of motor coordination [ 11 ] and equal or greater relief of pain in multiple chronic pain models [ 12 ]. Taken together, these results indicate that ZH853 could transform pain management.
Abbreviations
ANOVA: Analysis of variance
Astro6: GFAP antibody
AUC: Area under the curve
CCI: Chronic constriction injury
CFA: Complete Freund's adjuvant
CGRP: Calcitonin gene-related peptide
CNS: Central nervous system
DAMPs: Damage-associated molecular patterns
EM: Endomorphin
GFAP: Glial fibrillary acidic protein
i.t.: Intrathecal
IHC: Immunohistochemistry
IL: Interleukin
LH: Left hind
LS: Latent sensitization
MOR: Mu-opioid receptor
MOR CA: Mu-opioid receptor constitutive activity
NFκB: Nuclear factor kappa-light-chain-enhancer of activated B cells
NLRP3: Nucleotide-binding oligomerization domain, leucine rich repeat and pyrin domain containing 3 (inflammasome complex)
NMDA(R): N-methyl-D-aspartate (receptor)
NTX: Naltrexone
OX42: Anti-CD11b/c antibody (microglial marker)
P2X: Purinergic receptors, e.g., P2X4, P2X7
PBS: Phosphate-buffered saline
PEG: Polyethylene glycol
pp38: Phosphorylated-p38 MAP Kinase
RH: Right hind
SEM: Standard error of the mean
SFL: Spontaneous foot lifting
TLR: Toll-like receptor
Morphine and other opioid-based painkillers are very effective at treating pain initially, but studies have shown that the drugs can make patients more pain-sensitive, prolonging their discomfort and increasing their risks of developing chronic pain. A new type of opioid developed by researchers at Tulane University and the Southeast Louisiana Veterans Health Care System doesn't have this side effect and accelerates recovery time from pain compared to morphine, according to a new study published in the Journal of Neuroinflammation. Previous pre-clinical studies at Tulane have shown that the drug is as strong as morphine but isn't addictive and causes fewer side effects. "A drug that prevents the transition from acute to chronic relapsing pain would represent a true breakthrough in drug development for pain management," said senior study author James Zadina, professor of medicine, pharmacology and neuroscience at Tulane University School of Medicine and director of the neuroscience laboratory at the VA. "Not only have the mechanisms behind the shift from acute to chronic pain been elusive, but efforts to thwart this transition have had little success." Scientists tested a novel opioid called ZH853 using rat models of inflammatory pain and pain after surgery. The drug is an engineered variant of the neurochemical endomorphin, which is found naturally in the body. Researchers treated rats with ZH853, morphine or a placebo. Rats treated with morphine for a few days recovered more slowly than those given a placebo. This was true whether the morphine was given before or after the injury, indicating that prior use, or abuse, of opioids could aggravate subsequent recovery from injury. "Morphine provoked central nervous system glia to produce pro-inflammatory compounds that increased pain," Zadina said. "ZH853 did not have this effect."
When tested in the same inflammatory and postoperative pain conditions as morphine, the new drug unexpectedly accelerated recovery from the pain—in some cases slashing recovery time in half compared to both morphine and a placebo. In one group, pain lasted 32 days with no treatment, 46 days after morphine and only 11 days after ZH853. "ZH853 diminished the amount of time in pain versus morphine in all tests," said study first author Amy Feehan, Ph.D., a Tulane neuroscience graduate student. "This was an unexpected and unprecedented finding considering that opioids are known to increase and prolong many types of pain." Researchers also ran tests for a form of pain sensitivity that can be masked by changes in the body's endorphin system after an injury. When an injury causes pain, the body's endogenous opioid system engages to counteract it. If the opioid system is blocked—either by stress or an antagonist—the underlying pain can return even after the injury has healed and contribute to chronic pain. Unlike morphine, the new drug prevented this. "With ZH853, the underlying pain was eliminated rather than simply masked," Zadina said. "ZH853 attenuated or blocked two separate processes that contribute to the transition from acute to chronic pain, neuroinflammation and latent sensitization." Researchers hope to begin human clinical trials of the new drug within the next two years. "I believe it's vitally important to treat chronic pain as a disease of the nervous system and treat the underlying pathology of chronic pain rather than just treating the symptoms as they arise," Feehan said. "Current opioid treatments are effective in the short term for pain symptoms, but the downside is that pain ultimately can become worse because chronic opioid use can aggravate the immune system. ZH853 quiets the pain symptoms as well as morphine does, but it also diminishes inflammation, reducing recovery time and preventing relapse to pain later."
10.1186/s12974-019-1480-x
Medicine
An app twice a day keeps the dentist away
British Dental Journal , DOI: 10.1038/sj.bdj.2015.660
http://dx.doi.org/10.1038/sj.bdj.2015.660
https://medicalxpress.com/news/2015-09-app-day-dentist.html
Abstract Introduction Mobile apps are software programmes that run on smartphones and other mobile devices. Mobile health apps can help people manage their own health and wellness, promote healthy living and gain access to useful information when and where they need it. The Brush DJ oral health app was developed to use the opportunity mobile apps offer to motivate an evidence-based oral hygiene routine. A literature review has found no research investigating the use of a mobile app to motivate evidence-based oral hygiene behaviour. Objective The objective of this preliminary investigation was to assess user perception of an oral health app to give a basis for future research and development of app technology in relation to oral health. Method A cross-sectional qualitative user perception questionnaire. Results One hundred and eighty-nine people responded to the questionnaire. Seventy percent (n = 113) of respondents reported that their teeth felt cleaner since using the app. Eighty-eight percent (n = 133) reported the app motivated them to brush their teeth for longer and 92.3% (n = 144) would recommend the app to their friends and family. Four broad themes relating to how the app helped toothbrushing were reported. These themes were motivation, education, compliance and perceived benefits. Conclusion A mobile app is a promising tool to motivate an evidence-based oral hygiene routine. Introduction Mobile applications (apps) Mobile apps are software programmes that run on smartphones and other mobile devices. 1 Over 75 billion apps have been downloaded from the Apple App Store 2 since its launch in 2008, 3 with over 50 billion apps downloaded from Google Play. 4 These apps have been downloaded onto just under 1 billion Apple iOS devices 5 and over 1 billion Android devices 6 around the world. 
As the number of these devices has increased, the price has reduced making them an affordable alternative to traditional mobile phones, with a £26 smartphone being available to UK consumers. 7 Global smartphone subscriptions have been predicted to grow to 5.6 billion by 2019. 8 It is not just adults who own a device capable of running mobile apps, with a reported 81% of 13–18-year-old phone owners in the UK owning a smartphone 9 and 88% of 16–24-year-olds. 10 The age of those able to use these devices is decreasing with OfCom reporting that six-year-olds understand digital technology better than adults. 11 As well as being used on smartphones, mobile apps can be used on tablet computers. In 2014 62% of children in the UK used a tablet computer at home, compared to 42% in 2013. 12 Health apps The US Food and Drug Administration states: 'The widespread adoption and use of mobile technologies is opening new and innovative ways to improve health and healthcare delivery. Apps can help people manage their own health and wellness, promote healthy living and gain access to useful information when and where they need it'. 1 It is estimated that in 2015 half-a-billion people will be using healthcare mobile apps, with this figure increasing to 50% of the estimated 3.4 billion smartphone and tablet users by 2018. 13 A survey carried out among patients in 2012 found 59% of respondents indicated that mobile health apps would change the way health information is sought and 50% felt that these apps will radically change the way they manage their chronic disease. 14 Mobile devices are a useful means to deliver health interventions because of their widespread adoption, powerful technical capabilities, portability – people tend to have their mobile phones on them at most times and form strong emotional attachments to them. 15 'Sick or well, we have come to love our mobile devices. 
They are a source of immediate gratification: a powerful link to those we love, access to pictures, sports scores, movies and gossip about friends. That little device is so positive, so beloved. It connects us to the world.' 16 This positive emotional attachment may benefit health promotion via a mobile device being accepted more readily than traditional means, especially among those who have grown up with the technology. People spend more time with their mobile phones than with their partners or at work, meaning health intervention can be delivered anytime and anywhere. Health apps have been developed to manage various common medical conditions such as diabetes, 17 asthma, 18 pain 19 and dermatological conditions. 20 This latter example highlights the need for careful selection and regulation of apps, as an analysis of apps claiming to be able to assess melanoma risk found three out of four of the apps incorrectly classified 30% or more of melanomas as 'unconcerning'. Oral health Poor oral health can affect someone's ability to eat, speak, smile and socialise normally, due to pain or social embarrassment. 21 Caries has been reported as the most common reason for children aged between five and nine being admitted to a hospital in England. The same report found that 70,000 children from birth to 16 years of age were admitted to hospital in England as a result of dental decay in 2012/13. 22 Hospital treatment has a significant financial cost to the NHS and therefore taxpayer. It is also a traumatic experience for the child, parent/carer and all those healthcare professionals involved. Surveys have reported periodontitis affects almost half of all adults in the UK. 23 Evidence of the daily oral hygiene tasks adults and children need to carry out to prevent caries and periodontal disease is known. 
24 Evidence is also available that a significant percentage of the population do not accomplish these daily tasks, with 33% of men brushing less than twice a day 25 and 59% of women regularly skipping brushing at bedtime. 26 Oral health app The Brush DJ app was developed to use the opportunity mobile apps offer to motivate an evidence-based oral hygiene routine. The app aims to motivate users to brush for two minutes by playing music, taken either from a playlist, or randomly, from the music stored on the user's device and cloud. The idea of using music to motivate brushing for longer is not new, with Clemens and Taylor reporting in 1980 the development of an audiotape combining music and instruction. 27 Listening to music in the bathroom has become more practical with the development of mobile devices with built in speakers. As well as playing two minutes of music, the app reminds users to spit out after brushing and not rinse, to maintain fluoride concentrations ( Fig. 1 ). The app allows users to set reminders to brush twice a day, use a mouthwash at a different time of the day to toothbrushing ( Figs 2 , 3 , 4 ), when their next dentist/hygienist/therapist appointment is ( Figs 5 , 6 ) and to change their toothbrush every three months. By July 2014 over 1 million reminders had been sent to users of the app. It also contains the information for patients on how to prevent dental caries and periodontal disease given in the Public Health England document Delivering Better Oral Health: an evidence-based toolkit for prevention 24 ( Figs 7 , 8 , 9 , 10 ). The app has links to the NHS Smokefree website, 28 NHS Choices Healthy Eating 29 and Alcohol websites. 30 The app also links to animated videos published on YouTube showing how to effectively use a manual toothbrush, floss and interdental brushes. 
31 Figure 1: Screen shot from the Brush DJ app showing the reminder to spit out and not rinse after toothbrushing. Figure 2: Screen shot showing the reminder to brush in the morning. Figure 3: Screen shot showing the reminder to brush in the evening. Figure 4: Screen shot showing the reminder to use a mouthwash. Figure 5: Screen shot showing a dentist appointment reminder, sent 24 hours before the scheduled appointment. Figure 6: Screen shot showing a hygienist appointment reminder, sent 24 hours before the scheduled appointment. Figure 7: Screen shot showing fluoride toothpaste information for children up to 3 years of age. Figure 8: Screen shot showing fluoride toothpaste information for children aged 3 to 6 years. Figure 9: Screen shot showing fluoride toothpaste information for children over 7 years old and young adults. Figure 10: Screen shot showing fluoride toothpaste information for adults. The Brush DJ app was launched on the Apple App Store 32 in November 2011 and on the Android Market 33 (now called Google Play) in March 2012. In 2013 the app was accepted into the NHS Choices Health Apps Library, 34 which aims to make it simple for people to find safe and trusted apps to help manage their health. All apps submitted to the library are reviewed to make sure that they are clinically safe, relevant to people living in England, comply with data protection laws and use trusted sources of information. Once an app has met these minimum requirements it is checked to see whether it could potentially cause harm to a person's health or condition. 35 By February 2015 the Brush DJ app had been downloaded on to over 155,000 devices in 182 countries. The app is free, with no adverts or in-app purchases, and it can be used with any type of toothbrush.
Before oral health apps can be recommended to patients or adopted as a public health measure, the question of their effectiveness and cost effectiveness in comparison with existing methods of motivating an evidence-based oral health routine needs to be considered. A literature review found that no research currently exists investigating the use of a mobile app to motivate evidence-based oral hygiene behaviour. Objective The objective of this preliminary investigation was to assess user perception of an oral health app to give a basis for future research. Method A cross-sectional qualitative user perception questionnaire to examine the experiences and beliefs of people using the Brush DJ app was created using SurveyMonkey. 36 This was piloted, updated from the feedback and piloted again before final distribution (contact the corresponding author for a link to the questionnaire). The final questionnaire consisted of nine multiple choice questions, with space to give further details if none of the options was suitable, and one open-ended question. The invitation to respond was via a pop-up notification, which appeared when the app had been opened three times. The invitation gave the option not to take part in the survey, to be invited again at a later date (in which case the pop-up invitation appeared again after the app had been opened another three times) or to take part, which took the user to the questionnaire website page. No incentives were offered to complete the questionnaire, which was designed to be completed on a smartphone or tablet and kept short to reduce respondent fatigue. Respondents could skip questions if they preferred to. Ethical approval was not required as the participants in the survey were not randomised to different groups, the study did not demand changing treatment/patient care from accepted standards for any of the participants involved and the findings cannot be generalised.
37 The updated version of the app, with the pop-up invitation to respond to the questionnaire, was released on 24 October 2014 to all iOS users of the app worldwide and on 16 December 2014 to all Android users. This paper reports on responses up to 29 January 2015. Results Due to the nature of reporting by the App Store and Google Play, it is not possible to know how many people saw the invitation to respond to the questionnaire. The possible sample consisted of anyone who opened the app on at least three occasions on a mobile device running iOS or Android software in any of the 182 countries in which the app had been downloaded. One hundred and eighty-nine people responded to the questionnaire. Of these, 183 gave their gender, with the majority, 71.6% (n = 131), being female. The greatest number of respondents were in the 7–12 age group, at 37.1% (n = 69), with 4.8% (n = 9) being under the age of seven ( Table 1 ). Sixty-five percent (n = 120) of respondents had used the app for under one week and 6% (n = 11) had used the app for a year or longer. Table 1 Age of respondents to the Brush DJ app user perception questionnaire. In response to the question 'How many times in an average day do you brush your teeth?', 77.4% (n = 128) of respondents brushed at least twice a day and 20.6% (n = 34) reported brushing once a day. The majority of respondents, 44.8% (n = 56), used the Brush DJ app twice a day when brushing, with 30.3% (n = 50) using it once a day and 11.5% (n = 19) using it more than twice a day. Seventy percent (n = 113) of respondents reported that their teeth felt cleaner since using the app ( Fig. 11 ) and 39.3% (n = 57) reported that their gums bled less, with 39.3% (n = 57) reporting no change in bleeding. Eighty-eight percent (n = 133) of respondents reported that the app motivated them to brush their teeth for longer and 92.3% (n = 144) would recommend the Brush DJ app to their friends and family.
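Percentages of this kind are easy to recompute when the respondent counts are known, bearing in mind that the denominator varies by question because respondents could skip questions. A minimal sketch (illustrative only, using the gender figures quoted above and rounding to one decimal place as in the text):

```python
def pct(n: int, total: int) -> float:
    """Share of respondents as a percentage, rounded to one decimal place."""
    return round(100 * n / total, 1)

# 183 respondents gave their gender; 131 of them were female.
print(f"{pct(131, 183)}% (n = 131) female")  # 71.6% (n = 131) female
```

The same helper reproduces the other quoted shares once each question's own respondent count is known.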
Figure 11: Responses to the question 'Since using the app, do your teeth feel cleaner?' The final question asked of respondents was 'Could you explain to me why the Brush DJ app helps your toothbrushing?' One hundred and thirty-one people responded to this question. These qualitative responses were examined using thematic analysis, from which four broad themes relating to attitudes towards, and responses to, use of the app were identified. These themes were motivation, education, compliance and perceived benefits. Motivation The most frequently reported theme was that the app motivated various aspects of the user's oral hygiene routine. Respondents reported that the app motivated them to brush their teeth and to brush for longer, that they no longer found the process boring and that they in fact looked forward to brushing as they found it fun and enjoyed the music. 'This app is very useful as brushing can be very boring, but who knew brushing could be so fun with a little bit of music. I've just recently started using this app as I never brushed my teeth very often, but with this app it motivates me! It makes that two minutes of brushing fun! Excellent app. Would recommend this to anyone who tends not to brush their teeth as it can be extremely boring!' Female aged 13–17 years old. 'Somehow two minutes seems to fly by for my son – a two minute egg timer dragged for him. He lives to get the applause at the end which is motivational.' Male aged under 7 years old. 'It makes brushing fun for my children. My youngest used to battle with me and now he loves brush time.' Female aged 35–44 years old. 'It helps me because I hated brushing my teeth and I would be too lazy to do it but now it motivates me to brush my teeth and it makes it fun.' Female aged 7–12 years old. Education After motivation, the next most common theme was education.
Respondents reported that the app helped them carry out oral hygiene tasks in the correct order, that using the timer interface showed them to brush all of their teeth not just the front ones and that they found the videos helpful. 'Before I usually only brushed for about 30 seconds because I never knew how long you should brush your teeth. The video was also really helpful.' Female aged 18–24 years old. ' It makes me do things in the right order and stops me from rinsing after brushing .' Female aged 25–34 years old. ' It helps keep me on track to brush for the right amount of time and I spend a lot more time on all the areas of my mouth (especially my molars) as opposed to just the front where it's easiest to brush !' Female aged 25–34 years old. ' I use the flashing spots to help me brush each part of my mouth .' Male aged 7–12 years old. Compliance The degree to which users followed the self-care advice given by the app was reported by a small number of users. ' It reminds me that I need to brush at least twice a day and everyday! It times me too .' Female aged 13–17 years old. ' Keeps me on task. I also am a wanderer while I brush. So it keeps me in the bathroom focused on what I am doing .' Female aged 35–44 years old. ' Because I normally brush for 30 seconds to a minute and Brush DJ forces me to brush for two minutes. It's also fun because I can listen to my playlist .' Female aged 7–12 years old. Perceived benefits Respondents reported a number of perceived benefits to their own or their children's oral health as a result of using the app. ' I needed a timer to help know when I've brushed my two-year-old son's teeth long enough and this a fun and helpful tool. He will enjoy the music and I'll enjoy knowing his teeth are properly brushed .' Male aged under 7 years old. ' When I used to brush my teeth I would just put toothpaste on and scrub for about 30 seconds. After that my teeth still felt disgusting so I asked my dentist what would help. 
She said to time myself. I found this app and ever since then I have glisenning teeth. Thank You Brush DJ!!!! ' Female aged 7–12 years old. 'Because it helps to not get cavities and makes my teeth white.' Male aged 13–17 years old. Discussion The objective of this preliminary investigation was to assess user perception of an oral health app to give a basis for future research. The majority of respondents were female and aged 7–12 years old. It is not possible to say whether this is the age and gender that most use the app or whether, as has been found in other studies, young females are simply more likely to respond to a survey. 38 It is important to note that although the majority of respondents reported brushing their teeth twice a day, 20.6% reported brushing only once a day. The reason for this requires further investigation, the findings of which could be used to develop the app to include effective behaviour change techniques to motivate twice-a-day brushing in this group. It is promising that 70% of respondents reported that their teeth felt cleaner since using the app and 39.3% reported that their gums bled less. It is not possible to determine whether this perceived improvement in oral health corresponds to a reduced risk of caries and periodontal disease, but it does justify further investigation of the app in a clinical trial. Eighty-eight percent of respondents reported that the app motivated them to brush their teeth for longer. A study in the US found that the average length of time people spend brushing their teeth was 46 seconds 39 and research has shown that brushing for two minutes removes 26% more plaque than brushing for 45 seconds. 40 The increased length of brushing reported by users of the app is promising, and the use of music appears to motivate this extended brushing time. Studies have shown that listening to music while exercising motivates longer periods of activity by reducing fatigue and making the experience more pleasurable.
41 Over 90% of users reported that they would recommend the Brush DJ app to their friends and family. While the Friends and Family test has received criticism, 42 the authors felt it to be a useful guide to the appeal of the app for this initial investigation. The 7% of respondents who would not recommend the app were not asked why in this preliminary investigation. This needs investigating in future research to help improve the app. The final question asked of respondents, 'Could you explain to me why the Brush DJ app helps your toothbrushing?', gave the greatest insight into users' feelings and beliefs about the app. Motivation was the most reported theme. This was motivation of the individual user of the app and, for parents, the motivation of their child or children to brush their teeth. Motivation to brush due to the reminders ( Figs 2 , 3 , 4 ) was also given as a reason the app helped toothbrushing. The second major theme to emerge was education. If people are going to change their behaviour, it is important that the new behaviour they are educated to adopt is evidence-based. The helpfulness of the animated videos showing how to carry out oral hygiene tasks was reported. The use of animated videos to convey a health message has been shown to be effective, resulting in long-term knowledge retention. 43 Given that an estimated 5 million videos are viewed on YouTube every 60 seconds, compared with 2.66 million Google searches, video is a useful means of communication and health promotion. 44 Any future research should measure oral health knowledge before using the app and after its use in the short and long term. Health apps have been justifiably criticised for not using recognised behaviour change techniques. 45 The app investigated in this study uses a number of recognised behaviour change techniques 46 to motivate an evidence-based oral health routine.
Instruction on how to perform an activity: animated videos give instructions on how to use a manual toothbrush, floss and use an interdental brush.
Demonstration of the behaviour: animated videos demonstrate how to use a manual toothbrush, floss and use an interdental brush.
Prompts/cues: the app prompts its own use ( Figs 2 , 3 ), prompts users to interdental clean before brushing ( Fig. 12 ), prompts users to spit out and not rinse after brushing, and cues them to move to a different part of the mouth visually, by a vibration and by a sound after each 30 seconds.
Social reward: users can share the name of the song and artist they have listened to via social media.
Nonspecific reward: users are rewarded with a smile and applause when they achieve two minutes of brushing ( Fig. 1 ).
Figure 12: Screen shot from the Brush DJ app reminding users to interdental clean before brushing. Theoretically, the use of an app to raise awareness of evidence-based oral health information has financial advantages over traditional methods such as leaflets, as there are no printing, storage, distribution or disposal costs associated with an app. Apps are instantly scalable and updatable, with the cost of producing one copy of an app being the same as that of producing any multiple, unlike a physical product. Apps can use local reminders generated by the app itself, so they have an advantage over text message reminders, which have been used to motivate better oral health. 47 Text messaging offers only passive engagement and can have a financial cost to the receiver and sender. Text messaging also requires a person to give out their telephone number, which then needs to be stored, raising possible concerns about confidentiality and data protection. The cost effectiveness of an app requires further investigation.
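The cue timing described above is simple to express: a two-minute session divided into four 30-second quadrants, a cue at each boundary and a reward at the end. A minimal sketch of that schedule (not the app's actual code; the quadrant names and messages are illustrative assumptions):

```python
SESSION_SECONDS = 120   # evidence-based two-minute brushing time
QUADRANT_SECONDS = 30   # cue to move to a new part of the mouth every 30 s
QUADRANTS = ["upper right", "upper left", "lower left", "lower right"]

def cue_schedule():
    """Return (time_in_seconds, message) cues for one brushing session."""
    cues = [(0, f"start: brush {QUADRANTS[0]}")]
    for i in range(1, SESSION_SECONDS // QUADRANT_SECONDS):
        cues.append((i * QUADRANT_SECONDS, f"move to {QUADRANTS[i]}"))
    cues.append((SESSION_SECONDS, "done: spit out, don't rinse"))
    return cues

for t, msg in cue_schedule():
    print(f"{t:3d}s  {msg}")
```

In a real app each cue would fire a sound, a vibration and a visual change rather than a print statement.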
If further research provides evidence that apps are as effective as, or more effective than, the medications used to treat and prevent dental disease, it would be reasonable to prescribe an app in the same way fluoride toothpastes are currently prescribed in the UK. A cardiologist in the US has reported prescribing more apps to his patients than medications. 48 In the US, low-income consumers are given a discount on phone services in an initiative unofficially named the 'Obama phone'. 49 This could reduce the barrier to uptake of mobile apps in a socioeconomic group that has a high risk of dental disease. 50 Study limitations For this preliminary investigation it was decided that a questionnaire was the most practical and effective way to gather information. The use of a questionnaire allowed large amounts of information to be gathered in a relatively short period of time in a cost-effective manner. As an anonymous questionnaire, it allowed recruitment from those who had downloaded the app. However, it is recognised that this study has limitations and is preliminary in nature. The option to complete the questionnaire was offered to users of the app after it had been opened at least three times, but it is not possible to know how many users saw the questionnaire and therefore impossible to quantify the response rate. It is recognised that the voluntary nature of participation in this study probably introduced bias into the findings, as people are much more likely to respond to questionnaires concerning subjects about which they feel positive. However, even if only a small proportion of people introduced to the app react to it in the way the respondents describe, the app still appears to have great potential to improve the oral health of those who use it.
The questionnaire permitted respondents to skip questions. This was deliberately allowed, as it is recognised that questionnaires that force the respondent to answer a question before continuing may discourage completion of the questionnaire. 51 It can be argued that questionnaires cannot accurately measure changes in behaviour, as they rely on recall, truthfulness, interpretation of the questions and the thought the responder has put into answering each question. 52 The questionnaire was developed by the app owner and as such was subject to researcher imposition, which might have resulted in questions being developed on the researcher's own assumptions of what is important to the user. Despite its limitations, this study may mark the start of an important new era in oral health promotion, in which traditional methods are replaced by technology-driven, evidence-based, psychologically sound interventions, as it clearly demonstrates the substantial potential that mobile technology has to personalise and promote positive health behaviours. Conclusion A mobile app is a promising tool to motivate an evidence-based oral hygiene routine. Further research is needed to assess the effectiveness and cost effectiveness of an app compared with the methods currently used to motivate an evidence-based oral hygiene routine in the population.
Research published in the British Dental Journal shows that Brush DJ, an app designed to encourage youngsters to adopt and maintain an effective oral health care routine using evidence-based techniques, is effective in its aims. Brush DJ was launched on the Apple App Store at the end of 2011 and in 2013 it was accepted into the NHS Choices Health Apps Library. By February 2015 Brush DJ, which is free with no advertisements or in-app purchases, had been downloaded on more than 197,000 devices in 188 countries. It can be used with any type of toothbrush. The app plays music for two minutes - the optimum time for brushing teeth - taken from a playlist or randomly from the user's own device or cloud. As well as encouraging tooth brushing for two minutes, it also reminds users to spit out after brushing but not to rinse, sets reminders to brush twice a day, use a mouthwash at other non-brushing times of the day, sets alerts for dental appointments and reminders to change toothbrushes once every three months. Fundamentally, it makes brushing teeth fun for youngsters. The British Dental Journal research was carried out by a team including a general dental practitioner and NHS Innovation Accelerator Fellow from York, a consultant orthodontist from Rotherham NHS Foundation Trust and a lead dental researcher, educator and Foundation Dean of the Peninsula Dental School from Plymouth University Peninsula Schools of Medicine and Dentistry. The research showed that 70 per cent of respondents reported their teeth felt cleaner since using the app and 88 per cent said that Brush DJ had motivated them to brush their teeth for longer. Ninety per cent said they would recommend the app to their friends and family. The research team concluded that not only had Brush DJ contributed to greater motivation for young people to care for their teeth more effectively, but it also has huge potential as a way to convey important oral health messages and information. 
Indeed, a recommendation from the study suggests that it would be reasonable to prescribe such an app in the same way in which fluoride toothpastes are currently prescribed in the UK. Ben Underwood, dentist, app developer, NHS Innovation Accelerator Fellow and Honorary University Fellow at Plymouth University Peninsula Schools of Medicine and Dentistry, led the study. He said: "Brush DJ showed positive effect across four main themes - motivation, education, compliance and perceived benefits. The results of our study indicate that apps such as Brush DJ are beneficial to users and open the way for further research to extend their use and effectiveness still further." Professor Elizabeth Kay, Foundation Dean of the Peninsula Dental School from Plymouth University Peninsula Schools of Medicine and Dentistry, was a co-author on the study. She added: "Caries and other dental health conditions are ultimately preventable, and the great thing about an app such as Brush DJ is that we can show that it has a positive effect for children. Bearing in mind that almost 26,000 children aged between five and nine are admitted to hospital each year for dental treatment in the UK, for conditions which are on the whole preventable through better understanding and adoption of good oral health routines, the potential for Brush DJ and apps like it to reduce that number is huge. More research based on the findings from this study will help us to develop the app and investigate methods for its more widespread use."
10.1038/sj.bdj.2015.660
Physics
Study is the first to apply measurement methods in spin caloritronics field
D. Meier, D. Reinhardt, M. van Straaten, C. Klewe, M. Althammer, M. Schreier, S.T.B. Goennenwein, A. Gupta, M. Schmid, C.H. Back, J.-M. Schmalhorst, T. Kuschel, G. Reiss: Longitudinal spin Seebeck effect contribution in transverse spin Seebeck effect experiments in Pt/YIG and Pt/NFO, Nature Communications 6, 9211 (2015), DOI: 10.1038/ncomms9211 T. Kuschel, C. Klewe, J.-M. Schmalhorst, F. Bertram, O. Kuschel, T. Schemme, J. Wollschläger, S. Francoual, J. Strempfer, A. Gupta, M. Meinert, G. Götz, D. Meier, G. Reiss: Static proximity effect in Pt/NiFe2O4 and Pt/Fe bilayers investigated by x-ray resonant magnetic reflectivity, Physical Review Letters 115, 097401 (2015)¸ dx.doi.org/10.1103/PhysRevLett.115.097401 Journal information: Nature Communications , Physical Review Letters
http://dx.doi.org/10.1038/ncomms9211
https://phys.org/news/2015-09-methods-caloritronics-field.html
Abstract The spin Seebeck effect, the generation of a spin current by a temperature gradient, has attracted great attention, but the interplay over a millimetre range along a thin ferromagnetic film as well as unintended side effects which hinder an unambiguous detection have evoked controversial discussions. Here, we investigate the inverse spin Hall voltage of a 10 nm thin Pt strip deposited on the magnetic insulators Y 3 Fe 5 O 12 and NiFe 2 O 4 with a temperature gradient in the film plane. We show characteristics typical of the spin Seebeck effect, although we do not observe the most striking features of the transverse spin Seebeck effect. Instead, we attribute the observed voltages to the longitudinal spin Seebeck effect generated by a contact tip induced parasitic out-of-plane temperature gradient, which depends on material, diameter and temperature of the tip. Introduction Spin caloritronics is an active branch in spintronics 1 , 2 . The interplay between heat, charge and spin transport opens a new area of fascinating issues involving the use of waste heat in electronic devices. One potentially useful effect for heat collecting 3 is the spin Seebeck effect (SSE) 4 which was observed in 2008. It was reported that a spin current perpendicular to an applied temperature gradient can be generated in a ferromagnetic metal (FMM) by the transverse SSE (TSSE) 4 . An adjacent normal metal (NM) converts the spin current via the inverse spin Hall effect (ISHE) 5 into a transverse voltage V , which is antisymmetric with respect to the external magnetic field H ( V ( H )=− V (− H ), see Fig. 1a ). In this geometry, the temperature gradient is typically aligned in-plane ( ∇ T x ) and can also induce a planar Nernst effect (PNE) in FMM with magnetic anisotropy 6 which is due to the anisotropic magnetic thermopower and symmetric with respect to H ( V ( H )= V (− H )). 
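Because the antisymmetric ( V ( H )=− V (− H )) and symmetric ( V ( H )= V (− H )) signatures just defined distinguish SSE/ANE-type from PNE-type contributions, any voltage trace recorded at matched positive and negative fields can be split into the two components. A minimal sketch of that standard decomposition (illustrative only, not the authors' analysis code; the toy voltages are invented):

```python
def decompose(v_pos, v_neg):
    """Split voltages measured at +H and -H (same |H| values, same order)
    into field-symmetric and field-antisymmetric components:
        V_sym(H)  = (V(+H) + V(-H)) / 2   # PNE-like contributions
        V_anti(H) = (V(+H) - V(-H)) / 2   # SSE/ANE-like contributions
    """
    v_sym = [(p + n) / 2 for p, n in zip(v_pos, v_neg)]
    v_anti = [(p - n) / 2 for p, n in zip(v_pos, v_neg)]
    return v_sym, v_anti

# Toy trace in nV: a 100 nV antisymmetric signal on a 10 nV symmetric offset.
v_pos = [110.0, 110.0, 110.0]   # V at +H (saturation)
v_neg = [-90.0, -90.0, -90.0]   # V at -H (saturation)
v_sym, v_anti = decompose(v_pos, v_neg)
print(v_sym, v_anti)   # [10.0, 10.0, 10.0] [100.0, 100.0, 100.0]
```

The decomposition assumes the magnetization reversal itself is antisymmetric in H, as stated for the effects summarized in Fig. 1.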
For pure ∇ T x in a ferromagnetic or ferrimagnetic insulator (FMI) there is no PNE, since there are no free charge carriers available. However, if the NM material is close to the Stoner criterion, a static magnetic proximity effect could induce a so called proximity PNE, which in general is present in spin polarized NM adjacent to a FMM and could also occur in a NM–FMI contact ( Fig. 1a ). Reports of previous experiments with pure ∇ T x can be found for NM/FMM 4 , 6 , 7 , 8 and for NM/FMI 9 . Figure 1: Summary of effects in SSE experiments. An overview of all possible effects is given for NM/FMM and NM/FMI bilayers depending on the symmetry of the transverse voltage V with respect to the external magnetic in-plane field H for antisymmetric magnetization reversal processes. A distinction is made between symmetric effects (blue), antisymmetric effects (yellow) and the effects, which are not possible in the considered system (red). ( a ) Pure in-plane ∇ T along the x direction parallel to the magnetic field vector H for TSSE measurements. In NM/FMM systems a PNE due to magnetic anisotropy in the FMM or a proximity PNE due to the spin polarization in the NM can exist. In NM/FMI a PNE is absent due to the lack of charge carriers in the FMI. ( b ) Pure out-of-plane ∇ T along the z direction perpendicular to the magnetic field vector H for LSSE measurements. In NM/FMM systems the ANE exists. In both NM/FMM and NM/FMI the proximity ANE can appear if the magnetic proximity effect generates a spin polarization at the interface. The LSSE is proved for NM/FMI systems and should be also present in NM/FMM systems. ( c ) In-plane and unintended out-of-plane ∇ T . A combination of all effects from a and b has to be considered. Full size image For the longitudinal SSE (LSSE) 10 the spin current flows directly from the FM into an adjacent NM parallel to the temperature gradient ( Fig. 1b ), which is typically aligned out-of-plane ( ∇ T z ). 
In NM/FMM bilayers the anomalous Nernst effect (ANE) can occur, but is absent in the FMI. In semiconducting materials the ANE contributes to the LSSE as already shown for Pt/NiFe 2 O 4 at room temperature 11 . In addition, if the NM would be spin polarized by the proximity to the FM, an additional proximity ANE could occur 12 ( Fig. 1b ). Reports of previous experiments with pure ∇ T z can be found for NM/FMM 11 and for NM/FMI 10 , 13 , 14 , 15 , 16 , 17 , 18 , 19 . As summarized in Fig. 1c an unintended ∇ T z can hamper the evaluation of TSSE experiments with applied ∇ T x . Heat flow into the surrounding area or through the electrical contacts can induce an additional ANE in NM/FMM bilayers and NM/magnetic semiconductors as discussed in literature 8 , 20 , 21 , 22 , 23 , 24 . But since, in principle, all the effects of an LSSE experiment can be present in the TSSE experiment with unintended ∇ T z , proximity Nernst effects and especially parasitic LSSE can also be present in NM/FMI bilayers as already mentioned recently 25 . This leads to four possible effects which are antisymmetric with respect to the external magnetic field, when the temperature gradient is not controlled very carefully (see Fig. 1c ). The prevalence of multiple thermoelectric effects in conducting ferromagnets poses a significant challenge for identifying any individual contribution to the total voltage signal. Previous reports on this matter 6 , 8 , 21 , 22 , 23 are thus limited in significance and could not unambiguously prove or disprove the existence of a TSSE. Here we report on the investigation of the TSSE on the magnetic insulators Y 3 Fe 5 O 12 (YIG) and NiFe 2 O 4 (NFO). We observe the influence of out-of-plane temperature gradients when an in-plane temperature gradient is applied intentionally by varying the material, diameter and temperature of the electrical contact tips. 
We circumvent all of the above issues and are, for the first time, able to demonstrate that the LSSE alone accounts for the observed voltage signals. Thus, we fill a gap in the controversial discussion about TSSE investigations. Results Sample structure and characterization Magnetic insulators are of increasing importance for spintronics, magnonics and spin caloritronics 9 , 26 , 27 , 28 , 29 , 30 . The phenomena presented in Fig. 1 and the discussions of side effects in TSSE experiments have not been treated systematically in the literature for NM/FMI bilayers so far ( Table 1 ). For the measurements in this work we used YIG films with a thickness of t YIG =180 nm. The films show a coercive field of about 100 Oe and a saturation magnetization of M S =120 kA m −1 . As is common for many thin films, these values deviate slightly from bulk properties. Previous studies 14 , 15 demonstrated, however, that the LSSE is very robust to variations in the magnetic properties. Our results should thus also be applicable to bulk samples and other thin films. YIG films with similar properties are also relevant for further spin transport phenomena, for example, the recently observed spin Hall magnetoresistance 31 , 32 . For reference measurements on another magnetic insulator system we used NFO films with a thickness of t NFO =1 μm and a coercive field of about 160 Oe. On top of the magnetic insulator systems a thin Pt strip with a thickness of t Pt =10 nm was deposited on one side as a spin detector material. Table 1 Publications on TSSE experiments in different materials. Experimental setup Unless mentioned otherwise, the experiments are performed under ambient conditions as described in ref. 8 , mirroring those of the very first TSSE measurements on FMIs by Uchida et al. 9 . The ends of the Pt strip ( l Pt =5 mm) were contacted with a microprobe system with Au and W tips of different diameters.
Furthermore, one Au tip was equipped with a 1.5-kΩ resistor for heating the tip (only thermally connected to the tip, not electrically) to intentionally induce a ∇ T z ( Fig. 2a) 8 . Figure 2b shows the nearly linear relation between the power P needle dissipated in the resistor and the tip temperature T needle as determined with a type-K thermocouple glued to the tip in a calibration measurement. The heated Au needle is labelled T needle =RT+x with room temperature RT=296 K. The voltage V at the Pt strip was measured with a Keithley 2182A nanovoltmeter. An external magnetic field H in x direction was applied in a range of ±600 Oe for YIG and of ±1,000 Oe for NFO films. The voltage values V sat quoted throughout the remainder of the manuscript are the respective averaged voltages levels at saturation. Figure 2: Measurement configuration. ( a ) In-plane temperature gradient ∇ T x applied parallel to the external magnetic field H in the x direction from the magnetic north (N) to the magnetic south (S) for positive magnetic fields H with respect to the measurement of the voltage V at the Pt strip between Hi (+) and Lo (−) of the multimeter. An additional out-of-plane temperature gradient ∇ T z can be induced by using thick contact tips or heating one tip with a voltage at a resistor R attached to the tip (only thermally connected, not electrically). ( b ) The temperature T needle at the tip of the Au needle as a function of the power P needle at the resistor R obtained in a reference measurement. The error bars mirror the systematical error of the temperature and power determination. Full size image Measurements in vacuum Before measurements under ambient conditions were performed we did reference measurements on Pt/YIG under vacuum. The Pt strip was contacted by Au bonding wires with 25 μm in diameter. The results are shown in Fig. 3 for Pt strip on the hot side ( Fig. 3a ) and Pt strip on the cold side of the YIG film ( Fig. 
3b ) for various in-plane temperature gradients ∇ T x . For both sample sides no significant effect can be observed within the measurement sensitivity limit of ±20 nV. The same behaviour was obtained for Pt/NFO, where no significant effect could be observed for the Pt strip on either sample end ( Supplementary Fig. 1 ). Figure 3: Pt/YIG contacted with Au bonding wires in vacuum. H dependence of V measured at the Pt strip on YIG using Au bonding wires with a diameter of 25 μm for various in-plane temperature differences Δ T x performed under vacuum. ( a ) Measurements for Pt strip at the hot side of the YIG film. ( b ) Sample and measurement configuration for the data in a with the in-plane temperature gradient ∇ T x parallel to the external magnetic field H . ( c ) H dependence of V measured at the Pt strip located on the cold side of the YIG film for various in-plane temperature differences Δ T x . ( d ) Sample and measurement configuration for the data in c . Full size image Measurements under ambient conditions Afterwards, the samples were mounted in the setup described in (ref. 8 ) and measured under ambient conditions. First, the ISHE voltage from the Pt/YIG sample was measured for various Δ T x . The Pt strip was on the hot side and was carefully contacted with W tips. Again, the voltage shows no significant variation within the sensitivity limit of about ±40 nV when H is varied ( Fig. 4a ). No Nernst effects are observed due to the insulating magnetic layer and no evidence for an additional ∇ T z can be detected. Therefore, the clamping and heating of the sample in our setup and the contacting with thin W tips results in a pure ∇ T x as already shown in (ref. 8 ). The TSSE is not observable although the Pt strip is located on the hot side of the YIG film and far away from the center where the TSSE should vanish. The comparison with the Pt strip at the cold sample end is shown in Fig. 4b for the thinnest W tips and different temperature gradients ∇ T x . 
For the largest ∇ T x we can observe a noticeable effect given by a difference between the voltages in saturation for positive and negative magnetic fields. Figure 4: Pt/YIG contacted with W tips under ambient conditions. ( a ) H dependence of V measured at the Pt strip on YIG located at the hot side using thin W tips and low contact force (tip contact area of A =0.003 mm 2 ) for various Δ T x . ( b ) Pt strip at the cold side and the same W tips used in a for various Δ T x and using low contact force. ( c ) Pt strip at the hot side and different W tips with various contact areas with a fixed Δ T x =15 K using a high contact force. ( d ) Pt strip at the cold side with a fixed Δ T x =15 K and with the same W tips as in c using a high contact force. ( e ) Sample and measurement configuration for the data in a and c with the in-plane temperature gradient ∇ T x parallel to the external magnetic field H . ( f ) Sample and measurement configuration for the data in b and d . Full size image Contact area dependence In the next step we contacted again and pushed the thin W tips with more pressure into the Pt strip, which increases the effective electrical contact area A (blue curve in Fig. 4c ). Afterwards, the tips were changed by other W tips with a larger contact area. The observed voltages are plotted in Fig. 4c with Pt strip at the hot sample end and Δ T x =15 K. The voltages vary significantly when H is changed. V is antisymmetric with respect to H and the voltage in saturation increases for larger contact areas. In Fig. 4d the same increase of antisymmetric effect is obtained for different W tips with increasing contact area at the cold sample side, but the magnitude of effect is generally smaller compared with the hot side. However, the sign of V in saturation is the same for Pt strip at the hot and the cold side. This behaviour could be verified for Pt/NFO films ( Supplementary Fig. 
2 ) and excludes the TSSE, which would have to show a sign change in V between the hot and the cold side ( Supplementary Note 1 ). In Fig. 5 the magnitudes V sat obtained for all W tips with different contact areas ( Fig. 4 ) are plotted against the contact area A of the tip for the Pt strip on the hot and the cold side at a temperature difference Δ T x =15 K. Furthermore, measurements with Au tips of different contact areas were added, which show the same dependence of V sat on A for both sample sides ( Supplementary Fig. 3 ). Because the actual contact area could not be reproduced exactly, we estimated it as the average of the possible maximum contact area and a sufficiently small area ( Supplementary Fig. 4 ). This leads to relatively large error bars. However, for the Pt strip on the hot and the cold side the sign of V sat is the same. Furthermore, the absolute value of V sat decreases for smaller contact areas of the tips for both materials (W and Au). The effect for the Pt strip on the cold side is generally smaller than on the hot side when the same temperature difference of Δ T x =15 K is applied. We explain this behaviour of V sat by an unintended heat flux through the tips, leading to a vertical temperature gradient ∇ T z and, therefore, to an LSSE-induced spin current into the Pt. Figure 5: Influence of the contact area. V sat as a function of the determined contact area A for various Au and W tips with different diameters for the Pt strip at the hot and the cold side at a constant Δ T x =15 K in Pt/YIG. The deviation of the average contact area from the estimated areas is used as the experimental error, which takes into account the change in contact area when the sample is recontacted or contacted with stronger pressure. The experimental error of V sat was estimated as ∼ ±50 nV, which takes into account the estimated deviation between high and low contact force. Full size image Furthermore, in Fig. 
5 , V sat does not tend to zero when A gets sufficiently small, which can be explained by an additional influence of ∇ T z contributions due to different thermal conductivities of the investigated film and substrate materials. Influence of contact tip heating In the next step we used the thickest Au tips ( Fig. 6 ) with a resistor glued to the tip. The contact area A was about 0.28 mm 2 . It can be seen that the measured voltage is antisymmetric with respect to the magnetic field ( Fig. 6a–d ). Next, we applied a voltage to the Au tip resistor to increase the temperature of the tip and change the out-of-plane heat flow. Figure 6: Pt/YIG contacted with heatable Au tips under ambient conditions. H dependence of V measured at the Pt strip on YIG using thick Au tips and different tip heating voltages inducing an additional ∇ T z for various ∇ T x . ( a ) Pt strip at the cold side for Δ T x =10 K. ( b ) Pt strip at the cold side for Δ T x =30 K. ( c ) Pt strip at the hot side for Δ T x =5 K. ( d ) Pt strip at the hot side for Δ T x =10 K. ( e ) Sample and measurement configuration for the data in a and b with the in-plane temperature gradient ∇ T x parallel to the external magnetic field H and an additional out-of-plane temperature gradient ∇ T z . ( f ) Sample and measurement configuration for the data in c and d . Full size image In Fig. 6a for T needle =RT a small antisymmetric effect of about V sat =−50 nV is obtained when the Pt strip is at the cold side of the YIG film. When the needle is heated to T needle =RT+12 K the ISHE voltage changes its sign to a value of V sat =+95 nV and changes further to V sat =+590 nV for T needle =RT+24 K. The Au needles with larger contact areas compared with thinner W tips or Au bonding wires generate an additional out-of-plane heat flow at the cold side of the sample. This heat flow changes its sign with increasing T needle , which can be detected by the sign reversal of the measured voltage. 
When ∇ T x is increased (Δ T x =30 K in Fig. 6b ) the ISHE voltage at the Pt is V sat =−170 nV for T needle =RT and therefore three times larger than for Δ T x =10 K. The ISHE voltage again increases with increasing T needle and changes sign. For a Pt strip at the hot side V sat without tip heating is larger than at the cold side. For Δ T x =5 K the magnitude is about V sat =−130 nV ( Fig. 6c ) and can be decreased to V sat =−300 nV for Δ T x =10 K ( Fig. 6d ). The sign and the magnitude of V sat can also be controlled by T needle and, therefore, by ∇ T z . When T needle is fixed at RT+31 K, V sat is about +180 nV for Δ T x =5 K and +90 nV for Δ T x =10 K. V sat measured for Pt/YIG at the hot and cold side is plotted as a function of T needle for different Δ T x in Fig. 7 . A non-heated Au needle results in the same sign of V sat for all Δ T x , while | V sat | is smaller on the cold side compared with the hot side. Again, this behaviour can be explained by an unintended heat flux through the Au needles creating a vertical temperature gradient ∇ T z and thereby an LSSE voltage. We note that this interpretation also holds when a rigorous sign check is applied 33 . Figure 7: Influence of contact tip heating. V sat as a function of the Au needle temperature T needle for various in-plane temperature differences Δ T x in Pt/YIG and Pt strip on the hot (red triangles, yellow squares) and on the cold sample side (green circles, blue rhombuses). The experimental error of T needle was estimated in the reference measurement and describes the deviation from the average value over several measurements. The experimental error of V sat describes the deviation between high and low contact forces. Full size image For a heated Au needle V sat increases and crosses zero ( Fig. 7 ). Here the tip heating compensates the out-of-plane heat flux induced by Δ T x ( ∇ T z =0). 
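The compensation point, at which the tip heating cancels the out-of-plane heat flux ( ∇ T z =0) and V sat crosses zero, can be estimated from measured ( T needle , V sat ) pairs. A minimal sketch, using the three V sat values quoted above for Fig. 6a; the piecewise-linear interpolation between neighbouring points is a simplifying assumption:

```python
# Estimate the needle temperature at which V_sat crosses zero, i.e. where
# the tip heating compensates the out-of-plane heat flux (grad T_z = 0).

def compensation_temperature(points):
    """points: (T_needle in K, V_sat in V) pairs sorted by T_needle.
    Returns the linearly interpolated T_needle where V_sat = 0."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if v0 == 0.0:
            return t0
        if v0 * v1 < 0.0:  # sign change between neighbouring points
            return t0 + (t1 - t0) * (-v0) / (v1 - v0)
    raise ValueError("no zero crossing in the measured range")

RT = 296.0  # room temperature in K, as quoted in the text
# V_sat values reported for Fig. 6a (Pt strip on the cold side, dT_x = 10 K)
data = [(RT, -50e-9), (RT + 12, 95e-9), (RT + 24, 590e-9)]
print(f"V_sat crosses zero near T_needle = {compensation_temperature(data):.1f} K")
```

For this data set the interpolation places the compensation at T needle ≈ RT+4 K; the exact value depends on the assumed linearity between the two measured points.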
After the sign change of V sat (and therefore of ∇ T z ), the values increase with a larger (smaller) slope for the cold (hot) side. The temperature difference between the hot sample side and the non-heated, room-temperature Au tip is larger than the difference between the cold sample side, which is closer to room temperature, and the same tip. Consequently, when the Au tip is heated on the hot sample side, more heating power is required to reverse the direction of the out-of-plane temperature gradient than on the cold sample side. Therefore, the slope of the curves in Fig. 7 for the Au tip at the cold sample side is larger than the slope of the curves for the Au tip at the hot sample side. Calculation of the magnon-electron temperature difference Xiao et al. 34 discussed the temperature difference Δ T me between the magnon temperature in the FM and the electron temperature in the NM as the origin of the thermally induced spin current. Δ T me can be inferred from the recorded voltage using the expression given in refs 25 , 34 , in which V a is the magnetic coherence volume, g r is the real part of the spin mixing conductance, γ is the gyromagnetic ratio, k B is the Boltzmann constant, e is the elementary charge, Θ SH is the spin Hall angle, ρ Pt is the resistivity of the sample and λ is the spin diffusion length of the NM material. The two-temperature model has proven successful in relating Δ T me to the phonon temperature, which is accessible in experiments 25 , 35 . We simulate the phonon and magnon temperatures assuming one-dimensional transport in our films and disregard the influence of thermal contact resistance other than the coupling between magnons and electrons. This yields a value Δ T z , the (phonon) temperature drop across the YIG film, for which the experimentally measured V sat is obtained due to the LSSE. We assume ρ Pt =40 μΩm. All material-dependent YIG parameters were taken from (ref. 25 ). 
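Because the expression of refs 25, 34 is linear in Δ T me , converting a measured V sat into temperature differences amounts to multiplying by constant factors. A minimal sketch, with conversion factors back-calculated from the Fig. 6a example values (50 nV ↔ 0.5 μK and 2 mK); treating the relation as strictly proportional is a simplifying assumption, not a statement of the paper:

```python
# Linear conversion from the measured ISHE voltage V_sat to the
# magnon-electron temperature difference dT_me and the (phonon) drop dT_z
# across the YIG film. The model of Xiao et al. is linear in dT_me, so a
# single constant per quantity suffices; the factors below are
# back-calculated from the Fig. 6a example (assumption).

DT_ME_PER_VOLT = 0.5e-6 / 50e-9  # K per V, from dT_me = 0.5 uK at 50 nV
DT_Z_PER_VOLT = 2e-3 / 50e-9     # K per V, from dT_z = 2 mK at 50 nV

def temperature_differences(v_sat):
    """Return (dT_me, dT_z) in kelvin for a measured |V_sat| in volts."""
    return DT_ME_PER_VOLT * abs(v_sat), DT_Z_PER_VOLT * abs(v_sat)

dt_me, dt_z = temperature_differences(300e-9)  # largest V_sat (Fig. 6d)
print(f"dT_me ~ {dt_me * 1e6:.1f} uK, dT_z ~ {dt_z * 1e3:.0f} mK")
```

The proportional estimate reproduces Δ T z =12 mK for 300 nV and gives Δ T me ≈3.0 μK, close to the 3.2 μK obtained from the full one-dimensional simulation.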
We calculated Δ T me for the largest and smallest V sat taken from Fig. 7 at RT (red curves in Fig. 6a,d ) and the corresponding Δ T z . For | V sat |=50 nV ( Fig. 6a ) we obtain Δ T me =0.5 μK and a corresponding Δ T z =2 mK. For | V sat |=300 nV ( Fig. 6d ) the corresponding values are Δ T me =3.2 μK and Δ T z =12 mK. The obtained Δ T z are in the order of a few millikelvins. It is reasonable to assume that such values can be induced by for example, thick contact tips, especially considering that our initial simplifications should lead to an overestimation of Δ T z 25 . These calculations for the TSSE configuration were investigated in detail in (ref. 25 ). It was found that for Δ T x =20 K the obtained Δ T me is well below 1 μK, even at the very edge of the sample where Δ T me is maximized. This further supports the notion that spurious out-of-plane temperature gradients are responsible for the voltages observed in our samples. Measurements on Pt/NFO For Pt/NFO a similar behaviour of V sat could be observed for various combinations of in-plane temperature gradient, Au needle temperature and Pt strip location (hot or cold side) ( Supplementary Fig. 5 , Supplementary Note 2 ). The magnitudes of V sat obtained for Pt/NFO are about an order of magnitude larger when compared with V sat in Pt/YIG (see Supplementary Fig. 2 and Supplementary Fig. 5 ) for similar values Δ T x and T needle . This difference can be explained by different film thicknesses and thermal conductivities of NFO and YIG as well as by different spin mixing conductances 36 , 37 . Discussion For all experiments we performed in vacuum and using thin bonding wires for the electrical contacts, we could not observe any evidence for a TSSE as well as any other transport phenomena shown in Fig. 1 . 
For most experiments performed under ambient conditions and using contact tips, we clearly observe an antisymmetric behaviour of the voltage V with respect to the external magnetic field H , which can be attributed to the LSSE caused by an unintended out-of-plane temperature gradient Δ T z . For all remaining experiments, there is no evidence for any of the transport phenomena given in Fig. 1 . For both investigated sample systems the spin mixing conductance is large enough to observe a thermally driven spin current across the NM/FMI interface, as proven by the results of experiments in the LSSE configuration for Pt/YIG ( Supplementary Fig. 6 ) and Pt/NFO 11 , respectively. In addition to the LSSE, we now discuss other parasitic effects such as the ANE and the proximity ANE, which can be produced by an unintended Δ T z (see Fig. 1 ). We can exclude an ANE for YIG due to the lack of charge carriers. For NFO we observed an ANE which is one order of magnitude smaller at RT than the LSSE 11 . This ANE can be explained by the weak conductance of NFO at RT due to thermal activation energies of a few hundred meV, depending on the preparation technique 11 , 29 . The absence of a proximity ANE in Pt/NFO is ensured by X-ray resonant magnetic reflectivity measurements finding no interface spin polarization within the experimental limit of 0.02 μ B per Pt atom 38 . In the case of Pt/YIG, Geprägs et al. presented X-ray magnetic circular dichroism (XMCD) measurements with no evidence for any spin polarization in Pt 39 , while Lu et al. showed XMCD measurements indicating magnetic moments in Pt on their YIG samples 40 . Future investigations with X-ray resonant magnetic reflectivity can give more insight into this discrepancy. However, Kikkawa et al. 41 showed that a potential proximity ANE contribution in addition to the LSSE is negligibly small. 
This supports our conclusion that the main antisymmetric contribution in our measurements on both Pt/YIG and Pt/NFO is the LSSE, which is driven by an out-of-plane temperature gradient. We do not observe any symmetric contribution for ∇ T x without tip heating. Therefore, PNE and proximity PNE contributions can also be excluded. Nevertheless, we find a small symmetric contribution for strong tip heating as demonstrated in Fig. 6b for Δ T needle =RT+31 K. In the region of H C small peaks are visible under symmetrization of the voltage. This hints at the existence of an additional magnetothermopower effect potentially induced by a temperature gradient ∇ T y along the Pt strip and will be part of future investigations. Recently, Wegrowe et al. 42 used anisotropic heat transport as an interpretation for the measured voltages using in-plane temperature gradients. In their work, they derived the anisotropic field-dependent temperature gradient in FMM and FMI from the Onsager reciprocity relations. Therefore, the thermocouple effect between the FM, the NM and the contacting tips can generate field-dependent voltages if there is a difference in the Seebeck coefficients. In our investigated systems, the Seebeck coefficients are indeed different for FM, Pt and the contact tips. However, since we do not observe a significant field-dependent variation of the ISHE voltage when the samples are bonded or carefully contacted with W tips, any anisotropic field-dependent heat-transport can be excluded as the reason for the observed voltages. In summary, we investigated the relevance of TSSE in Pt/YIG and Pt/NFO systems. We found no significant ISHE voltages upon applying an in-plane temperature gradient and using 25-μm thin Au bonding wires or sharp W tips (0.003-mm 2 contact area) as electric contacts. Increasing the contact area (up to 0.28 mm 2 ), however, induces an additional out-of-plane heat flux accompanied by a LSSE ISHE voltage. 
This antisymmetric effect can be identified as the LSSE, which was verified by controlling the needle temperature or by changing the tip diameter, and with it the contact area, thereby varying the out-of-plane temperature gradient. Taken together, in all our experiments, we thus only observe LSSE-type signatures. These LSSE voltages can be reminiscent of a TSSE-type response if an unintentional (or intentional) ∇ T z is present. This shows that utmost care is required if one is to interpret magnetothermopower effects in terms of the TSSE. Methods Sample fabrication The YIG films were deposited on gadolinium gallium garnet (Gd 3 Ga 5 O 12 ) (111)-oriented single crystal substrates with width and length w = l =5 mm by pulsed laser deposition from a stoichiometric polycrystalline target. The NFO films with a thickness of about t NFO =1 μm were deposited on 10 × 5 mm 2 MgAl 2 O 4 (100)-oriented substrates by direct liquid injection-chemical vapour deposition (DLI-CVD) 11 , 43 . After a vacuum break and cleaning with ethanol in an ultrasonic bath, a thin Pt strip ( t Pt =10 nm, l Pt =5 mm) was deposited by dc magnetron sputtering in an Ar atmosphere of 1.5 × 10 −3 mbar through a 100 μm wide split-mask on one side of the YIG and NFO films. Additional information How to cite this article: Meier, D. et al. Longitudinal spin Seebeck effect contribution in transverse spin Seebeck effect experiments in Pt/YIG and Pt/NFO. Nat. Commun. 6:8211 doi: 10.1038/ncomms9211 (2015).
An experiment at Tohoku University (Japan) in 2008 laid the foundations for research on 'spin caloritronics' – a field that aims to develop more effective and energy-saving data processing in information technology. Since then, many new spincaloric effects have been studied, but the key experiment in Japan could not be replicated. Researchers at Bielefeld University's Faculty of Physics have now found an explanation for this. They have published their findings in the journal Nature Communications. By applying a new measurement method available at major research facilities, they have also extended the experimental repertoire in spin caloritronics. These results can be found in the journal Physical Review Letters. As well as having an electrical charge, electrons possess an intrinsic angular momentum, called the electron spin. This spin generates a magnetic moment and influences the spins of the neighbouring electrons in a solid. In some materials, this can be used to transmit magnetic signals through a solid without the electrons themselves moving. Because this does not involve the transport of an electric charge as in an electric current, and it is the spin that is passed on as information, the phenomenon is called a spin current. 'Because the electrons themselves do not move, passing on the signal generates less heat. That is an advantage over electric current,' says Daniel Meier, a doctoral student in the 'Thin Films & Physics of Nanostructures' research group headed by Professor Dr. Günter Reiss. The scientists at Bielefeld are generating pure spin currents in magnetic materials that do not conduct electric current – so-called magnetic insulators. They are doing this with thin magnetic films made of nickel ferrite or iron garnet. 'Just as you can use electric current to build up an electric voltage in materials that conduct electricity, you can use a spin current to build up a spin voltage in magnetic insulators. 
This is called spin accumulation,' is how Dr. Timo Kuschel describes the parallels between classical electronics and spintronics. Kuschel is responsible for the spin caloritronics team in the research group headed by Günter Reiss. In the reported experiment, the team has now shown that thermal spin currents can be generated through differences in temperature. However, their explanation and their effect differ from what was originally anticipated. 'Nonetheless, the true effect is a very effective means of generating thermal spin currents. That is why we are naturally still very thankful to our Japanese colleagues for their research. It was the first experiment worldwide that got the ball rolling in the field of spin caloritronics,' says Günter Reiss. He is carrying out the experiments in cooperation with Universität Regensburg, the Walther-Meissner-Institute in Garching, and the Center for Materials for Information Technology in Alabama (USA). In addition, the researchers are also working on finding proof of spin accumulations. For this, they are using major research facilities such as DESY (Deutsches Elektronen-Synchrotron) in Hamburg. 'The x-ray radiation generated in these electron storage rings is many times more intense than that from x-ray sources in a university laboratory or a hospital,' says Christoph Klewe, who is doing his dissertation on spin accumulation in bilayers of platinum and magnetic insulators. Previous experiments with x-ray radiation designed to detect spin accumulations did not produce clear results. Therefore, physicists at Bielefeld started searching for an unequivocal measurement method. 'With magnetic X-ray reflectometry, we have found a method that can also provide us with additional information compared to earlier approaches,' emphasizes Timo Kuschel. 'Magnetic X-ray reflectometry is still a new method and has yet to be applied in the field of spin caloritronics.' 
In cooperation with Osnabrück University, the scientists at Bielefeld University have published an article on this in Physical Review Letters. For Timo Kuschel, it is very clear that 'the findings underline the need for further discussions and research in the field of spin caloritronics.'
10.1038/ncomms9211
Medicine
Researchers uncover potential novel therapeutic targets against natural killer/T-cell lymphoma
Jianbiao Zhou et al, Super-enhancer-driven TOX2 mediates oncogenesis in Natural Killer/T Cell Lymphoma, Molecular Cancer (2023). DOI: 10.1186/s12943-023-01767-1
https://dx.doi.org/10.1186/s12943-023-01767-1
https://medicalxpress.com/news/2023-05-uncover-potential-therapeutic-natural-killert-cell.html
Abstract Background Extranodal natural killer/T-cell lymphoma (NKTL) is an aggressive type of non-Hodgkin lymphoma with dismal outcome. A better understanding of disease biology and key oncogenic processes is necessary for the development of targeted therapy. Super-enhancers (SEs) have been shown to drive pivotal oncogenes in various malignancies. However, the landscape of SEs and SE-associated oncogenes remains elusive in NKTL. Methods We used Nano-ChIP-seq of the active enhancer marker histone H3 lysine 27 acetylation (H3K27ac) to profile unique SEs in NKTL primary tumor samples. Integrative analysis of RNA-seq and survival data further pinned down high-value, novel SE oncogenes. We utilized shRNA knockdown, CRISPR-dCas9, luciferase reporter assays and ChIP-PCR to investigate the regulation of transcription factors (TFs) on SE oncogenes. Multi-color immunofluorescence (mIF) staining was performed on an independent cohort of clinical samples. Various functional experiments were performed to evaluate the effects of TOX2 on the malignancy of NKTL in vitro and in vivo. Results The SE landscape was substantially different in NKTL samples in comparison with normal tonsils. Several SEs at key TF genes, including TOX2, TBX21(T-bet), EOMES, RUNX2, and ID2 , were identified. We confirmed that TOX2 was aberrantly overexpressed in NKTL relative to normal NK cells and that high expression of TOX2 was associated with worse survival. Modulation of TOX2 expression by shRNA knockdown and CRISPR-dCas9 interference with SE function impacted the cell proliferation, survival and colony formation ability of NKTL cells. Mechanistically, we found that RUNX3 regulates TOX2 transcription by binding to the active elements of its SE. Silencing TOX2 also impaired tumor formation of NKTL cells in vivo. The metastasis-associated phosphatase PRL-3 was identified and validated as a key downstream effector of TOX2-mediated oncogenesis. 
Conclusions Our integrative SE profiling strategy revealed the landscape of SEs, novel targets and insights into the molecular pathogenesis of NKTL. The RUNX3-TOX2-SE-TOX2-PRL-3 regulatory pathway may represent a hallmark of NKTL biology. Targeting TOX2 could be a valuable therapeutic intervention for NKTL patients and warrants further clinical study. Introduction Extranodal natural killer/T-cell lymphoma (NKTL) is an Epstein-Barr virus (EBV)-associated, aggressive non-Hodgkin lymphoma (NHL) that predominantly localizes to the upper aerodigestive tract but can involve non-nasal sites [ 1 , 2 ]. The incidence of NKTL shows a significant ethnic and geographic predilection, constituting approximately 10% of NHL in Asia and South America, but only 1% in North America and Western Europe [ 1 , 2 ]. Combined chemotherapy-radiotherapy is the standard treatment for NKTL patients, but it is often associated with a high relapse rate and serious side effects [ 3 ]. New drugs, including the anti-PD1 antibody pembrolizumab, have been explored [ 3 , 4 ]. Overall, treatment of NKTL patients remains a challenge in the clinic [ 5 , 6 ]. Novel insight into the molecular mechanisms of this disease would guide the development of effective targeted therapies to improve the survival of NKTL patients, especially for refractory or relapsed cases [ 7 ]. Gene expression profiling studies have reported deregulated signaling pathways underlying the pathogenesis of NKTL, including the Janus Kinase/Signal Transducer and Activator of Transcription (JAK/STAT) pathway, the PDGF pathway, the NOTCH-1 signaling pathway and the NFκB pathway [ 8 , 9 , 10 , 11 ]. Increased expression of BIRC5 ( Survivin ), RUNX3 , AURKA ( Aurora Kinase A ), and EZH2 is found in NKTL tumors relative to normal NK cells, and these genes play important roles in disease progression [ 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 ]. Furthermore, alterations in the epigenetic program have been implicated in the pathogenesis of NKTL [ 20 ]. 
Dysregulated microRNAs (miRNAs), possibly induced by MYC activation, affect target pathways relevant to the oncogenesis of NKTL [ 21 ]. Promoter hypermethylation-mediated silencing of tumor suppressor genes, such as BIM1 , PRDM1 , p73 , DAPK1 , PTPN6 , and PTPRK , has been reported in NKTL patients and cell lines [ 22 , 23 ]. In addition, somatic mutations have been identified in epigenetic regulator genes, including ARID1A , ASXL3 , CREBBP , KMT2D ( MLL2 ), KDM6A , EP300 and TET2 in NKTL cases [ 11 , 24 ]. An enhancer is a region of DNA-regulatory elements that increases the transcription of a gene via long-range chromatin interaction with its promoter [ 25 ]. Super-enhancers (SEs) are defined as large clusters of enhancers lying within 12.5 kb of one another [ 26 ]. SE regions are often characterized by high levels of histone H3 lysine 27 acetylation (H3K27ac) and binding of coactivators and transcription factors (TFs). Common coactivators are mediator complex subunit 1 (MED1), bromodomain containing 4 (BRD4) and EP300 [ 27 , 28 , 29 ]. Aberrant assembly and activation of oncogenic SEs have been reported in various solid tumors and hematological malignancies [ 30 , 31 , 32 ]. However, the landscape of SEs and their functional roles in NKTL remain elusive. In this study, we aim to define the SE landscape of NKTL for a better understanding of its molecular pathogenesis and to identify novel therapeutic targets. Materials and methods NKTL cell line and patient samples A panel of NKTL cell lines including NKYS, NK-92, NK-S1, and HANK-1 was used in this study. Detailed characteristics of these NKTL cell lines and their culture conditions are described in supplemental Table S 1 . Normal NK cells were purchased from Lonza Bioscience (Basel, Switzerland). 
Primary tumor samples (NKTL4, NKTL9, NKTL10) and their matched normal tonsil tissues were collected at the National Cancer Center Singapore with approval from the Institutional Review Board (CIRB Ref: 2018/3084) and informed consent. The clinicopathological characteristics of these 3 patients are presented in supplemental Table S 2 . Super-enhancer peak calling and identification Nano-chromatin immunoprecipitation followed by sequencing (NanoChIP-seq) was performed on 3 primary NKTL tumor samples and 3 normal tonsil tissues (controls), using a polyclonal anti-H3K27ac antibody (Abcam, ab4729). Library construction and sequencing on the Illumina HiSeq 4000 platform were performed by Exploit Technologies, A*Star (Singapore). Conventional ChIP-seq was conducted on HANK1 and NKYS cell lines using the same anti-H3K27ac antibody. ChIP-seq datasets were aligned to the hg19 human genome by Bowtie2 version 2.4.1 with the –no-unal and –sensitive parameters. Regions of H3K27ac ChIP-seq peaks were identified by MACS2 2.2.7.1. Constituent enhancers occurring within 12.5 kb of one another were stitched together, excluding those fully contained within ± 2 kb of a TSS, for SE identification by Rank Ordering of Super Enhancers (ROSE) with the parameters –s 12,500 and –t 2000. Enhancer regions were plotted in increasing order of their H3K27ac ChIP-seq signal. Enhancers above the inflexion point of the curve were defined as SEs. SEs were assigned to genes whose TSS lay within a 50-kb window of the SE. Cell viability assay The CellTiter-Glo® Luminescent Cell Viability Assay (CTG assay, Promega, Madison, WI) was used to determine cell growth and viability as previously described [ 33 ]. Each experiment was performed in triplicate. Lentivirus infection EGFP-tagged scramble (Scr), TOX2-specific and RUNX3-specific shRNAs, and a FLAG-TOX2 overexpression vector were purchased from VectorBuilder (Chicago, IL, USA). The shRNA sequences are listed in supplemental Table S 3 . 
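The stitching and rank-ordering logic of the SE call above can be sketched in a simplified form. This is a toy version under stated assumptions: 1-D coordinates on a single chromosome, hypothetical signal values, and the ROSE "inflexion point" (the slope-1 tangent of the rescaled rank/signal curve) approximated by the largest gap below the diagonal; the actual analysis ran ROSE on genomic intervals.

```python
# Toy sketch of a ROSE-style super-enhancer call (assumptions noted above).

STITCH_BP = 12_500  # stitch constituent enhancers within 12.5 kb

def stitch(enhancers):
    """Merge (start, end, signal) intervals separated by <= STITCH_BP."""
    merged = []
    for start, end, signal in sorted(enhancers):
        if merged and start - merged[-1][1] <= STITCH_BP:
            prev_start, prev_end, prev_signal = merged[-1]
            merged[-1] = (prev_start, max(prev_end, end), prev_signal + signal)
        else:
            merged.append((start, end, signal))
    return merged

def super_enhancers(stitched):
    """Rank stitched regions by signal and keep those above the cutoff."""
    ranked = sorted(stitched, key=lambda region: region[2])
    n, max_signal = len(ranked), ranked[-1][2]
    # Rescale rank (x) and signal (y) to [0, 1]; take the cutoff at the
    # point with the largest distance below the diagonal y = x.
    gaps = [i / (n - 1) - region[2] / max_signal for i, region in enumerate(ranked)]
    threshold = ranked[gaps.index(max(gaps))][2]
    return [region for region in ranked if region[2] > threshold]

# Hypothetical enhancer calls: (start, end, H3K27ac signal)
enhancers = [
    (0, 1_000, 1.0), (5_000, 6_000, 2.0),  # < 12.5 kb apart: stitched
    (100_000, 101_000, 1.0), (200_000, 201_000, 1.0),
    (300_000, 301_000, 3.0), (400_000, 401_000, 3.0),
    (500_000, 501_000, 20.0), (600_000, 601_000, 50.0),
]
stitched = stitch(enhancers)
ses = super_enhancers(stitched)
print(f"{len(stitched)} stitched regions, {len(ses)} super-enhancers")
```

On this toy input the first two enhancers merge into one stitched region, and only the two regions with outlying signal (20 and 50) clear the cutoff, reproducing the hockey-stick separation that defines SEs.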
PRL-3-sh1 and -sh2 were previously described [ 34 ]. More details of lentivirus infection are described in the supplemental Methods. RNA-seq and data analysis Total RNA was extracted using the RNeasy mini kit (Qiagen). RNA-seq was performed on the same 3 pairs of primary NKTL samples and normal tonsils, as well as on NKYS cells treated with scramble shRNA and TOX2-shRNA1 and -shRNA2. The RNA library construction and RNA-sequencing services were provided by Novogene Singapore. Detailed data processing is described in the supplemental Methods. Gene Ontology (GO) and pathway analyses were conducted with the R Bioconductor package GSVA 1.28.0. Immunoblotting assay Cells were lysed with proteinase inhibitor cocktail and phosphatase inhibitor cocktail for 30 min on ice. Immunoblotting was performed using SDS-PAGE followed by protein transfer to a PVDF membrane. Primary antibodies were incubated overnight in a cold room. Secondary antibodies were incubated for 1 h at room temperature. The following antibodies were used: GAPDH (Santa Cruz Biotechnology, sc-47724); β-Actin (1:1000, Cell Signaling Technology, CST#4970); TOX2 (Proteintech, 21,162–1-AP); RUNX3 (SC-376591); Cleaved Caspase-3 (CST#9661); Cleaved Caspase-7 (CST#9491); Cleaved PARP (CST#5625). The PRL-3 antibody (clone 318) was kindly provided by Dr Qi Zeng (IMCB, A*Star, Singapore). Flow cytometric analysis of GFP + cells and cell cycle The analysis of GFP-positive cells was performed on a BD LSR II (Becton Dickinson, USA) flow cytometer, using BD FACSDiva™ software. The cell cycle analyses were carried out using propidium iodide (PI) dye (BD Pharmingen, USA) according to the manufacturer’s instructions. ChIP-PCR ChIP followed by PCR (ChIP-PCR) analysis was performed on NKYS cells to evaluate TOX2 binding in the promoter region of PTP4A3. A rabbit polyclonal antibody to TOX2 (21,162–1-AP; Proteintech, USA) or its respective IgG isotype control was used for ChIP. 
Primers for amplification of the regions with or without the consensus TOX2 binding sequence in the promoter of PTP4A3 are included in supplemental Table S3. Enhancer luciferase assay and site-directed mutagenesis Selected enhancer regions within the TOX2-SE, as well as versions with a mutated RUNX3 binding motif, were cloned into the pGL4.26 vector (primer sequences provided in supplemental Table S3) and their activity was assayed using a dual-luciferase reporter assay (Promega). A region outside the TOX2-SE with low H3K27ac signal was cloned as a negative control (TOX2-eNC). Site-directed mutagenesis of the RUNX3 binding motif was performed using the QuikChange II Site-Directed Mutagenesis Kit (Agilent, Santa Clara, CA) following the manufacturer's instructions. PCR-amplified enhancer candidates were inserted downstream of the firefly luciferase gene at the KpnI and NheI sites of the pGL4.26 vector and co-transfected with a Renilla luciferase-encoding plasmid (pGL4.75) into HEK293T cells in 96-well plates. Luciferase activity (firefly/Renilla) was measured on a GloMax 20/20 Luminometer (Promega) following the manufacturer's protocol. CRISPR/dCas9-KRAB interference To generate the dCas9-KRAB-T2A-mCherry expression vector, the GFP expression cassette of the dCas9-KRAB-T2A-GFP lentiviral vector (Addgene plasmid #71237) was replaced by the mCherry sequence. sgRNAs targeting the TOX2 enhancer region were designed using an online design tool. The sgRNA oligos were synthesized, annealed and cloned into the inducible gRNA vector with GFP reporter FgH1tUTG (Addgene plasmid #70183) after BsmBI digestion and dephosphorylation. The sgRNA sequences are listed in supplemental Table S3. Lentiviruses were produced by co-transfecting the dCas9-KRAB-T2A-mCherry plasmid with pMDLg/pRRE, pRSV-Rev, and pMD2.G into HEK293T cells using X-tremeGENE HP DNA Transfection Reagent (Roche). Lentiviral supernatant was harvested 72 h post-transfection.
NKYS cells were infected with the dCas9-KRAB-mCherry-expressing lentivirus in the presence of polybrene (Millipore), followed by sorting for the mCherry-positive population on a FACSAria flow cytometer (BD Biosciences). The inducible sgRNA lentiviruses were then transduced into NKYS cells with stable dCas9-KRAB-mCherry expression. Doxycycline was added at 1 µg/ml following infection to induce repression of enhancer activity. Control cells were treated with DMSO. Multiplex immunofluorescence analysis of validation study cohort Existing tissue microarray (TMA) samples from patients diagnosed with NKTL between 1992 and 2017 (n = 42) in the Department of Pathology, National University Hospital (NUH) were used for multiplexed immunofluorescence (mIF) as previously described [35]. This study was approved by the National Healthcare Group Domain Specific Review Board B (2009/00212). The clinicopathological characteristics, therapeutic regimens and outcomes of these patients are described in supplementary Table S9. We developed an automated mIF staining protocol for the CD3/PRL-3/TOX2/RUNX3 panel on a Leica Bond Max (SN: M211523), based on a published protocol [35]. CD3 was used as the NKTL tumor cell marker. Traditional DAB immunohistochemical staining was used to optimize the staining parameters for each antibody separately, using the Leica Biosystems Bond Polymer Refine Detection Kit (DS9800). For mIF staining, briefly, the slides were baked and dewaxed, followed by heat-induced epitope retrieval (HIER) at 100 °C in antigen retrieval buffer for 20 min. The slides were then peroxidase-blocked (only for the 1st marker) for 10 min. Antibodies were prepared in DAKO antibody diluent, followed by the polymeric HRP-conjugated secondary antibody (DS9800) and opal fluorophore-conjugated TSA (Akoya Bioscience) at 1:100 dilution, dispensed onto the slides sequentially. Slides were rinsed with 1× washing buffer after each step.
After staining the opal fluorophore for the 1st marker, slides were heated at 100 °C again to strip the primary and secondary antibodies bound to the tissue before labelling the next marker. These steps were repeated until all remaining markers were labelled. The antibody sequence, dilutions, antibody-opal pairs and antigen retrieval conditions for the multiplex staining were as follows: TOX2 (Proteintech, 21162-1-AP, dilution 1:500, 20 min; HIER solution 1, 20 min) – Opal 520; RUNX3 (Santa Cruz, sc-376591, dilution 1:500, 20 min; HIER solution 2, 20 min) – Opal 570; PRL-3 (Proteintech, 15186-1-AP, dilution 1:100, 30 min; HIER solution 1, 20 min) – Opal 540; CD3 (DAKO, A4052, dilution 1:200, 20 min; HIER solution 2, 20 min) – Opal 620. Finally, DAPI (Akoya Biosciences, FP1490) at 1:10 dilution was added as a nuclear counterstain. Slides were imaged using a Vectra 2 Single Slide system (Akoya Biosciences, S/N: VT1447N8001). The staining signal for each marker was unmixed, and the component images were exported, using Inform software (Akoya Biosciences, version 2.4.8). All component images were imported into Visiopharm (Denmark) for image analysis. A purchased APP, "Nuclei Detection, AI (Fluorescence)", was used for cell segmentation. The deep-learning APP for CD3 phenotyping was trained on multiple CD3 positive/negative labelling images covering a variety of CD3 staining intensities. Data were then exported from Visiopharm for analysis. In vivo xenograft model For the human NKTL cell line xenograft model, we used female NOD.Cg-Prkdc scid Il2rg tm1Wjl/SzJ (NSG) mice (6–7 weeks old), purchased from The Jackson Laboratory (Bar Harbor, ME, USA) through InVivos (Singapore). The animals were maintained in specific pathogen-free conditions.
Ten million NK-S1-scramble or NK-S1-TOX2-sh1 cells were mixed with Matrigel (50%) and subcutaneously injected into each side of the loose skin between the shoulder blades and the hind leg of NSG recipient mice (n = 5). The length (L) and width (W) of the tumor were measured with calipers every 2–3 days, and tumor volume (TV) was calculated as TV = (L × W²)/2. At the end of the experiments, mice were euthanized and tumors were dissected. The protocol was reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) in compliance with the guidelines on the care and use of animals for scientific purposes (protocol number: R18-1254). Statistical analyses Prism 9.0 software (GraphPad Software, San Diego, CA, USA) was used to perform statistical analyses and generate graphs. Survival curves were constructed and compared with the Kaplan–Meier method. The Chi-square test was used to analyze categorical correlations. Student's t-test and the Mann–Whitney test were used to analyze parametric and nonparametric variables, respectively. Statistical significance was defined as a p-value of less than 0.05. Results Mapping the super-enhancer landscape in NKTL To understand epigenetic regulation in NKTL, we carried out ChIP-seq using antibodies against H3K27ac on 3 NKTL tumors, 3 normal tonsil control samples and 2 NKTL cell lines (HANK1 and NKYS). H3K27ac is a major active enhancer-associated chromatin modification, and significant clustering of H3K27ac is a distinct feature of SEs. After the initial quantification and alignment of the reads to the human genome, we first performed a global analysis of H3K27ac histone modification (Fig. 1A). To map the SE landscape in NKTL, we performed ROSE analysis, which identifies super-enhancers based upon H3K27ac ChIP-seq data. A total of 1266 SEs were identified in at least 2 of the 3 primary NKTL tumors but not in their normal tonsils (Fig. 1B).
The complete list of SE-associated genes is presented in supplemental Table S4. Fig. 1 The super-enhancer landscape of primary NKTL patient samples, NKTL cell lines and controls. A Enhancer regions in 3 primary NKTL patients. Enhancers were ranked by increasing H3K27ac signal; enhancers above the inflection point of the curve were defined as SEs, and the number of SEs is shown for each sample. Examples of SE-associated genes found in at least two primary NKTL cases are also presented. B Schematic diagram of the selection criteria for high-confidence candidate SE-associated genes. C The final list of 191 SE-associated genes, selected according to the criteria shown in (B) and classified into different functional groups. D NKTL-SE genes were enriched in multiple signaling pathways related to NK cell function. E Track view of the H3K27ac ChIP-seq density profile centered at the TOX2 gene locus in the NKTL cell lines HANK1 and NKYS (top panel), 3 tonsil controls (middle panel) and 3 primary NKTL patient samples (lower panel). Locations of the SE regions are marked by black bars Full size image As SEs often drive high transcriptional output, we hypothesized that combining SE profiles with gene expression data derived from the same samples would allow us to pinpoint novel oncogenes critically involved in NKTL pathogenesis. To this end, we filtered for genes that were associated with super-enhancers and significantly overexpressed in NKTL tumors compared to normal tonsils. RNA-seq revealed overexpression of 1478 genes (false discovery rate < 0.001, log2 fold change ≥ 1; supplemental Table S4). Using this rigorous strategy, we pinned down a list of 191 SE-associated genes overexpressed in NKTL (hereafter, NKTL-SE genes) that warranted further investigation (supplemental Table S4).
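The filtering strategy above — keeping genes that carry a tumor-specific SE in at least 2 of 3 tumors but no tonsil control, and that are overexpressed at FDR < 0.001 with log2 fold change ≥ 1 — amounts to a set intersection. A minimal sketch, with gene names and statistics invented purely for illustration:

```python
# Illustrative sketch of the SE-gene x overexpressed-gene intersection.
# All gene symbols, counts and statistics below are hypothetical.
from collections import Counter

def tumor_specific_se_genes(tumor_se_sets, tonsil_se_sets, min_tumors=2):
    """SE-associated in >= min_tumors tumors and in no tonsil control."""
    counts = Counter(g for s in tumor_se_sets for g in set(s))
    in_tonsil = set().union(*tonsil_se_sets)
    return {g for g, c in counts.items() if c >= min_tumors and g not in in_tonsil}

def overexpressed(de_results, fdr_cut=0.001, lfc_cut=1.0):
    """de_results: {gene: (log2_fold_change, FDR)} from an RNA-seq DE analysis."""
    return {g for g, (lfc, fdr) in de_results.items()
            if fdr < fdr_cut and lfc >= lfc_cut}

tumor_ses = [{"TOX2", "TBX21", "GATA3"}, {"TOX2", "EOMES"}, {"TOX2", "TBX21"}]
tonsil_ses = [{"GATA3"}, set(), set()]
de = {"TOX2": (3.2, 1e-9), "TBX21": (1.5, 1e-4), "EOMES": (0.4, 0.2)}

nktl_se_genes = tumor_specific_se_genes(tumor_ses, tonsil_ses) & overexpressed(de)
```

In this toy example GATA3 is excluded because it also carries an SE in a tonsil control, and EOMES is excluded because it fails the differential-expression cut-off.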
The list of genes could be classified into different functional groups, including Drug resistance, Cell adhesion and migration, Metabolism, Epigenetic regulator, Signaling transduction, Transcription regulator, NK/T cell function, Solute carrier, Histone and Others (Fig. 1C). Signaling pathway analysis revealed that these NKTL-SE genes were highly enriched in key pathways related to NK/T cell function and signaling (Fig. 1D). However, several pathways known to be closely related to particular molecular subtypes of NKTL, such as the JAK-STAT pathway in TCR-negative NKTL and the RAS-MAPK pathway in TCR-positive NKTL, did not rank highly in our pathway analysis [11]. Xiong and colleagues developed a novel algorithm based on quantitative gene expression metrics of NK-cell- and T-cell-associated genes to categorize patients into NK-cell origin and T-cell origin [11]. We sought to categorize the NK/T origin of the patients in our RNA-seq data. Using a two-sample Kolmogorov–Smirnov-based method developed in house [36] and the signatures from Xiong's study [11], we estimated scores for NK-cell or T-cell origin. Consistent with the dot plot shown, all 3 NKTL cases were estimated to originate from NK cells (supplemental Figure S1). Pivotal TFs known to regulate NK cell development and function were highly enriched in the list: TOX2, TBX21 (T-bet), EOMES, RUNX2, GATA3, and ID2. Notably, TOX2 is a member of a small subfamily of proteins (with TOX, TOX3, and TOX4) that share almost identical high-mobility-group (HMG)-box sequences, while TBX21 and EOMES are members of the T-box protein family. We then scrutinized the SE constituents of these 3 genes (TOX2, TBX21 and EOMES) and found that only TOX2 harbored remarkably high SE peaks in all NKTL samples and both NKTL cell lines; in contrast, only background signals were present in normal tonsil tissues (Fig. 1E and supplemental Figure S2).
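The two-sample Kolmogorov–Smirnov statistic underlying the origin-scoring step above can be illustrated in a few lines. This is not the in-house method [36] itself, and the signature genes and expression values below are invented for illustration:

```python
# Illustrative two-sample Kolmogorov-Smirnov statistic: the maximum absolute
# distance between the empirical CDFs of two samples. Here it is used the way
# a signature-scoring method might: comparing the expression of (hypothetical)
# NK-signature genes against background genes within one sample.

def ks_statistic(sample_a, sample_b):
    """Max absolute distance between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a) | set(b))
    def ecdf(xs, t):
        return sum(x <= t for x in xs) / len(xs)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in points)

# Hypothetical log-expression of NK-signature genes vs background genes:
nk_signature_expr = [8.1, 7.9, 9.2, 8.5, 7.7]
background_expr = [2.0, 3.1, 2.5, 8.0, 1.9]
score = ks_statistic(nk_signature_expr, background_expr)  # high -> NK-like
```

In practice such a score would be computed against both the NK-cell and T-cell signatures and the larger score would determine the assigned lineage.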
TOX2 is overexpressed in NKTL and associated with poor survival The presence of the TOX2-SE in all NKTL samples, but not in normal controls, suggests that this gene is specifically and highly activated in NKTL cells. Indeed, TOX2 expression was significantly higher in primary NKTL samples and cell lines compared with normal NK cells (microarray dataset GSE80632 and RNA-seq dataset SRA200820) (Fig. 2A, B). qRT-PCR and Western blot analysis further confirmed the overexpression of TOX2 mRNA and protein in NKTL cell lines relative to normal NK cells (Fig. 2C). Fig. 2 The expression and prognostic value of TOX2 in NKTL. A Expression (log2) level of TOX2 in a collection of normal NK cells, an NKTL cell line and NKTL patient samples derived from a microarray dataset in the Gene Expression Omnibus (GEO) database (accession number: GSE80632). B Volcano plot demonstrating gene expression levels in a collection of normal NK cells and NKTL patient samples derived from an RNA-seq dataset deposited in the Sequence Read Archive (SRA) database under the accession code SRA200820. The y-axis represents the p value (log10). The x-axis indicates the fold change (log2) of genes differentially expressed between normal NK cells (left) and NKTL patient samples (right). TOX2 is labelled. C Quantitative RT-PCR of TOX2 gene expression in 3 normal NK cell samples and the NKTL cell lines NKYS, NK-92, HANK1 and NK-S1 (upper panel). TOX2 expression was normalized to the GAPDH level (internal control) for each sample and is presented as relative fold change (n = 3, mean ± SD). * p < 0.01 for comparison of NKTL cell lines vs. normal NK cells. Western blotting analysis of TOX2 protein in one normal NK cell sample and 4 NKTL cell lines (lower panel). β-actin was used as a loading control. This result is representative of three independent biological replicates. D Utilizing data (GSE90784) from the GEO database, TOX2 expression was categorized into TOX2-High (≥ 50%) and TOX2-Low (< 50%) groups.
Kaplan–Meier survival curves were constructed for NKTL patients based on TOX2 expression levels (TOX2-Low vs TOX2-High). Significance (p) was evaluated by log-rank test. HR: hazard ratio Full size image To establish the clinical significance of TOX2 in NKTL, we conducted survival analysis on our published gene expression dataset (GSE90784). Most importantly, higher TOX2 expression was associated with worse overall survival (log-rank p value: 0.021, hazard ratio: 2.63), demonstrating its prognostic significance (Fig. 2D). Taken together, these data implicate TOX2 in the pathogenesis of NKTL. TOX2 mediates NKTL cell growth, proliferation and colony formation Having demonstrated its overexpression and prognostic value, we proceeded to study TOX2's functional roles in NKTL. Two individual GFP-tagged TOX2-specific shRNAs were transduced into NKYS and HANK1 cells. qRT-PCR and immunoblotting analysis confirmed the decrease in TOX2 mRNA and protein induced by TOX2-sh1 and -sh2 compared to scr-shGFP (control) (Fig. 3A). To assess whether TOX2 knockdown inhibits the growth of NKTL cells, we quantified the percentage of GFP+ cells among TOX2-sh1-transduced NKYS cells at 2-day intervals starting 3 days post-transduction. Because the growth of NKYS and HANK1 cells requires the human cytokine IL-2, we performed two sets of experiments in parallel, with or without IL-2 in the culture medium, from day 3. We observed a marked decrease in the percentage of GFP+ cells compared with scr-shGFP-transduced cells in both settings (Fig. 3B); however, the difference was more pronounced in medium containing IL-2. These results indicate that silencing TOX2 imposes a strong negative selection pressure on NKTL cell growth (Fig. 3B). Cell cycle analysis revealed that inhibition of TOX2 affected cell cycle distribution in NKYS and HANK1 cells. Compared with control samples, TOX2-sh1- and -sh2-treated samples had a significant increase in G0/G1-phase populations (Fig. 3C).
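The Kaplan–Meier estimate behind the survival comparison above (Fig. 2D) can be sketched in a few lines. The study's actual analyses used GraphPad Prism; the follow-up times and event flags below are invented, and tied event times are handled naively:

```python
# Bare-bones Kaplan-Meier estimator: the survival probability drops only at
# observed events; censored subjects simply leave the risk set.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns [(time, survival_probability)] at each observed event time."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, e in sorted(zip(times, events)):
        if e:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # both events and censored subjects leave the risk set
    return curve

# Hypothetical cohort: deaths at t = 2, 5, 8; censoring at t = 3 and 10.
curve = kaplan_meier(times=[2, 3, 5, 8, 10], events=[1, 0, 1, 1, 0])
```

Comparing two such curves (e.g. TOX2-High vs TOX2-Low) is then done with a log-rank test, as in the paper.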
Next, we overexpressed TOX2 in NKYS cells and confirmed that TOX2 mRNA and protein were increased in NKYS cells carrying the FLAG-TOX2 plasmid relative to empty vector (EV) control cells (Fig. 3D). These paired cell lines were then cultured with or without the addition of IL-2, and CTG assays were performed at different time points. There was no significant difference in cell viability between the two lines in the presence of IL-2 (data not shown); however, NKYS-TOX2 cells maintained significantly higher viability than NKYS-EV cells in culture without IL-2 (Fig. 3E). These data suggest that TOX2 confers a growth advantage on NKTL cells in the absence of cytokines. To evaluate the effect of TOX2 on the clonogenicity of NKTL cells, colony growth was determined by CFU assay in NKYS-TOX2 and -EV cells. The number of CFUs was significantly increased in NKYS-TOX2 cells compared with NKYS-EV cells (Fig. 3F, p = 0.021), demonstrating that TOX2 enhances the clonogenic capacity of NKTL cells. Fig. 3 Oncogenic properties of TOX2 in NKTL cells. A NKYS and HANK1 cells were infected with scramble shRNA, TOX2-sh1 or TOX2-sh2 tagged with green fluorescent protein (GFP) for 3 days, then subjected to mRNA and protein extraction. Quantitative RT-PCR (upper panel) and immunoblotting analysis (lower panel) of TOX2 transcript and protein levels in these populations. Three independent experiments were conducted. For qRT-PCR analysis, data were normalized to the GAPDH level (internal control) for each sample and are expressed as fold change vs the scramble control population (mean ± SD). * p < 0.05. For immunoblotting analysis, GAPDH and β-actin were used as loading controls in NKYS and HANK1 cells, respectively. Representative blot images are shown. B Flow cytometric analysis of the percentage of GFP+ cells post-infection of NKYS and HANK1 cells.
The quantification started at day 3 post-infection and continued at 2-day intervals up to day 11. The percentage of GFP+ cells at days 5, 7, 9 and 11 was normalized to day 3. Two sets of cell culture medium, with or without human IL-2 (10 ng/ml), were used. Each data point represents three biological replicates (mean ± SD). * p < 0.05; ** p < 0.01. Representative FACS plots show NKYS cells infected with TOX2-sh1 lentivirus at day 3 and day 11. C Cell cycle analysis of NKYS and HANK1 cells infected with scramble shRNA, TOX2-sh1 or TOX2-sh2 lentivirus. These cell cycle experiments were performed in triplicate and are presented as mean ± SD. * p < 0.05. D Quantitative RT-PCR of TOX2 gene expression in NKYS cells transduced with either empty vector (EV) or the FLAG-TOX2 overexpression vector. Data show mean ± SD of 3 independent experiments. ** p < 0.01 (left panel). Western blot analysis of TOX2 protein levels in EV-NKYS and FLAG-TOX2-NKYS cells. GAPDH was used as loading control (right panel). E Quantification of the percentage of the GFP+ subpopulation among NKYS-EV and NKYS-FLAG-TOX2 cells at 2-day intervals up to day 8. Human IL-2 was removed from the culture medium. This experiment was repeated 3 times. F TOX2 increases colony formation of NKTL cells. Representative images of colony formation captured from NKYS-EV and NKYS-FLAG-TOX2 cells (upper panel). The number of colonies in 10 random fields is shown as mean ± SD (lower panel). These data are from three independent experiments. * p = 0.021 Full size image TOX2 expression is driven by super-enhancer To investigate the correlation between super-enhancer activity and the H3K27ac signals of the identified TOX2-SE, we cloned 3 different enhancer regions and examined their activity in an enhancer reporter assay. The cloned SE regions significantly increased luciferase signal (enhancer activity), whereas a cloned region outside the SE with background H3K27ac signal failed to do so (Fig. 4A).
This finding suggests that the TOX2-SE has regulatory activity. Fig. 4 Functional importance of the TOX2-SE in NKTL cells. A Enhancer activity was identified in a reporter assay for the TOX2-eNC (a low-H3K27ac region outside the TOX2-SE on chr20), TOX2-e1, TOX2-e2 and TOX2-e3 regions, respectively. The position of each region on chr20 is indicated (not to scale). Enhancer activity is expressed as relative fold change of the TOX2-SE regions (-e1, -e2, -e3) vs the control region (-eNC). Three biologically independent assays were performed. Error bars represent SD. ** p < 0.001. B Schematic diagram of the pairs of sgRNAs designed to target 3 valley bases (P1, P2 and P3) on the H3K27ac track of the SE region of TOX2. Two pairs of sgRNAs were used to direct the dCas9-KRAB transcription repression system to target 2 sites in each valley base (T1–2 for P1; T3–4 for P2; T5–6 for P3). C Decreased TOX2 mRNA expression after induction of the sgRNA pair-guided (T1, T3–6) dCas9-KRAB repression system targeting the TOX2-SE region (n = 3 biologically independent samples of NKYS cells). dCas9: stable NKYS-dCas9 cells without sgRNA pairs. Dox: doxycycline. Student's t-test was applied for all statistical comparisons of TOX2 expression in cells + Dox versus -Dox (** p < 0.01). D TOX2, PRL-3, and apoptosis-related proteins were analyzed by Western blot in NKYS-dCas9 cells after transfection with sgRNA pairs, with or without Dox. Detection of β-actin protein was used as an internal loading control. Three independent experiments were conducted and representative blot images are shown. E Cell proliferation assays with NKYS-dCas9 cells transfected with different sgRNA pairs, with or without Dox induction. The number of cells over 9 days was recorded under each condition as indicated. Data from three biological replicates (mean ± SD) were used to construct these growth curves.
** p < 0.001 for the difference between the -Dox and + Dox groups Full size image Next, we assessed whether the TOX2-SE is functional and causative for TOX2 dysregulation in NKTL. To this end, we synthesized 6 pairs of sgRNAs (T1 to T6) targeting the SE peaks spanning the ~ 0.3-Mb genomic region and transduced them into the NKYS line stably expressing dCas9-KRAB (Fig. 4B, supplemental Table S5). The T2 pair was excluded from analysis due to unsuccessful lentiviral packaging. All 5 remaining pairs enabled effective KRAB-dCas9-mediated epigenetic silencing and reduced TOX2 expression at both the mRNA (Fig. 4C) and protein level (Fig. 4D) upon doxycycline (Dox) induction. The inhibition of SE activity in sgRNA-CRISPR/dCas9-transfected cells led to a significant decrease in cell growth (Fig. 4E). Furthermore, the expression of several active apoptosis markers, including cleaved caspase-3, cleaved caspase-7, and cleaved PARP, was increased in Dox-treated cells compared to cells without Dox treatment or without sgRNAs (dCas9 only) (Fig. 4D). These findings support the regulatory activity of the TOX2-SE on the elevated expression of TOX2 and underscore its importance for the downstream functional effects of TOX2. Genetic inhibition of TOX2 reveals key TOX2-regulated oncogenes required for NKTL cell survival To gain insight into the role of TOX2 in the pathogenesis of NKTL, we conducted a transcriptomic analysis of NKYS cells expressing scramble shRNA, TOX2-sh1 or TOX2-sh2. Using a twofold cut-off (FDR < 0.05, p value < 0.05), 65 genes showed decreased expression and 66 genes showed increased expression in both NKYS-TOX2-sh1 and -TOX2-sh2 cells compared to NKYS-scramble shRNA cells (Fig. 5A, supplemental Table S5). In addition to TOX2 itself, the metastatic oncogene PTP4A3 (Protein Tyrosine Phosphatase 4A3, also known as PRL-3), as well as SPP1, ITGB7, SLAMF1 (CD150) and CD244 (SLAMF4), were downregulated in TOX2-sh-treated cells.
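The overlap analysis above — counting a gene as TOX2-regulated only when it changes at least twofold in the same direction with both independent shRNAs — can be sketched as follows. Gene symbols and statistics here are invented placeholders, not values from the study:

```python
# Sketch of the concordance filter for two independent knockdown signatures.
# sh1/sh2 map gene -> (log2 fold change vs scramble, FDR); all values are
# hypothetical. A gene is kept only if it passes the cut-offs in BOTH shRNAs
# with the same direction of change.

def concordant_hits(sh1, sh2, lfc_cut=1.0, fdr_cut=0.05):
    """Return (down, up) gene sets concordant across both shRNAs."""
    def sig(res, sign):
        return {g for g, (lfc, fdr) in res.items()
                if fdr < fdr_cut and sign * lfc >= lfc_cut}
    down = sig(sh1, -1) & sig(sh2, -1)
    up = sig(sh1, +1) & sig(sh2, +1)
    return down, up

sh1 = {"PTP4A3": (-2.1, 1e-4), "SPP1": (-1.6, 0.01),
       "CD244": (-0.5, 0.3), "GENE_UP": (1.8, 0.02)}
sh2 = {"PTP4A3": (-1.8, 3e-3), "SPP1": (-1.2, 0.04),
       "CD244": (-1.4, 0.01), "GENE_UP": (1.1, 0.03)}
down, up = concordant_hits(sh1, sh2)
```

In this toy example CD244 is dropped because it passes the cut-offs with only one of the two shRNAs; requiring concordance guards against off-target effects of any single shRNA.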
GO term analysis revealed that genes involved in immune response and regulation of NK cell activity showed the most significant changes (Fig. 5B). Pathway analysis identified the top five canonical pathways: Allograft rejection, Endosomal/Vacuolar pathway, Proteins with altered expression in cancer immune escape, Immunoregulatory interactions between a lymphoid and a non-lymphoid cell, and MHC1 causes antigen presentation failure (Fig. 5B). We also used the GeneMANIA online program to interrogate the 65 downregulated genes and constructed gene–gene interaction networks [37]. This analysis revealed that TOX2 sits at the top of the network, and a physical-interaction sub-network comprising MHC family members was formed (supplemental Figure S3). It appears that SLAMF1 might play a role in the regulation of this cluster of MHC family genes. It has been reported that SLAMF1 expression is restricted to some hematologic malignancies, including cutaneous T-cell lymphomas, a few types of B-cell non-Hodgkin's lymphoma, chronic lymphocytic leukemia, Hodgkin's lymphoma, and multiple myeloma [38]. Thus, targeting NKTL cells with an anti-SLAMF1 antibody or measles virus (MV) oncolytic therapy is a potentially viable approach, because SLAMF1 serves as a cellular receptor for wild-type as well as vaccine strains of MV [38]. Fig. 5 Genetic inhibition of TOX2 in NKTL cells. A Overlap analysis (left panel) and heatmap (right panel) of genes differentially expressed upon TOX2 knockdown in NKYS cells. Here, an FDR of 0.1 was used as a cutoff. Significant gene expression changes are defined by the DESeq2 algorithm with fold change ≥ 2 and adjusted p < 0.05. Six selected genes, including TOX2, are highlighted on the heatmap. B Gene ontology enrichment analysis (upper panel) and pathway analysis (lower panel) of TOX2-regulated genes revealed by RNA-seq analysis.
C TOX2 occupancy at TOX2 binding sites in the PRL-3 (PTP4A3) promoter was examined by ChIP using an anti-TOX2 antibody, with IgG as negative control. ChIP-qPCR was conducted using primers flanking the TOX2 binding sites in the PRL-3 promoter (P1 and P2). A region without a TOX2 binding site (P3) was used as a control. The occupancy of TOX2 at these sites was calculated as a percentage of the respective input DNA concentration and expressed as relative signal after normalization against the IgG samples (set as 1). Values are shown as mean ± SD of four independent experiments. **, significantly higher (p < 0.01) than the respective IgG samples. n.s., not significant. Negative and positive numbers indicate regions relative to the TSS of PRL-3. D NKYS cells were transfected with scramble shRNA (Scr) or PRL-3-sh1 or -sh2. The efficacy of PRL-3 silencing was measured by qRT-PCR. Data were normalized to the GAPDH level (internal control) for each sample and are expressed as fold change relative to the scramble control population (mean ± SD). * p < 0.05. E Cell viability was assessed by CTG assay every 2 days up to day 8. The percentage of cell viability at days 2, 4, 6 and 8 was compared with day 0 as baseline (100%). Data shown are the average of 3 independent experiments, each done in triplicate. ** p < 0.01, significant difference between the Scr group and the PRL-3-sh groups Full size image To examine the mechanisms whereby TOX2 regulates target genes, we interrogated a publicly available TOX2 ChIP-seq dataset from the neuroblastoma cell line SK-N-SH (ReMap2022, Experiment ID: ENCSR226NRS), searching the 65 downregulated genes for genomic regions overlapping TOX2 binding (ChIP-seq peaks). These analyses identified 23 TOX2-bound genomic sites (supplemental Table S6). TOX2 was enriched at the promoter-transcription start sites (TSS) of the PTP4A3 and LIMCH1 genes, while all other binding occurred at 3'-UTR (untranslated region), intergenic, or intronic sites (supplemental Table S6).
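The percent-of-input quantification described in the legend above is conventionally computed from qPCR Ct values. A minimal sketch, assuming the standard percent-input formula with a hypothetical 1% input fraction and invented Ct values (the paper does not state its input dilution):

```python
# Standard ChIP-qPCR "percent input" quantification (illustrative values):
# percent input = 100 * 2^(adjusted input Ct - IP Ct), where the input Ct is
# adjusted for the fraction of chromatin saved as input (here assumed 1%).
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    adjusted_input_ct = ct_input - math.log2(1 / input_fraction)  # -6.64 cycles for 1%
    return 100 * 2 ** (adjusted_input_ct - ct_ip)

# Hypothetical Ct values for a TOX2 IP and a matched IgG IP at the same amplicon:
tox2_pi = percent_input(ct_ip=27.0, ct_input=25.0)
igg_pi = percent_input(ct_ip=31.0, ct_input=25.0)
relative_occupancy = tox2_pi / igg_pi  # fold enrichment over IgG (set as 1)
```

Note that when TOX2 and IgG IPs share the same input, the input terms cancel in the ratio, so the relative occupancy reduces to 2^(Ct_IgG − Ct_IP).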
Next, to determine whether PTP4A3 is a direct target gene of TOX2 in vivo, we identified two stretches of sequence harboring the consensus TOX2 binding motif (VSSSGVVGCG) in the PTP4A3 promoter (supplemental Table S7). We then conducted qPCR using two pairs of primers covering these TOX2 binding motifs in ChIP DNA extracted from NKYS cells. Our analysis confirmed genomic TOX2 binding within a region spanning approximately -200 bp to + 200 bp relative to the PTP4A3 TSS (Fig. 5C, supplemental Table S7). Importantly, in the CRISPR-interference experiments described above, we observed decreased PRL-3 protein levels in parallel with reduced TOX2 expression upon Dox induction (Fig. 4D). PRL-3 was chosen for further functional study because of its widely reported oncogenic role in the literature, whereas the function of LIMCH1 appears inconsistent across different types of cancer. NKYS cells were therefore stably transduced with lentivirus-mediated shRNA targeting PRL-3 (-sh1, -sh2) or scramble shRNA (Scr). The knockdown effect of PRL-3-sh1 and -sh2 was confirmed by qRT-PCR analysis (p < 0.05, Fig. 5D). Compared with the NKYS-Scr control cells, NKYS-PRL-3-sh1 and -sh2 cells exhibited significantly decreased viability (p < 0.01; Fig. 5E). These results indicate that PRL-3 is regulated by TOX2 and plays an important role in TOX2-mediated oncogenesis in NKTL cells. Taken together, these data reveal TOX2-regulated pathways and networks in which PRL-3 is a key downstream oncogene. RUNX3 binds the TOX2-SE and promotes TOX2 gene transcription Runt-related transcription factor (RUNX) proteins, including RUNX1, RUNX2 and RUNX3, belong to a transcription factor family sharing conserved DNA-binding sequences, PPPYP (the RUNX domain) [39]. RUNX1 and RUNX3 are important for hematopoietic cell differentiation, while RUNX2 is essential for osteogenesis. RUNX3 also regulates the growth of gastric epithelial cells [40].
During latent infection by EBV, RUNX3 is a direct target of the viral transcription factor EBNA2, and the induced RUNX3 protein binds to the conserved RUNX binding site near the TSS of the RUNX1 P1 promoter, leading to repression of RUNX1. Expression of RUNX3 and repression of RUNX1 are required for efficient proliferation of B cells immortalized by EBV [41, 42]. Indeed, we confirmed that RUNX3 expression is significantly higher in NKTL cell lines and NKTL patient tumor samples compared to normal NK cells (p = 2.8E-05 and 1.1E-04, respectively). In contrast, RUNX1 expression is lower in NKTL cell lines relative to normal NK cells (p = 0.036), but its level is not statistically different between NKTL patient tumor samples and normal NK cells (p = 0.099) (supplementary Figure S4). Taken together, these data imply that RUNX3, but not RUNX1, is potentially relevant in NKTL disease. Furthermore, we previously reported that RUNX3 is overexpressed in NKTL with functional oncogenic properties [17]. These rationales led us to further characterize RUNX3 in NKTL. We performed correlation analysis on a publicly available GEP dataset (GSE90784) and found that RUNX3 expression was significantly correlated with TOX2 (Fig. 6A, Pearson's R = 0.64; Pearson's p value = 8.39E-09). To study whether RUNX3 could regulate TOX2 transcription, we next knocked down RUNX3 using two independent shRNAs. A significant decrease in TOX2 mRNA and protein expression was observed in both NKYS and HANK1 cells (Fig. 6B, C), implying that TOX2 could be a downstream target gene of RUNX3. Importantly, inhibition of cell growth was confirmed in both cell lines infected with RUNX3-shRNA lentiviral particles (Fig. 6D). To confirm the specificity of TOX2 as a target of RUNX3, we co-transduced NKYS cells with RUNX3-sh1 and either the FLAG-EV or FLAG-TOX2 construct.
NKYS cells depleted of RUNX3 with FLAG-EV had a cell proliferation rate that was reduced by up to 55%, and the effect of this knockdown was completely reversed by ectopic expression of FLAG-TOX2 (supplemental Figure S5). Fig. 6 RUNX3 binds the SE and activates the expression of TOX2. A Correlation between RUNX3 and TOX2 expression in NKTL patients from the GEP dataset GSE90784. A significant positive correlation was determined (Pearson's p value = 8.39E-09, R = 0.64). B, C The mRNA (B) and protein (C) levels of RUNX3 and TOX2 were detected by qRT-PCR and Western blot analysis upon transduction with two different RUNX3-shRNAs (RUNX3-sh1, -sh2) or the scramble shRNA (Scr) in NKYS and HANK1 cells. GAPDH was measured for data normalization (B) and β-actin was used as the loading control (C). n.s.: non-specific band produced by the anti-RUNX3 antibody (A-3 clone, sc-376591) in addition to the specific bands at 48 and 46 kD. All results are representative of three independent biological replicates. D Relative cell growth was measured in NKYS and HANK1 cells transduced with RUNX3-shRNAs or Scr-shRNA. For each condition, cell numbers were counted at days 2, 4, and 6, then converted to fold change relative to the starting number at day 0. The same number of cells was seeded at day 0 and comparisons were made at the indicated time points as relative fold changes of cells transduced with RUNX3-shRNA versus Scr. Three biologically independent experiments were performed (mean ± SD). * p < 0.05, ** p < 0.01, *** p < 0.001. E The RUNX3 binding sites locate within the TOX2-SE locus. F ChIP-PCR confirmed the interaction between RUNX3 and the SE region of TOX2 in NKYS and HANK1 cells. Data are expressed as fold change of RUNX3 antibody-IP vs IgG control-IP. Data are representative of 3 independent IPs. Error bars indicate SD. ** p < 0.01, *** p < 0.001 by two-sample, two-tailed t-test compared with the control.
G The indicated vectors were transiently transfected into 293T cells, and luciferase activity was measured using a Dual-Luciferase system. Firefly luciferase activity was normalized to co-transfected Renilla luciferase and calculated as relative fold change to the pGL4.26 empty vector. Data shown represent means ± SD of three independent experiments. ** p < 0.01, compared with each RUNX3-WT group (E1, E2, and E3), respectively. WT: wild type; MUT: mutant Full size image We further investigated whether this positive correlation is due to RUNX3 binding to the SE of the TOX2 gene. By analyzing ChIP-seq data for motif discovery via Factorbook, developed by the ENCODE consortium [ 43 ], we located several RUNX3 binding sites, including E1, E2, and E3, within the TOX2-SE region (Fig. 6E and supplemental Table S8). Notably, ChIP-qPCR confirmed that the ChIP enrichment signal of RUNX3 was specific to the SE region of TOX2 (Fig. 6F and supplemental Table S8). We hypothesized that the consensus recognition motif ACCACA is essential to TOX2-SE activity. Mutations were introduced into these motifs in pGL4.26 constructs containing E1, E2, and E3 (Fig. 6G). Destroying the RUNX binding site in these TOX2-SE regions reduced luciferase activity from 9- to 13-fold above empty vector to 2- to 4-fold (Fig. 6G). These results indicate that the binding motifs are required for optimal function of the TOX2-SE. Collectively, our results show the SE region of TOX2 is bound by RUNX3, providing a mechanism by which SE-driven TOX2 activation depends, at least partially, on the oncogenic transcription factor RUNX3 in NKTL. Confirmation of correlations among TOX2/PRL-3/RUNX3 and their prognostic values in an independent study cohort To support the above findings, we used a multiplex immunofluorescence (mIF) approach to further study the association of TOX2 protein expression with PRL-3 and RUNX3 in CD3 + NKTL tumor cells in an independent study cohort (supplementary Table S9).
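As an aside on the reporter assay in Fig. 6G: the dual-luciferase normalization (firefly normalized to co-transfected Renilla, then expressed as fold change over the pGL4.26 empty vector) reduces to a ratio of ratios. A sketch with hypothetical luminescence readings:

```python
def relative_luciferase(firefly, renilla, firefly_ev, renilla_ev):
    # Normalize firefly signal to the co-transfected Renilla control,
    # then express as fold change over the empty-vector condition.
    return (firefly / renilla) / (firefly_ev / renilla_ev)

# Hypothetical raw luminescence readings (arbitrary units)
wt_fold = relative_luciferase(52000, 4000, 1000, 1000)   # intact RUNX motif
mut_fold = relative_luciferase(12000, 4000, 1000, 1000)  # mutated RUNX motif
# wt_fold == 13.0 and mut_fold == 3.0, mirroring the reported drop from
# roughly 9-13-fold to 2-4-fold upon motif mutation
```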
Representative images of TOX2, PRL-3 and RUNX3 expression markers in CD3 + tumors are shown (Fig. 7A). Our analysis showed that the mean intensity of TOX2 expression was significantly correlated with the mean intensity of PRL-3 expression (Fig. 7B, Pearson's R = 0.65; p < 0.001) and RUNX3 (Fig. 7C, Pearson's R = 0.50; p = 0.001). Kaplan–Meier analysis demonstrated that patients with higher TOX2 expression (TOX2-High, ≥ median expression) had shorter overall survival than patients with lower TOX2 expression (TOX2-Low, < median expression) ( p = 0.0284, HR = 9.11) (Fig. 7D). Similarly, patients with higher PRL-3 expression (PRL-3-High, ≥ median expression) had worse overall survival than patients with lower PRL-3 expression (PRL-3-Low, < median expression) ( p = 0.040, HR = 2.49) (Fig. 7E). Overall, our mIF data in the validation cohort confirmed the correlation of TOX2 expression with PRL-3 and RUNX3, and high levels of TOX2 and PRL-3 expression were associated with poor outcome in an independent set of NKTL patient samples. Fig. 7 Multiplex immunofluorescence (mIF) validation of TOX2, RUNX3 and PRL-3 expression in an independent cohort of clinical samples from 42 NKTL patients (NUH). A Representative images of protein expression of CD3, RUNX3, TOX2 and PRL-3 in NKTL patient samples using the mIF method. Left columns show protein expression of CD3 (membrane, magenta), PRL-3 (cytoplasm, red), RUNX3 (nuclear, cyan), and TOX2 (nuclear, green) in NKTL with multiplexed immunofluorescence staining. Right columns show the corresponding image analysis masks. Double-positive cells are in white; single-positive cells are marked in the corresponding immunofluorescence staining color; negative cells are in blue. The scale bars indicate 50 µm. B Correlation between TOX2 expression and PRL-3 expression in NKTL patients ( n = 42) was determined from the mean staining intensity quantified with the Visiopharm program.
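The TOX2-High/TOX2-Low stratification used for the Kaplan–Meier comparisons (Fig. 7D, E) is a median split on mean staining intensity. A pure-Python sketch with hypothetical per-patient intensities:

```python
def median(values):
    # Sample median: middle value, or mean of the two middle values
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def median_split(intensities):
    # High group: >= median expression; Low group: < median expression,
    # matching the cutoffs stated in the figure legend
    m = median(intensities)
    high = [v for v in intensities if v >= m]
    low = [v for v in intensities if v < m]
    return high, low

# Hypothetical per-patient mean TOX2 staining intensities
high, low = median_split([0.8, 1.2, 2.5, 3.1, 0.6, 2.9])
```

The log-rank test and hazard ratios reported for the two groups would then come from a survival library such as lifelines; they are not reimplemented here.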
A significant positive correlation was determined by Pearson's p < 0.001, and R = 0.65. C Correlation between TOX2 expression and RUNX3 expression in NKTL patients ( n = 42) was determined from the mean staining intensity quantified with the Visiopharm program. A significant positive correlation was determined by Pearson's p = 0.001, and R = 0.50. D Kaplan–Meier analysis of overall survival between patients ( n = 30) with higher TOX2 expression (TOX2-High, ≥ median expression) and lower TOX2 expression (TOX2-Low, < median expression). E Kaplan–Meier analysis of overall survival between patients ( n = 30) with higher PRL-3 expression (PRL-3-High, ≥ median expression) and lower PRL-3 expression (PRL-3-Low, < median expression). In D and E, statistical significance ( p ) was evaluated by log-rank test and p < 0.05 was considered significant. HR: hazard ratio Full size image Silencing TOX2 impairs tumorigenicity in vivo To determine the tumorigenic role of TOX2 in vivo, we subcutaneously injected the paired NK-S1-scramble and NK-S1-TOX2-sh1 cells into one flank of NSG mice ( n = 5). NK-S1-scramble cells formed a palpable tumor mass at day 20 post inoculation, which progressed rapidly to 1880 ± 287 mm 3 by day 28 after cell inoculation in the immunodeficient mice. Strikingly, the tumors formed by NK-S1-TOX2-sh1 cells were significantly smaller (710.7 ± 232 mm 3 ) than the NK-S1-scramble tumors (Fig. 8A, p < 0.001). Images of the dissected tumors are shown in Fig. 8B. Consistently, we detected a significant reduction in NK-S1-TOX2-sh1 tumor weights compared to NK-S1-scramble tumors (Fig. 8C, p < 0.001). Therefore, these data suggest that TOX2 confers an important oncogenic function in NKTL cells in vivo. Fig. 8 Mouse xenograft models of NK-S1-scramble and NK-S1-TOX2-sh1 cells. A Tumor volume was measured by caliper every 2–3 days. Tumor growth curves were constructed from the average tumor volume of each group ± SD (mm 3 ).
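Caliper measurements like those in panel A are commonly converted to volume with the modified ellipsoid formula V = length × width² / 2; this particular formula is an assumption for illustration, since the paper does not state which formula was used. The reported group means also give the percent size reduction directly:

```python
def tumor_volume_mm3(length_mm, width_mm):
    # Modified ellipsoid approximation often used for caliper data
    # (assumed convention; not explicitly stated in the paper)
    return length_mm * width_mm ** 2 / 2

# Hypothetical caliper reads for a single tumor
vol = tumor_volume_mm3(length_mm=18.0, width_mm=14.0)  # 1764.0 mm^3

# Percent reduction between the reported group means at day 28
reduction = (1880 - 710.7) / 1880 * 100  # about 62% smaller TOX2-sh1 tumors
```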
B Mice were sacrificed, and images of the xenograft tumors were captured after dissection. Scale bar, 1 cm. C Tumor weights of NK-S1 xenografts in the scramble (control) versus TOX2-sh1 group. N = 5. ** p < 0.01; *** p < 0.001; **** p < 0.0001. D Schematic representation of the molecular mechanism involved in TOX2-SE-driven oncogenesis in NKTL Full size image Discussion Overall, we describe the aberrant SE landscape and transcriptional program in NKTL patient samples and cell lines. To our knowledge, this is the first study to provide a comprehensive view of the changes in SE profiling and gene expression in NKTL. The analysis reveals novel insights into the pathogenesis of NKTL and uncovers TOX2 as a critical SE-associated oncogene and a potential therapeutic target. As in many other types of cancer, the oncogenic transformation of NKTL cells relies on dysregulation of a core set of TFs. Consistent with this notion, we identified a list of SE-associated TFs that are important in NK/T cell functions, suggesting that SE establishment plays a key role in NKTL biology. The high mobility group box (HMG-box) superfamily comprises non-histone proteins that regulate DNA-dependent processes by changing chromatin structure [ 44 ]. The thymocyte selection-associated HMG box (TOX) subfamily of HMG-box transcription factors consists of four members: TOX, TOX2, TOX3, and TOX4 [ 45 ]. Among them, oncogenic properties of TOX have been reported in T-cell acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) [ 46 , 47 ]. In contrast, the role of TOX2 in hematological malignancies and solid tumors has not been established. TOX2 regulates the development of NK and follicular helper (Tfh) cells through TBX21 and BCL6, respectively [ 48 , 49 ]. Here, we identified TOX2 as a novel NKTL-SE oncogene and used this SE gene as an example for further study.
We provided compelling evidence demonstrating the oncogenic function of TOX2 and dissected the molecular mechanism by which SEs activate oncogenes in NKTL. We found that TOX2 was not only overexpressed in NKTL primary tumors and cell lines, but also negatively associated with patient survival. We functionally characterized the impact of the TOX2 gene on NKTL cell growth, cell cycle, apoptosis and colony formation. Notably, silencing TOX2 decreased tumor size in vivo. Several downstream target genes revealed by RNA-seq analysis of TOX2-knockdown cells may account for these biological consequences. PRL-3 belongs to the phosphatase of regenerative liver (PRL) family (PRL-1, -2, -3) [ 50 ]. A number of studies from our group and others reported that PRL-3 is widely overexpressed in a majority of solid tumors and hematological malignancies [ 51 , 52 , 53 , 54 , 55 , 56 ]. PRL-3 has been characterized as a pro-metastatic and poor-prognosis factor [ 50 , 57 ]. TOX2 is enriched at the promoter-TSS region of the PRL-3 gene. In addition, a number of other downregulated genes play important roles in cancer cell proliferation, disease progression, poor prognosis and resistance to therapies. A large body of evidence shows that high LGALS3BP expression in tissues and serum is associated with unfavourable clinical outcomes in a wide variety of malignancies, including breast, lung, ovarian, pancreatic, prostatic, liver and gastric cancers and melanoma [ 58 ]. Adhesion to LGALS3BP has been documented as a mechanism of drug resistance in lymphoma, lung cancer and ovarian cancer [ 59 , 60 , 61 ]. SPP1 (osteopontin, OPN) has diverse roles in the regulation of immune responses, anti-apoptosis, cellular viability, and NK cell development and function [ 62 ]. Absence of OPN in the bone marrow niche, as well as deficiency of intracellular OPN, leads to a significantly decreased NK population [ 63 ].
NK cells with deficient expression of OPN display defective responses to IL-15 and diminished responses to metastatic tumors [ 64 ]. OPN has been widely implicated in cancer invasion and metastasis, poor prognosis and resistance to radiation and chemotherapy, by promoting cancer stem cell-like properties and binding to CD44 or integrin receptors [ 65 , 66 , 67 ]. Other important genes in this list downregulated by TOX2-shRNA, such as ITGB7 [ 68 ], SLAMF1 [ 38 ], CD244 [ 69 ], DPYSL3 [ 70 ], KRT80 and KRT7 [ 71 , 72 ], have been implicated in cancer progression or drug resistance. Taken together, the genes affected by TOX2 elimination play pivotal roles in drug resistance, cancer progression, metastasis, and worse clinical outcomes. Collectively, they drive the development of NKTL and contribute to therapy resistance and disease progression in patients with NKTL. Using the CRISPR/dCas9 interference tool, sgRNAs targeting five different constituent sites on the SEs of TOX2 significantly reduced TOX2 transcript and protein levels. Consequently, we observed attenuated cell proliferation and increased apoptosis in NKTL cells. Our findings suggest TOX2 is a novel SE-controlled oncogene in human NKTL. Importantly, we uncovered a positive correlation between TOX2 and RUNX3 mRNA and protein expression in NKTL patients. EBV contributes to several types of human cancers, including NKTL. Earlier studies demonstrated that EBV infection induces RUNX3 expression via EBNA2 [ 42 , 73 ]. Recently, Zhou and colleagues delineated the landscape of EBV-activated super-enhancers (EBV-SEs) for the first time [ 74 ]. Interestingly, EBNA2-SEs were found to be localized near the RUNX3 gene in EBV-transformed lymphoblastoid cell lines (LCLs) [ 74 ]. Subsequently, two independent studies confirmed that SEs for RUNX3 are required for cell proliferation in EBV-infected B cells [ 75 , 76 ].
In this study, we demonstrate, for the first time, that RUNX3 binds to the TOX2-SE and drives TOX2 expression in NKTL tumors. NKTL is an EBV-associated cancer. In addition, consensus sequences of RUNX3 are enriched at the binding sites in the SE region of RCAN1.4 in breast cancer [ 77 ]. Based on these findings in EBV-infected B cells, we therefore propose that EBNA2 might induce RUNX3 expression in EBV-infected NKTL cells. Consequently, overexpressed RUNX3 increases TOX2 transcription through binding to the TOX2-SE. Furthermore, among the 65 genes whose genomic regions overlap with TOX2 binding (Supplemental Table S6), two potential TOX2 binding sites were identified in intron 3 of the TOX2 gene (NM_001098797). These data imply that TOX2 could create a positive regulatory feedback loop that establishes expression of TOX2 in NKTL. RUNX3 has been implicated as a tumor suppressor or oncogene in different types of cancer [ 78 ]. We and others demonstrated that RUNX3 is oncogenic and its overexpression is correlated with poor prognosis and drug resistance in NKTL and anaplastic large cell lymphoma (ALCL) [ 17 , 79 , 80 ]. Interestingly, RUNX3 expression is also regulated by super-enhancers in EBV-positive malignant B cells [ 76 ]. In this study, RUNX3 is recruited and bound to the SE of TOX2 , further driving expression of this SE-related oncogene in cooperation with the mediator complex and other TF co-activators. The RUNX3-TOX2-SE-TOX2-PRL-3 regulatory pathway may represent a hallmark of NKTL biology, which could be therapeutically exploited (Fig. 8D). BET (bromodomain and extra-terminal domain) proteins and cyclin-dependent kinases (CDKs) are key components of super-enhancer complexes. A number of BET inhibitors and CDK inhibitors are being tested in different phases of clinical trials in hematologic malignancies [ 30 ]. Our study provides a rationale for evaluating the clinical efficacy of these inhibitors in NKTL patients.
Although a direct TOX2 inhibitor is not available, developing proteolysis-targeting chimera (PROTAC) molecules targeting TOX2 is a promising strategy, supported by the fact that two PROTAC degraders (ARV-110 and ARV-471) have progressed into phase II clinical trials [ 81 ]. PRL3-zumab, a first-in-class humanized antibody drug against the PRL-3 oncoprotein, has been approved for Phase 2 clinical trials in Singapore, the US, and China to treat all solid tumors [ 82 , 83 ]. Therefore, examining the clinical utility of PRL3-zumab against NKTL is timely. In conclusion, we describe, for the first time, the SE landscape in NKTL cells. We use the TOX2-SE as an example to demonstrate that the strategy used in the current study for discovering key SE-associated genes is a useful tool for uncovering novel, cancer-unique oncogenes. The changes in SE-dependent regulatory networks such as RUNX3-TOX2-SE-TOX2-PRL-3 identified in this study offer valuable opportunities for therapeutically targeting NKTL. Availability of data and materials The datasets supporting the conclusions of this article are available in the GEO repository. All H3K27ac ChIP-Seq data were deposited in the GEO database (Accession number: GSE190925). All RNA-Seq data were deposited in the GEO database (Accession number: GSE189632). Abbreviations SE: Super enhancer NKTL: Natural killer/T cell lymphoma ALCL: Anaplastic large cell lymphoma EBV: Epstein-Barr virus ALL: Acute lymphoblastic leukemia AML: Acute myeloid leukemia TSS: Transcription start site TF: Transcriptional factor TOX: Thymocyte selection-associated HMG box HMG-box: High mobility group box PRL-3: Phosphatase of regenerative liver 3 ChIP: Chromatin (Ch) immunoprecipitation (IP) shRNA: Short hairpin RNA CRISPR: Clustered Regularly Interspaced Short Palindromic Repeats Cas9: CRISPR-associated protein 9 dCas9: Cas9 endonuclease dead sgRNA: Single guide RNA GFP: Green fluorescent protein
A team of researchers from the Cancer Science Institute of Singapore (CSI Singapore) at the National University of Singapore (NUS) has discovered that a transcription factor, TOX2, is aberrantly increased in patients with natural killer/T-cell lymphoma (NKTL). The increased TOX2 level leads to the growth and spread of NKTL, as well as the overproduction of PRL-3, an oncogenic phosphatase that is a known key player in the survival and metastasis of several other types of cancer. This breakthrough discovery presents a potential novel therapeutic target to treat NKTL. NKTL is an Epstein-Barr virus (EBV)-associated, aggressive non-Hodgkin lymphoma (NHL) with very poor treatment outcomes in the advanced stages. It is prevalent in Asia and Latin America but rare in Europe and North America. Combined radiation therapy and chemotherapy is the consensus standard therapy for NKTL patients; however, it is often associated with a high relapse rate and serious side effects. Thus, improved knowledge of the molecular mechanisms leading to NKTL progression, as well as the development of novel targeted therapy strategies, is urgently needed. Professor Chng Wee Joo and Associate Professor Takaomi Sanda from CSI Singapore, along with Dr. Ong Choon Kiat from Duke-NUS Medical School, reported their findings in a paper published in the journal Molecular Cancer on April 10, 2023. Collective efforts from Dr. Jianbiao Zhou, Dr. Tze-King Tan, Ms Sabrina Hui-Min Toh, Miss Sinan Xiong, and the rest of the team have contributed to these pioneering revelations. Their findings are also the first to show the involvement of TOX2 and PRL-3 in NKTL. These findings were validated in both cell lines and in a large set of patient tumor samples. In addition, the team analyzed the clinical features of 42 NKTL cases in an independent cohort and found that TOX2 was not only overexpressed in NKTL primary tumors, but also negatively associated with patient survival.
Currently, there are no TOX2-specific inhibitors. As such, targeting TOX2, or its downstream PRL-3, could be a valuable therapeutic intervention for NKTL patients and warrants further study in the clinic. Prof Chng, who is the co-lead author of the study, said, "We have now identified novel treatment targets, TOX2 and the downstream PRL3, in NKTL, where new treatment is greatly needed. We can use different strategies to target these. Proteolysis-targeting chimera (PROTAC) targeting TOX2 to degrade TOX2 protein may be a viable NKTL therapy option." "A humanized antibody, PRL3-zumab, has been approved for Phase 2 clinical trials in Singapore, US, and China to treat all solid tumors. With our findings from this study, it is definitely timely to evaluate PRL3-zumab's effect in patients with NKTL." "Overall, treatment for NKTL patients remains a challenge in the clinic. Novel insight into the molecular mechanisms of this disease would guide the development of effective targeted therapies to improve the survival of NKTL patients, especially for those refractory or relapsed cases," said Dr. Jianbiao Zhou from CSI Singapore, the first author of this study. Moving forward, the group is currently testing novel agents for targeting TOX2 and PRL-3 in NKTL. The long-term goal is to bring these novel agents into clinical trials.
10.1186/s12943-023-01767-1
Chemistry
Researchers investigate the role of long-chain fatty acids in cellular respiration
M. Tanvir Rahman et al, An engineered variant of MECR reductase reveals indispensability of long-chain acyl-ACPs for mitochondrial respiration, Nature Communications (2023). DOI: 10.1038/s41467-023-36358-7 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-36358-7
https://phys.org/news/2023-04-role-long-chain-fatty-acids-cellular.html
Abstract Mitochondrial fatty acid synthesis (mtFAS) is essential for respiratory function. MtFAS generates the octanoic acid precursor for lipoic acid synthesis, but the role of longer fatty acid products has remained unclear. The structurally well-characterized component of mtFAS, human 2E-enoyl-ACP reductase (MECR), rescues respiratory growth and lipoylation defects of a Saccharomyces cerevisiae Δetr1 strain lacking the native mtFAS enoyl reductase. To address the role of longer products of mtFAS, we employed in silico molecular simulations to design a MECR variant with a shortened substrate binding cavity. Our in vitro and in vivo analyses indicate that the MECR G165Q variant allows synthesis of octanoyl groups but not long chain fatty acids, confirming the validity of our computational approach to engineer substrate length specificity. Furthermore, our data imply that restoring lipoylation in mtFAS deficient yeast strains is not sufficient to support respiration and that long chain acyl-ACPs generated by mtFAS are required for mitochondrial function. Introduction Fatty acids serve a living organism as building blocks of biomembranes, energy storage, cell signaling molecules and ligands in post-translational protein modification. Due to their economic and nutritional value, there is continuous interest in engineering fatty acid-synthesizing pathways in various host organisms to manipulate the carbon chain lengths and modifications of their products 1 , 2 , 3 . Frequently used approaches are heterologous expression of enzymes with different substrate specificities, generation of parallel alternative metabolic pathways, or engineering of the catalytic properties of endogenous pathway enzymes in the organisms used as biofactories. Applying structural knowledge, computer-aided design and tools that allow in vivo testing of engineered variants provides avenues towards tailoring enzymes to exhibit desired properties.
Aside from commercial value, appropriately designed proteins can also be helpful in the dissection of the physiological function of particular pathways. Recently, the mitochondrial fatty acid synthesis pathway (mtFAS) has been identified as a mechanism that adjusts mitochondrial respiratory chain (RC) function to the availability of substrate (acetyl-CoA) for the tricarboxylic acid cycle 4 , 5 . MtFAS follows the prokaryotic type II mode, where individual enzymatic steps are carried out by separate enzymes (Fig. 1a). Acyl groups synthesized by mtFAS are linked to the 4′-phosphopantetheine moiety of the acyl carrier protein (ACP) via a thioester bond. Acyl-ACPs form complexes with adapter proteins that contain a leucine-tyrosine-arginine motif (LYRM) 6 . Acyl-ACP-LYRM protein complexes facilitate assembly of mitochondrial respiratory complexes. Two acyl-ACP molecules can be found associated with LYRM proteins as components of complex I in the mitochondrial RC. Furthermore, an acyl-ACP-LYRM protein complex stabilizes the iron-sulfur cluster synthesis machinery, and the LYRM protein AltMiD51, associated with ACP, participates in the assembly of the large subunit of mitoribosomes 7 , 8 , 9 , 10 , 11 , 12 . MtFAS also provides the octanoic acid precursor required for the endogenous synthesis of lipoic acid, an essential cofactor in oxidative decarboxylation of α-ketoacids and glycine 13 . MtFAS is capable of generating fatty acids longer than eight carbons, but the precise function of these longer fatty acids has remained unclear. We are interested in studying the function of these long-chain acyl groups synthesized by mtFAS and associated with ACPs. The most convenient model to investigate basic aspects of mtFAS function is the yeast Saccharomyces cerevisiae , which serves as a fast read-out system in which the function of a deleted yeast mtFAS enzyme can be replaced by counterparts from heterologous sources 14 , 15 . Fig.
1: Schematic representation of the mitochondrial fatty acid synthesis (mtFAS) pathway, wild-type and engineered MECR and the reaction catalyzed by MECR/Etr1. a Schematic depiction of the mtFAS pathway. The indicated abbreviations (yeast (blue) /human (red)): Mct1/MCAT malonyl-CoA transferase, ACP acyl carrier protein, Cem1/OXSM 3-ketoacyl-ACP synthase, Oar1/KAR1 ketoacyl reductase, Htd2/HTD2 3-hydroxyacyl-thioester dehydratase, Etr1/MECR enoyl-thioester reductase, Lip5/LIAS lipoic acid synthetase. b Schematic representation of the wild-type and engineered MECR. The shown wild-type and engineered enzymes are liganded with C16- and C8-ACP molecules, respectively. The fatty acyl binding cavity extends from the catalytic site near the nicotinamide group of NADPH towards Ile129, which identifies the end of the cavity in the wild-type MECR. The engineered MECR mutant (shown as G165X) possesses a shortened substrate binding cavity discontinuing the synthesis of long-chain fatty acyl-ACP species by mtFAS. c MECR/Etr1 catalyzes the reduction of 2E -enoyl substrates to their saturated counterparts in a NADPH-dependent manner. MECR accepts fatty acyl groups that are attached to either CoA or ACP via a thioester bond. Full size image The last step of mtFAS is carried out by 2E -enoyl-ACP thioester reductases MECR and Etr1 in human and yeast, respectively. These enzymes are members of the medium-chain dehydrogenase / reductase (MDR) superfamily 16 , catalyzing the NADPH-dependent reduction of 2E -enoyl thioesters into acyl thioesters (Fig. 1c ) 17 , 18 . Wild-type human MECR complements the yeast respiratory deficient phenotype of the Δetr1 strain. The crystal structures of the unliganded enzymes of human MECR and the Candida tropicalis Etr1, as well as the structures of the NADPH binary complex, and the NADPH-crotonoyl-CoA ternary complex of the latter enzyme have been solved 19 , 20 , 21 . 
The structure of MECR shows a bent cavity of sufficient dimensions to accommodate acyl substrates with carbon chain lengths from C4 to C16 19 . In vitro, MECR accepts fatty acyl groups that are attached to either CoA or ACP via a thioester bond as substrates. Here, we describe our work on molecular modeling and simulations guiding the engineering of MECR in order to obtain variants that are unable to accept long-chain fatty acyl substrates. We tested these variants in vivo in Etr1-deficient yeast, followed by in vitro studies (Fig. 1b). Our results show that, unlike the wild-type human MECR, the engineered G165Q variant of MECR did not rescue the yeast respiratory deficient phenotype of the Δetr1 strain, although protein lipoylation was restored, demonstrating that the octanoyl/lipoyl synthesizing branch of the mtFAS is supported by the enzyme variant. Our data indicate that provision of the octanoic acid precursor by mtFAS is not sufficient to support mitochondrial function and that long acyl tail(s) must be generated by mtFAS to allow the cells to maintain a respiratory competent mitochondrial population. Results To study the physiological role of long-chain fatty acids generated by mtFAS, we generated a yeast strain that cannot synthesize long-chain fatty acids in mitochondria. This was done by engineering the fatty acyl binding cavity of human MECR based on in silico modeling and further in vivo and in vitro studies of the MECR variants. Computational design of the MECR mutants Computational modeling was initiated by building the MECR/NADP + complexes with a series of substrates from C8 (octanoic acid) to C16 (Fig. 2 and Supplementary Fig. 1). Potential substitution points, where mutations would block long-chain substrate binding (while not distorting the binding of 2E-octenoyl-CoA and shorter substrates), were evaluated (Supplementary Tables 1 and 2).
Accordingly, the following approaches were considered: (i) substitution of amino acid residues in the cavity by bulkier hydrophobic residues that block the space for the hydrophobic alkyl substrate chains, and (ii) modification of the above amino acid residues to polar/charged moieties to create a stable interaction network shortening the cavity. We previously generated the variants I129M, F324Y and G165S in MECR, and in vitro analysis of their kinetic properties showed that these variants were capable of reducing long-chain 2E-enoyl-CoA substrates 19 . Visual inspection of the whole active site identified I129 and G165 as the most promising substitution sites (Fig. 2) that could discriminate between 2E-C8 and longer substrates, while F324 was considered to be located too far from the predicted location of C8 of the 2E-enoyl tails. I129 and G165 are conserved in the MECR sequences of higher eukaryotes (Supplementary Fig. 2). I129 is positioned at the bottom of the fatty acyl binding cavity at the end of β-strand β5, whereas G165 is located in the α-helical region in the center of the cavity between helices αD and αa (Figs. 1b, 2). The backbone geometry of the modeled MECR at position 165 (Phi/Psi angles = −70°/−18°) would allow for substitution by a residue with a side chain, as would the backbone geometry at position 129 (−122°, 159°). Accordingly, I129 and G165 were computationally replaced with amino acid residues satisfying the aforementioned design assumptions. This approach allowed us to prioritize the following MECR single-point mutations: G165H, G165L, G165Q, G165F, I129H and I129F (Supplementary Fig. 3). In silico models showed more steric clashes for G165H and G165F than for the other G165 mutants, but these interferences occurred with residues located in the loops. All six mutants were chosen for in vivo screening. Fig. 2: The model of the wild-type MECR.
The wild-type MECR holoenzyme was modeled based on the crystal structure with PDB entry 2VCY; the cofactor NADP + (white carbons) and the C16-substrate fragment (orange carbons) were modeled in based on the C. tropicalis ETR1 crystal structure (PDB entry 4WAS). The two residues mutated in this work and the catalytic residue Tyr94 are shown. The two subunits of the MECR enzyme dimer are displayed as cartoons colored green and cyan. Non-polar hydrogens are not shown. Full size image In vivo screening of the MECR mutants in yeast The mutations proposed by in silico modeling were cloned into a pYE352 yeast expression vector containing human MECR cDNA N-terminally appended with a yeast mitochondrial targeting sequence 15 . These constructs were introduced into the Δetr1 yeast strain, and growth properties as well as protein lipoylation were examined to determine the effects of the amino acid substitutions in an in vivo system. It has been previously established that the respiratory deficient phenotype of the Δetr1 strain can be complemented by mitochondrially targeted human MECR 15 . Growth of the transformants was tested on fermentable (glucose) and non-fermentable (glycerol) carbon sources (Fig. 3a). All strains grew at wild-type or near wild-type levels on synthetic complete dextrose (SCD) and YPD media, which are fermentable media. Yeast strains carrying MECR with the G165H, G165L or G165F mutation were viable on non-fermentable glycerol medium (SCG), while mutants G165Q, I129H and I129F did not grow, or grew very poorly, on non-fermentable glycerol medium, indicating a respiratory deficient phenotype. Fig. 3: In vivo screening of human MECR mutants in yeast. a Ten-fold dilution series testing for growth of wild-type and mutated human MECR variants on glucose (SCD-URA and YPD) and glycerol (SCG-URA) growth media.
First row: wild-type Bj1991α yeast strain with empty vector pYE352; second row: null mutant (Δetr1) Bj1991α yeast strain with empty vector pYE352; third row: Bj1991α Δetr1 yeast strain with mitochondrially targeted MECR in pYE352; fourth to ninth rows: Bj1991α Δetr1 with a MECR single mutant in pYE352. b Western blot analysis of protein lipoylation and MECR expression in BJ1991α yeast expressing wild-type and mutated MECR variants. Actin and Ponceau staining were used as loading controls and the experiments were repeated three times with individual biological samples. Source data are provided as a Source data file. Full size image Because mtFAS produces octanoic acid for lipoic acid synthesis, the ability of the mutants to generate mitochondrial fatty acids up to C8 was examined by western blot using an anti-lipoic acid antibody (Fig. 3b). The results show that mutations of G165 allow protein lipoylation of Kgd1 but not of Lat1, indicating that MECR mutants G165H, G165L, G165Q and G165F are able to synthesize fatty acids at least up to C8. Yeast strains carrying MECR mutants I129H or I129F did not show any lipoylated proteins. Nevertheless, in all yeast strains expressing wild-type MECR or its point mutated variants, western blot analysis confirmed the presence of the expressed MECR protein (Fig. 3b). Taken together, this in vivo analysis demonstrates that the G165Q mutation in MECR allows the Δetr1 strain to synthesize lipoic acid but does not restore respiratory competence. Protein purification and analysis Based on the in silico prediction and in vivo screening, the MECR G165Q variant was chosen for further functional and structural studies. Wild-type and mutant G165Q MECR appended with a C-terminal His-tag were expressed in Escherichia coli and purified to apparent homogeneity with Ni-NTA affinity chromatography, followed by size-exclusion chromatography.
The circular dichroism (CD) spectra show that the gross secondary structure elements of wild-type MECR and MECR G165Q are congruent (Supplementary Fig. 4a). Thermal stability titration curves are similar for both proteins and the calculated melting points are Tm = 45.4 °C and Tm = 45.7 °C for wild-type MECR and MECR G165Q, respectively (Supplementary Fig. 4b). These results indicate that the G165Q mutation does not affect the overall structure and stability of MECR. Analysis of kinetic properties of wild-type and G165Q MECR To investigate how the G165Q mutation affects the catalytic properties of MECR, we determined the kinetic parameters of the wild-type and G165Q variants of MECR (Table 1). 2E-enoyl-CoA thioesters with varying fatty acyl tail lengths (C6, C8 and C10) were used as substrates. Because long-chain 2E-enoyl-CoA esters are poor substrates for the G165Q variant, only specific activities were determined for the 2E-dodecenoyl-CoA, 2E-tetradecenoyl-CoA and 2E-hexadecenoyl-CoA substrates (Table 2). Interestingly, the kcat of G165Q MECR with the 2E-octenoyl-CoA substrate was 6.9 times higher than the kcat of the wild-type protein. When the chain length of the substrate increases from six carbons (C6) to ten carbons (C10), the KM values decrease from 90.6 to 15.9 µM for wild-type MECR and from 111.2 to 8.15 µM for the G165Q mutant enzyme (Table 1). Thus, the KM value of wild-type MECR with the C10 substrate is 18% of the KM value with the C6 substrate, while for the G165Q mutant enzyme the KM value with the C10 substrate is 7% of the KM value with the C6 substrate. The systematic variation of the kcat and KM values for the C6, C8 and C10 substrates is intriguing, but it is difficult to correlate these variations with the structural properties of wild-type MECR and its G165Q variant.
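The chain-length dependence of KM quoted above can be checked directly from the Table 1 values; the following is a minimal sketch (the helper name `km_ratio_percent` is ours, not from the paper):

```python
# K_M values (uM) from Table 1 for the C6 and C10 2E-enoyl-CoA substrates.
KM = {
    "wild-type": {"C6": 90.6, "C10": 15.9},
    "G165Q":     {"C6": 111.2, "C10": 8.15},
}

def km_ratio_percent(variant):
    """K_M(C10) as a percentage of K_M(C6) for one enzyme variant."""
    v = KM[variant]
    return 100.0 * v["C10"] / v["C6"]

wt = km_ratio_percent("wild-type")   # ~18%, as stated in the text
mut = km_ratio_percent("G165Q")      # ~7%, as stated in the text
print(f"wild-type: {wt:.0f}%, G165Q: {mut:.0f}%")
```

The roughly 2.5-fold stronger drop for G165Q is the numerical core of the claim that the mutant's KM falls more steeply with chain length.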
Due to the complicated catalytic cycle of the MECR reaction, the physical meaning of kcat is not known (it could be related to the rate of product dissociation); likewise, KM depends on the relative rates of the different steps of the reaction, and its value cannot be directly correlated with the affinity of the substrate. Table 1 Kinetic parameters of wild-type MECR and the MECR G165Q variant Table 2 Specific activities of wild-type MECR and the MECR G165Q variant with C12, C14 and C16-CoA substrates Both wild-type MECR and the G165Q variant were able to accept the C10, C12 and C14 substrates, but the low catalytic rates of the G165Q variant did not allow detailed characterization of its kinetic properties with the C12 and C14 substrates. Of note, the observed catalytic rate of the G165Q variant was eight times lower with the C12 substrate and six times lower with the C14 substrate when compared to the wild-type enzyme. The activity of the G165Q variant with the C16 substrate was below the detection limit. The residual activity with the C12- and C14-tail substrates could well be related to the increased structural flexibility of the G165Q variant, as suggested by the B-factor analysis (Supplementary Fig. 6). Crystallographic studies of MECR Next, we aimed to obtain information on the structural features responsible for the diminished catalytic efficiency with long-chain enoyl fatty acid substrates caused by the G165Q point mutation in MECR. The G165Q variant did not yield crystals under the conditions previously used for MECR crystallization 19, so completely new crystallization parameters were screened. Both wild-type and G165Q MECR formed hexagonal crystals at +4 °C in a condition where the main precipitation agents were PEG 6000 and NaCl and the pH was 4.5, as described in Materials and Methods. The crystal structures were solved at 1.85 Å and 2.02 Å resolution, respectively, in the trigonal space group P 3 1 21 (Fig. 4, Supplementary Table 3).
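Kinetic parameters of the kind reported in Table 1 are obtained by fitting initial-rate data to the Michaelis-Menten equation v = Vmax[S]/(KM + [S]). As a generic illustration only (the paper's raw rate data are not tabulated here, so the numbers below are synthetic), a Hanes-Woolf linearization recovers Vmax and KM:

```python
def hanes_woolf_fit(s_list, v_list):
    """Estimate (Vmax, KM) via the Hanes-Woolf linearization
    s/v = s/Vmax + KM/Vmax, using ordinary least squares."""
    ys = [si / vi for si, vi in zip(s_list, v_list)]
    n = len(s_list)
    mx = sum(s_list) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(s_list, ys))
             / sum((x - mx) ** 2 for x in s_list))
    intercept = my - slope * mx
    vmax = 1.0 / slope
    return vmax, intercept * vmax   # (Vmax, KM)

# Noise-free synthetic data (true Vmax = 12, KM = 16 uM), illustration only
s = [2.0, 5.0, 10.0, 20.0, 40.0, 80.0]
v = [12.0 * x / (16.0 + x) for x in s]
vmax, km = hanes_woolf_fit(s, v)
print(round(vmax, 2), round(km, 2))
```

kcat then follows as Vmax divided by the enzyme concentration; with real, noisy data a nonlinear fit of the Michaelis-Menten equation is usually preferred over the linearization.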
The asymmetric unit contains one MECR chain, but the enzyme forms a dimer via a crystallographic 2-fold axis. Compared to the available structures of human MECR (PDB entry 2VCY, 2.41 Å, or PDB entry 1ZSY, 1.75 Å), the crystallization condition, space group and crystal packing are different. Previously, human MECR crystals were obtained at room temperature, at pH 8.5 (2VCY) or pH 6.5 (1ZSY), under ammonium sulfate-based conditions, and the crystals exhibited tetragonal (2VCY, space group P 4 2 2 1 2) or orthorhombic (1ZSY, space group I222) lattices. The 2VCY structure includes a MECR dimer in the asymmetric unit 19, whereas 1ZSY has only one subunit in the asymmetric unit. The dimeric structure of the trigonal MECR crystal form of this study, obtained by applying the crystal symmetry, is similar to the dimeric structure (found in the asymmetric unit) of the tetragonal MECR crystal form. Fig. 4: The structure of human wild-type MECR and the MECR G165Q variant. Crystal structure of (a) the wild-type MECR of this study (PDB entry 7AYB) and (b) the MECR G165Q mutant (PDB entry 7AYC). Only one monomer of the dimeric MECR is shown. The catalytic domain is colored pale cyan and the cofactor-binding domain is colored light pink. The mutation site is highlighted. NH2 and COOH label the N-terminus and C-terminus. c Structural comparison of the superposed wild-type MECR and MECR G165Q variant. The stereo diagram shows the conformational changes in the MECR G165Q variant structure due to the mutation. Residues of the wild-type protein are shown in light blue and residues of the mutant protein are labeled in magenta. The side chains of selected residues are shown in stick representation with N and O atoms colored blue and red, respectively. Of note, the (phi/psi) values of G165 in the wild-type structure are compatible with mutation to a residue carrying a side chain: (phi/psi) G165 (wild-type) = −73/−11; (phi/psi) Q165 (G165Q) = −84/−55.
d Detailed view of the fatty acyl binding cavity of wild-type MECR with a modeled C16 substrate. The cavity for the fatty acyl tail is shown with the "cavity surface" option (light gray, transparent) and the bottom of the cavity is indicated with a red arrow. e The fatty acyl binding cavity in the G165Q structure, shown with the "cavity surface" option (light gray, transparent); the bottom of the cavity is indicated with a red arrow. The hydrogen-bonding network is changed significantly and the loop region containing A133 and N132 moves slightly, as also seen in panel c. The overall fold of the MECR structures determined in this study is maintained, with an RMSD of 0.63 Å for the Cα atoms of 332 aligned residues (Fig. 4), indicating that the structure of MECR G165Q is essentially unchanged. This conclusion is also supported by the CD analysis data (Supplementary Fig. 4a). In the trigonal crystal form, MECR adopts the closed conformation previously observed in the MECR structures 2VCY and 1ZSY, as well as in the CtEtr1p structure complexed with NADPH 20. In the MECR structures presented here, the first N-terminal residues (A31-A41) are not defined by the electron density maps, but the C-terminus (up to L374), including the six-residue His-tag sequence, is well defined. In the tetragonal and orthorhombic crystal forms determined earlier, the last visible residue is M373. The first visible residue in the current and 2VCY structures is R42. Otherwise, the current and previous wild-type structures are very similar (RMSD 0.70 Å for 332 aligned residues). There are some small differences in certain loop regions, such as residues 240-252. In the current structure this region has high B factors, but the residues are still well defined by the electron density maps. This region had very weak and fragmented electron density in the previous tetragonal MECR structure. This loop has been proposed to be important for the recognition of the ACP moiety of the substrate molecule.
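RMSD values of the kind quoted above (0.63 Å and 0.70 Å over 332 aligned Cα atoms) come from an optimal rigid-body superposition of the two coordinate sets. A minimal sketch of such a superposition using the Kabsch algorithm, with synthetic coordinates standing in for the real Cα sets:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of P vs. Q after optimal rigid superposition (Kabsch algorithm).
    P, Q: (N, 3) arrays of matched C-alpha coordinates."""
    P = P - P.mean(axis=0)                    # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)         # covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Sanity check: a rigidly rotated copy superposes back to ~0 RMSD.
rng = np.random.default_rng(0)
P = rng.normal(size=(332, 3))                 # 332 fake "residues"
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])
print(kabsch_rmsd(P @ Rz.T, P))
```

In practice a structural-alignment tool also decides which residues are "aligned"; the sketch assumes the residue pairing is already given.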
The electron density map of the MECR G165Q crystal structure shows the structural changes due to the mutation (Fig. 4b). The conformation of the Q165 side chain is well defined by the electron density map (Supplementary Fig. 5). The mutation site lies at the domain interface between helix αD of the catalytic domain and helix αA of the cofactor-binding domain. The side chain points across the acyl tail binding tunnel, making a good hydrogen bond between NE2(Q165) and O(P130). This interaction causes changes in the main chain conformation of the residues near P130 and Q165, as well as in the side chain conformations of K316 and E107. In wild-type MECR, the side chain of E107 is hydrogen bonded to the backbone nitrogen atoms of G165 and V166, whereas in MECR G165Q, the side chain of E107 is hydrogen bonded only to the backbone nitrogen of V166. Furthermore, in wild-type MECR, the side chain of K316 is hydrogen bonded to the backbone oxygen of G165 (Fig. 4d), but in the MECR G165Q variant, the lysine side chain is hydrogen bonded to the side chain of Q165 and the backbone oxygen of P130 (Fig. 4e). The hydrogen-bonding network around Q165 is somewhat different in the experimental crystal structure compared to the computational model of MECR G165Q (Supplementary Fig. 7). The MECR G165Q structure has a higher overall B factor than wild-type MECR (Supplementary Table 3). In particular, the region close to the catalytic Y94 moiety, which interacts with the loop regions near P130 and E107, has higher B factors compared to the corresponding wild-type structure (Supplementary Fig. 6). Molecular docking of substrates to wild-type and G165Q MECR variants Finally, we analyzed the effect of the G165Q mutation on substrate binding by in silico docking studies, presenting 2E-enoyl substrate fragments with lengths of C8-C16 (Supplementary Fig. 1) to the wild-type MECR (Supplementary Fig. 8) and MECR G165Q (Supplementary Fig.
9) structures, both crystallized in this work. The aligned docking results of the substrate fragments to wild-type MECR and the G165Q variant are presented in Supplementary Fig. 10. These dockings were made to dimeric MECR structures in which the B chain was generated using the crystallographic symmetry operators. The results obtained from these modeling studies are summarized in Supplementary Table 4. In the wild-type structure, substrates up to C16 are positioned in the previously identified substrate tunnel (Supplementary Fig. 8), which is overall consistent with the data from earlier studies of MECR 19. In the MECR G165Q variant pocket, no docking poses were returned for the 2E-C14 substrate, and the poses of 2E-C12 and 2E-C16 did not reach the end of the binding pocket (Supplementary Fig. 9d, e), due to the blockage created by the side chain of Q165. Clearly, the G165Q mutation blocks the acyl tail binding tunnel (Fig. 4e), in good agreement with the changed substrate specificity. The hydrocarbon chains of the docked substrates have multiple unrestricted rotational degrees of freedom. We assumed that the docking solutions obtained represent the ensemble of accessible conformations of the flexible substrate hydrocarbon chains. In wild-type, the number of poses is similar for 2E-C8 to 2E-C14 (on average 8, Supplementary Table 4), whereas for the longest substrate, 2E-C16, just one pose was obtained. In contrast, the number of substrate docking poses for the MECR G165Q variant decreases with increasing substrate length, starting from 2E-C8. This confirms that the more confined pocket of the G165Q variant reduces the number of possible states of longer substrate chains and suggests an unfavorable conformational entropy contribution to the binding of longer substrates in the MECR G165Q variant.
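The conformational-entropy argument can be made semi-quantitative: if the n docking poses are treated as equally populated accessible states, the entropy difference between two pose ensembles is ΔS = R ln(n1/n2). A rough sketch with illustrative pose counts (the actual counts are in Supplementary Table 4, which is not reproduced here):

```python
import math

R_GAS = 8.314  # J/(mol*K), gas constant
T = 298.0      # K

def entropy_penalty_kcal(n_wt, n_mut):
    """T*dS (kcal/mol) lost when the accessible pose count drops from
    n_wt to n_mut, treating poses as equally populated states."""
    dS = R_GAS * math.log(n_wt / n_mut)  # J/(mol*K)
    return T * dS / 4184.0               # convert J/mol -> kcal/mol

# Illustrative: 8 poses in wild-type vs. 2 in G165Q for a long substrate
print(round(entropy_penalty_kcal(8, 2), 2))
```

Even a four-fold reduction in pose count corresponds to well under 1 kcal/mol at room temperature, so this term would be a modest, though systematic, contribution against long-substrate binding in the mutant.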
The average RMSD values of the docked poses with respect to the substrate core fragment, and selected energy estimates over each ensemble of docked poses, were analyzed for each ligand and each docking variant (Supplementary Table 4). The average docking scores (estimates of substrate binding free energies; negative is favorable) are only slightly negative, with the exception of the unfavorable positive values for the poses of the 2E-C12 and 2E-C16 substrates in the G165Q variant. These relatively high docking scores probably reflect the importance of receptor conformational changes upon substrate binding, which are not accounted for in the docking simulations. The average scores were less favorable in the MECR G165Q variant than in wild-type for all docked substrates that returned any poses. However, it is worth noting that 2E-C8 and 2E-C10 have slightly more favorable average van der Waals energy contributions to binding in the G165Q variant than in wild-type (2E-C8: −27.1 vs. −26.7 kcal/mol; 2E-C10: −32.0 vs. −28.9 kcal/mol, respectively), in contrast to 2E-C12 (−25.0 vs. −31.4 kcal/mol) and 2E-C16 (−17.0 vs. −31.3 kcal/mol). The average ligand internal energy for 2E-C8 and 2E-C10 is also distinctly more favorable (lower) in the G165Q variant than in wild-type (2E-C8: 2.8 vs. 4.5 kcal/mol; 2E-C10: 3.4 vs. 5.9 kcal/mol). This contrasts with 2E-C12 and 2E-C16, for which this relation is inverted (2E-C12: 8.7 vs. 6.5 kcal/mol; 2E-C16: 13.2 vs. 6.2 kcal/mol). The average RMSD values (Supplementary Table 4) for the substrate core of 2E-C8 and 2E-C10 are lower in the G165Q variant than in wild-type (2E-C8: 0.5 vs. 0.7 Å; 2E-C10: 0.5 vs. 0.8 Å), while they are higher for 2E-C12 and 2E-C16 (2E-C12: 0.9 vs. 0.6 Å; 2E-C16: 0.8 vs. 0.7 Å).
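The qualitative pattern in these numbers, more favorable van der Waals terms for the short substrates in G165Q and the reverse for the long ones, can be collected and checked programmatically. The values below are copied from the text; `favors_g165q` is our own helper name:

```python
# Average van der Waals energies (kcal/mol, more negative = more favorable),
# stored as (G165Q, wild-type) pairs, taken from the docking analysis above.
vdw = {
    "2E-C8":  (-27.1, -26.7),
    "2E-C10": (-32.0, -28.9),
    "2E-C12": (-25.0, -31.4),
    "2E-C16": (-17.0, -31.3),
}

def favors_g165q(substrate):
    """True if the G165Q variant has the more favorable (lower) vdW term."""
    mut, wt = vdw[substrate]
    return mut < wt

print([s for s in vdw if favors_g165q(s)])
```

The split falls exactly at the C10/C12 boundary, mirroring the kinetic data in which activity collapses for substrates longer than C10.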
This implies that the catalyzed moiety of the substrate is more "correctly" positioned for 2E-C8 and 2E-C10 in the G165Q variant than in wild-type. Taken together, the docked 2E-C8 and 2E-C10 ligands are less strained in the G165Q variant than in wild-type (internal energy), and both 2E-C8 and 2E-C10 have a slightly better steric fit (van der Waals energy) and a lower RMSD of the catalytic core in the G165Q active site than in wild-type. These observations are consistent with the slightly lower KM of 2E-C10 in the G165Q variant compared to wild-type and the higher kcat for 2E-C8 in G165Q than in wild-type (Table 1). Effect of the MECR G165Q mutation on the total fatty acid profile and Cox1 expression The effect of the MECR G165Q mutation on cellular fatty acids was studied by liquid chromatography-mass spectrometry. There were no changes in cellular fatty acid profiles between wild-type, Δetr1, or Δetr1 cells carrying a plasmid expressing MECR or the G165Q variant (Supplementary Fig. 11). The majority of cellular fatty acids are produced by the cytosolic FAS I pathway and the contribution of mtFAS to the total cellular FA pool is negligible 22; thus this result was expected. To confirm that octanoyl-ACP and lipoic acid are synthesized in our yeast strains, we measured pyruvate dehydrogenase (PDH) and α-ketoglutarate dehydrogenase (α-KDH) activities in mitochondrial extracts (Fig. 5a and b). PDH activities were 77 ± 11, 58 ± 8 and 54 ± 8 (mean ± SD) nmol NADH min−1 mg−1 for wild-type yeast mitochondria, for the Δetr1 strain expressing MECR and for the Δetr1 strain expressing the MECR G165Q variant, respectively. Correspondingly, α-KDH activities were 105 ± 15, 62 ± 5 and 53 ± 7 (mean ± SD) nmol NADH min−1 mg−1 for wild-type, for the Δetr1 strain expressing MECR and for the Δetr1 strain expressing the MECR G165Q variant. Both PDH and α-KDH activities were below the detection limit in the Δetr1 strain.
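Specific dehydrogenase activities of the kind reported above (nmol NADH min−1 mg−1) are conventionally derived from the rate of absorbance change at 340 nm via the Beer-Lambert law, using the standard NADH extinction coefficient. A generic sketch with hypothetical assay values (the input numbers are ours, not the paper's measurements):

```python
EPSILON_NADH = 6.22e3  # M^-1 cm^-1 at 340 nm, standard value for NADH

def specific_activity(dA340_per_min, path_cm, volume_ml, protein_mg):
    """nmol NADH converted per min per mg protein (Beer-Lambert law)."""
    conc_change = dA340_per_min / (EPSILON_NADH * path_cm)  # M per min
    nmol_per_min = conc_change * volume_ml * 1e-3 * 1e9     # nmol per min
    return nmol_per_min / protein_mg

# Hypothetical assay: dA/min = 0.048, 1 cm cuvette, 1 mL volume, 0.1 mg protein
print(round(specific_activity(0.048, 1.0, 1.0, 0.1), 1))
```

With these invented inputs the result lands in the same order of magnitude as the wild-type PDH activity quoted above, which is the point of the sanity check.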
In both assays the number of biological replicates was 4 for wild-type yeast mitochondria, 5 for the Δetr1 strain, 6 for the Δetr1 strain expressing MECR and 6 for the Δetr1 strain expressing the MECR G165Q variant. Fumarase activity was measured in all samples to verify the quality of the mitochondrial extracts, and no differences between the samples were detected (Fig. 5c). α-KDH generates succinyl-CoA, which is needed for heme synthesis. In previous studies, heme deficiency resulting from a non-functional α-KDH caused by the lack of lipoic acid was discussed as a possible cause of the respiratory-deficient phenotype of ΔmtFAS yeast strains 4. Catalase is a heme-containing enzyme. To assess the heme content of yeast cells, we analyzed catalase activities in spheroplasts (Fig. 5d). We did not detect any significant differences in catalase activity among wild-type, Δetr1 and Δetr1 cells expressing wild-type MECR or the G165Q variant. Fig. 5: Pyruvate dehydrogenase (PDH), α-ketoglutarate dehydrogenase (α-KDH), fumarase and catalase activities and expression of Cox1. Enzyme activities for (a) PDH, (b) α-KDH and (c) fumarase were analyzed from mitochondria of wild-type (WT), Δetr1 and Δetr1 cells expressing wild-type MECR or the G165Q variant. d Catalase activity of wild-type (WT), Δetr1 and Δetr1 cells expressing wild-type MECR or the G165Q variant was analyzed from yeast spheroplasts. All experiments were performed in duplicate. The number of biological repeats in the PDH and α-KDH activity assays is four for wild-type yeast mitochondria, five for the Δetr1 strain, and six for Δetr1 expressing wild-type MECR or the G165Q variant. The number of biological repeats in the fumarase activity assays is four for wild-type yeast mitochondria, six for the Δetr1 strain, and six for Δetr1 expressing wild-type MECR or the G165Q variant.
The number of biological repeats in the catalase activity assays is six for wild-type yeast mitochondria and the Δetr1 strain, five for Δetr1 expressing wild-type MECR, and six for Δetr1 expressing the MECR G165Q variant. Data were analyzed by a two-tailed, unpaired Student's t-test and the results are expressed as mean ± standard deviation (SD). The enzyme activity data were plotted using GraphPad Prism software. e The steady-state level of Cox1 in isolated yeast mitochondria was analyzed by western blotting. Porin was used as a loading control. Source data are provided as a Source data file. It has been shown previously that defects in mtFAS, such as deletion of Etr1 in yeast, lead to severely reduced levels of Cox1, a mitochondrially encoded subunit of respiratory complex IV 4. It was also concluded that the reduced Cox1 level is due to a decrease in Cox1 translation caused by a respiratory complex IV assembly defect. Here we used Cox1 abundance as a readout to obtain information on the cause of the respiratory deficiency of the Δetr1 yeast strain expressing the MECR G165Q mutant. Extracts of the Δetr1 strain transformed with the plasmid carrying the MECR G165Q mutation were analyzed by western blot using a Cox1 antibody (Fig. 5e). In agreement with the previous results, Cox1 was detected in the extract from wild-type yeast cells, but was barely detectable in the Δetr1 cell extract. Cox1 was detected at the wild-type level in the extract from Δetr1 cells expressing wild-type MECR, but the extract from Δetr1 cells expressing the G165Q mutant MECR was depleted of Cox1. Because expression of Cox1 in Δetr1 cells and Δetr1 cells expressing the G165Q variant was negligible, one possibility is that these cells have lost their mitochondrial DNA. This was tested by reintroducing Etr1 on a plasmid and then following respiratory growth on glycerol plates (Supplementary Fig. 12).
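The statistical comparison described in the figure legend above is a two-tailed, unpaired Student's t-test with pooled variance. A minimal stdlib-only sketch with invented replicate values (not the paper's raw data); the p-value would then be read from the t distribution with na + nb − 2 degrees of freedom:

```python
import math
from statistics import mean, variance

def student_t(a, b):
    """Two-sample t statistic with pooled variance (unpaired Student's t).
    Degrees of freedom: len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Invented replicate activities (nmol NADH min^-1 mg^-1), illustration only
wt_mecr = [75.0, 58.0, 60.0, 55.0, 59.0, 61.0]
g165q   = [52.0, 56.0, 50.0, 58.0, 53.0, 55.0]
print(round(student_t(wt_mecr, g165q), 2))  # compare against t tables, df = 10
```

In practice a statistics package (e.g. the one built into GraphPad Prism, as used in the paper) computes the two-tailed p-value directly from this statistic.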
It has previously been shown that Δetr1 cells regain the ability to grow on SCG plates after reintroduction of Etr1 14. Here we show that the Δetr1 strain expressing the MECR G165Q variant is also able to grow on SCG plates after reintroduction of Etr1. Thus, these data suggest that both strains are rho+. Discussion MtFAS acts as an integrator between mitochondrial acetyl-CoA availability and respiratory chain function 4, 5. To mediate this coordinating function, acyl-ACPs generated by mtFAS interact with LYRM adapter proteins, and these complexes facilitate the assembly and function of several components of the respiratory chain and of mitoribosomes, as well as Fe-S cluster biogenesis. In addition to these LYRM protein-dependent actions, mtFAS provides the octanoyl groups that are used in endogenous lipoic acid synthesis and is essential for the lipoylation and function of mitochondrial α-keto acid decarboxylase complexes. Owing to this multiplicity of functions, mtFAS deficiencies in humans result in a pleiotropic phenotype of affected individuals. Of note, there is currently no evidence that mtFAS provides fatty acyl groups to structural lipids, with the exception of a lipid A-like molecule described in plants 23. A study published by Brody and Mikolajczyk 24 showed that mitochondrial ACP isolated from the fungus Neurospora crassa carries the 3-hydroxytetradecanoyl group. Proteolytic analysis of respiratory complex I in bovine heart mitochondria by Carroll et al. in 2003 25 revealed that ACP in complex I carried an extra mass consistent with 3-hydroxytetradecanoic acid. More recently, cryo-electron microscopy structural studies of the porcine heart complex I identified a decanoyl-phosphopantetheine group on ACPs in complex with the LYRM proteins NDUFA6/LYRM6 or NDUFB9/LYRM3 26. All these data indicate that ACP in complex I can carry a long-chain acyl group. Additionally, ACP has been reported to interact with the LYRM protein ISD11/LYRM4 in iron-sulfur cluster biogenesis.
Both structural and mass spectrometric analyses carried out by multiple groups have shown that this ACP-ISD11 interaction is supported by a long-chain fatty acid bound to ACP 8, 27, 28, 29. It has remained unclear whether short-chain acyl groups attached to ACP are sufficient to mediate ACP-LYRM protein functions or whether acyl groups longer than C8 are indispensable for respiration. To shed light on the role of long-chain fatty acids produced by mtFAS, we generated a yeast strain in which synthesis of mitochondrial long-chain fatty acids was abolished by mutating the substrate binding cavity of the enoyl reductase catalyzing the last step of mtFAS. In silico molecular modeling methods were used to identify mutations with the potential to exhibit the desired properties. Six mutations suggested by the models to shorten the substrate binding cavity were tested in vivo for complementation of the respiratory and lipoylation-deficient phenotypes of the Δetr1 yeast strain. The MECR G165Q mutation was chosen for further analysis on the following grounds: (i) the immunoblotting experiments indicated that the mutated human MECR variant was expressed in the Δetr1 yeast strain; (ii) the yeast strain expressing the MECR G165Q variant showed protein lipoylation, suggesting that mtFAS was functional and fatty acids of a minimum length of eight carbons were synthesized; (iii) however, growth testing on glycerol as the carbon source showed that the strain was not respiratory competent. At this point we concluded that the respiratory deficiency of the Δetr1 yeast expressing the MECR G165Q variant was due to changes in the stability, structure or kinetic properties of the engineered protein. To differentiate between these options, MECR G165Q was further characterized in vitro. We expressed and purified both wild-type MECR and MECR G165Q and analyzed their CD spectra and thermal stability. These studies showed that the G165Q mutation in MECR does not affect the structure or the stability of the protein.
Enzyme kinetic studies indicated that the G165Q mutation resulted in an increase in catalytic activity when C8 was used as a substrate, but there was no effect with the C10 substrate, and the activities were significantly lower with the C12 and C14 substrates. The changes in enzymatic activity of the MECR G165Q variant are similar to those of the previously analyzed G165S mutant: both mutations increase the activity with the C8 substrate, but the activities with longer fatty acyl substrates such as C14 are much lower when compared to wild-type 19. We also solved the crystal structures of wild-type and G165Q mutant MECR at higher resolution than in earlier studies 19. The structure agreed quite well with the modeling studies of the G165Q variant, although there are some differences in the hydrogen-bonding network involving Q165 and some other residues in the catalytic domain that were not predicted by the modeling (Supplementary Fig. 7). In wild-type MECR, the substrate binding cavity can accommodate fatty acyl substrates longer than C8 (Fig. 4d). The introduction of the glutamine side chain at position 165 affected neither the overall structure nor the conformation of the catalytic tyrosine (Y94) or the NADPH binding site, but together with conformational changes of an acidic (E107) and a basic (K316) amino acid, the modification induces a change in the acyl binding cavity of the mutated protein (Fig. 4e). As a result, the fatty acyl binding cavity is much shorter in the MECR G165Q variant (Fig. 4e). We also analyzed the binding of substrates of various chain lengths to both the wild-type and G165Q mutant structures by in silico docking studies. Overall, the average docking score values suggested less favorable binding of substrates to the G165Q mutant. Notably, the C8 substrate exhibited a more favorable average internal energy and van der Waals energy in the G165Q mutant than in wild-type MECR.
Also, the catalyzed substrate "core" was closer to the crystallographic position for C8 in the mutant than in wild-type. We hypothesize that the latter observations might be related to the fact that the C8 substrate is catalyzed faster by the G165Q mutant than by wild-type (Table 1). Only the G165Q variant had the desired properties, whereas the variants with the G165H, G165L and G165F point mutations could still process long-chain substrates. Most likely, this is related to the unique hydrogen bond properties of the Q165 side chain (Supplementary Fig. 7), providing the acyl tail binding tunnel with its desired properties. Clearly, in the G165Q structure the predicted mode of binding of the C8 substrate is not affected by the conformation adopted by the Q165 side chain, whereas the longer tails clash with the side chain of Q165, in particular at the C10, C11, C12 and C13 atoms. I129 is located at the bottom of the acyl tail binding tunnel. The I129H and I129F point mutation variants were expressed, but the in vivo data show that these variants are not active (Fig. 3). Apparently the more polar (I129H) and bulkier, more rigid (I129H, I129F) side chains introduce changes in stability, structure or dynamics that are not compatible with the catalytic function of MECR. The modeling calculations did not predict the structural rearrangements introduced by the G165Q mutation, but the availability of a model for the mode of binding of the acyl tail in its tunnel has nevertheless been critically important for understanding the properties of the G165Q variant. Dynamics play an important role in the catalytic cycle of enzymes 30, 31. Notably, despite the crystallographic structures of wild-type and G165Q MECR having the same space group symmetry and being determined at similar resolution, we observed significantly different B-factors for the wild-type and G165Q MECR variants (Supplementary Fig. 6a, b).
Firstly, the B-factors are overall significantly higher for the G165Q variant than for wild-type MECR. Moreover, the magnitude of the differences depends on the region of the crystallized enzyme, suggesting that some parts of the enzyme may fluctuate relatively more or less depending on the presence of this mutation. The whole catalytic domain and, in particular, the residues of the loop containing Y94, which is known to play a critical role in proton transfer during the reaction catalyzed by an enzyme homologous to MECR 18, display much higher B factors in the G165Q mutant than in wild-type, which suggests higher mobility of these regions in the G165Q variant (Supplementary Fig. 6c). Overall, higher fluctuations of the loop near the active site might be related to the observed higher kcat for the C8 substrate in the MECR G165Q mutant than in wild-type MECR (Table 1). The B factors in the loop regions around A133 (shown in Fig. 4d, e) are also much higher in the MECR G165Q variant (Cα B factors around 80-90) than in wild-type MECR (Cα B factors around 40-50). Therefore, the mutation is likely causing instability in this loop region, which shapes the shortened substrate binding pocket of MECR G165Q and possibly explains why the MECR G165Q variant can still catalyze fatty acyl substrates longer than C8. Furthermore, we observe a rearrangement of the hydrogen bonds of Q165 and surrounding residues in the G165Q mutant compared with wild-type MECR (Fig. 4d, e, Supplementary Figs. 7 and 13), associated with the changed conformation of the MECR active site due to the mutation. In the wild-type enzyme, the K316 side chain is directly hydrogen bonded to the backbone of G165, while in the mutant it hydrogen bonds to the Q165 side chain amide and the backbone of P130, which may loosen the dynamical coupling between the two enzyme fragments. Also, breaking of the hydrogen bonds of S313 (backbone) and N132 (side chain) is observed in the mutant.
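Region-wise Cα B-factor averages like those quoted above (around 80-90 vs. 40-50 for the A133 loop) can be extracted directly from the deposited PDB coordinate files. A minimal fixed-column parser sketch; the single ATOM record below is fabricated for illustration:

```python
def ca_bfactors(pdb_lines):
    """Map residue number -> C-alpha B factor from ATOM records
    (fixed PDB columns: atom name 13-16, resSeq 23-26, B factor 61-66)."""
    out = {}
    for line in pdb_lines:
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            out[int(line[22:26])] = float(line[60:66])
    return out

def region_mean_b(bfactors, start, end):
    """Average B factor over residues start..end inclusive."""
    vals = [b for r, b in bfactors.items() if start <= r <= end]
    return sum(vals) / len(vals)

# One fabricated ATOM record to exercise the parser:
demo = ["ATOM      2  CA  ALA A 133      11.000  22.000  33.000  1.00 85.00"]
bf = ca_bfactors(demo)
print(region_mean_b(bf, 130, 135))
```

Running such a parser over the 7AYB and 7AYC coordinate files would reproduce the per-region comparison; here the demo record merely checks the column bookkeeping.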
Finally, the interactions of the highly conserved E107 side chain (Supplementary Table 5) with the highly conserved N83 side chain (located at the base of the catalytic loop containing Y94) and with the amide hydrogen of the V166 backbone are slightly altered. In the wild-type, the E107 side-chain carboxylate oxygen atoms form hydrogen bonds with both N83 and V166, while in the G165Q variant only the hydrogen bond with V166 remains. Overall, these changes in hydrogen-bonding networks may affect the structure and dynamics, and thereby the catalytic function, of the MECR enzyme. Complete loss of mtFAS does not affect the cellular fatty acid profile, and accordingly no detectable changes in the fatty acid profile were observed in the yeast strain expressing the G165Q variant either (Supplementary Fig. 11). These results were expected, because the contribution of mtFAS to the total cellular fatty acid pool is minor compared to that of cytosolic FAS. We were not able to analyze directly the effect of the MECR G165Q mutation on the mitochondrial acyl-ACP pool. Instead, we show that the activities of the lipoic acid-containing enzyme complexes PDH and α-KDH are not affected. Similarly, heme-dependent catalase activity in Δetr1 cells carrying a plasmid expressing the MECR G165Q variant was at the same level as in wild-type cells. These results indicate that the MECR G165Q mutation does not prevent synthesis of short-chain fatty acids up to C8, and the respiratory-deficient phenotype of cells carrying this mutation must therefore be due to the lack of long-chain fatty acids. In the yeast Δetr1 strain, the level of Cox1, a mitochondrially encoded subunit of respiratory complex IV, is severely reduced 4. This is due to a decrease in Cox1 translation caused by a respiratory complex IV assembly defect. We also studied the Cox1 expression level in Δetr1 cells expressing the G165Q mutant MECR and noticed that Cox1 was undetectable.
This is an interesting observation, because ACP has been found associated with the mitochondrial ribosome in mammals and trypanosomes 11, 12. In light of the high conservation of mtFAS-related processes between yeast and mammals and the clear association of mtFAS with mitochondrial translational processes 4, it is conceivable that acylated ACP may also play a role in ribosomal function in yeast mitochondria. We will address the question of the role of long-chain acyl-ACP in mitochondrial translation in future studies, using the tools created for this report. To conclude, our current work demonstrates the successful engineering of the chain-length preference of a lipid-metabolizing enzyme towards medium-chain substrates. The engineering was based on the crystal structure of MECR, molecular modeling and in vitro testing. In vivo experiments showed that both wild-type MECR and the G165Q variant, which has low catalytic activity towards long-chain acyl substrates due to its shortened binding cavity, restored cellular lipoic acid levels of Δetr1 yeast cells. However, only wild-type MECR, and not the substrate length-restricted MECR G165Q variant, was able to support growth of this strain on a non-fermentable carbon source. These results allow the interpretation that mtFAS has a dual role as an integrator in mitochondrial function: to provide long-chain acyl-ACP to maintain mitochondrial respiratory capacity, and octanoyl-ACP for lipoic acid synthesis. The MECR G165Q variant now provides a tool for elucidating, in further studies, the regulatory mechanisms of the long-chain acyl-ACP species independent of the mtFAS function in lipoylation. Methods Computational methods Modeling of the initial complex of wild-type MECR with NADP+ and substrate Modeling of the complexes was performed using the Maestro suite (Schrödinger Release 2017-4: Maestro, Schrödinger, LLC, New York, NY, 2017).
The model was based on the unliganded human MECR crystallographic structure (PDB entry 2VCY 19 ). The missing NADP+ cofactor and 2E-butenoyl-CoA substrate fragment were positioned by alignment with the Candida tropicalis Etr1 structure (PDB entry 4WAS 21 ) in the chain A active site of the modeled MECR dimer. The sequence identity of MECR (UniProt code Q9BV79) and C. tropicalis Etr1 (UniProt code Q8WZM3) is 34%, as calculated from the alignment with the Clustal Omega program 32 at the UniProt webpage 33 (see also Supplementary Fig. 2). The structural alignment of the human MECR and C. tropicalis Etr1 structures (Supplementary Fig. 14) displays distinct conformational differences in the coenzyme A binding site, while the active site protein backbone conformations are similar. Therefore, we focused on modeling the MECR complex only with the 2E-alkyl fragments of substrates. In the protonated human MECR enzyme complex, the alkyl fragments of the 2-alkenoyl substrate series (2E-butenoyl-CoA to 2E-hexadecenoyl-CoA) were built incrementally, starting from the crystallized fragment. Two-carbon fragments were added and minimized in the cavity one at a time, beginning with the shortest and ending with the longest alkyl chain, following the shape of the cavity in the enzyme suggested by the previous mutational study 19 . Based on this modeled complex, positions for the mutations were selected. The coordinates of the initial models for wild-type MECR and selected variants are provided in Supplementary Data 1 . Building and initial selection of the MECR mutations for experimental evaluation Mutants based on the MECR/2E-C16-substrate complex model were built for the selected mutation points. Using Maestro, we computationally substituted the selected positions with amino acid residues and visually inspected the most frequent rotamers.
We evaluated the fit of each mutation/rotamer in the protein structure (to minimize clashes) and the propensity of the mutation to block the alkyl chain of the substrate in the correct location (many clashes with ≥C10 substrates, none or few clashes with ≤C8 substrates). Residue conservation analysis The ConSurf webserver 34 , 35 , 36 , 37 (5 Sep 2018) was also used to gain insight into the amino acid conservation patterns in the sequences homologous to MECR, with the PDB structure 2VCY 19 (chain A) taken as input. The alignment was performed for the UNIREF90 sequences 38 . Docking simulations to the wild-type and mutated MECR variant For the following computations, the Maestro suite ver. 2019-4 was used. Structure preparation was done with the Protein Preparation Wizard tool 39 and docking simulations were performed with Glide 40 , 41 , 42 . Docking simulations were performed to the prepared wild-type and G165Q MECR structures (dimers) obtained in this work, with NADP+ and the substrate fragment modeled into the chain A active site, similarly as for the initially built complex (described above). To avoid docking of very large ligands, which would be difficult even with advanced simulation approaches, the substrates were truncated to fragments containing only the 2E-enoyl tails without coenzyme A or ACP, which serve as substrate carriers (Supplementary Fig. 1). The fragments were further prepared using the LigPrep tool and the Epik program 43 , 44 at pH 7.0. In all docking simulations, to be consistent with the initial modeling data, the OPLS2005 force field was used 45 , 46 . The docking grid size and position were based on the modeled-in 2E-C16 substrate fragment. Docking poses were restrained to the position of the substrate core (defined in Supplementary Fig. 1a) with a standard RMSD threshold of 2.0 Å for the substrate core.
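The core-restrained pose filtering amounts to a simple RMSD check against a reference position. The sketch below is an illustrative reimplementation with toy coordinates, not the Glide implementation; only the 2.0 Å threshold is taken from the text:

```python
import numpy as np

def core_rmsd(pose_core, ref_core):
    """Plain (non-superposed) RMSD in Angstroms between matched substrate-core atoms."""
    d = np.asarray(pose_core) - np.asarray(ref_core)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

def restrain_poses(poses, ref_core, threshold=2.0):
    """Keep only poses whose core RMSD to the reference stays below the threshold."""
    return [p for p in poses if core_rmsd(p, ref_core) < threshold]

# Toy three-atom "core": one pose close to the reference, one displaced by 3 Angstroms.
ref = np.zeros((3, 3))
near = ref + 0.5                          # RMSD ~0.87, kept
far = ref + np.array([3.0, 0.0, 0.0])     # RMSD = 3.0, discarded
kept = restrain_poses([near, far], ref)
```

The same routine with a tighter cut-off (for example 1.4 Å) gives the kind of post-processing filter used to remove poses with a flipped enoyl group.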
This strategy was used because, on the one hand, the considered MECR mutation is located far from the ACP or coenzyme A binding site, so it is reasonable to assume that this binding site is not significantly affected by the mutation (provided the MECR variant folds correctly overall). On the other hand, under these assumptions, the position of the substrate “core” that undergoes the catalyzed reaction is largely determined by the position of the bound coenzyme A (or ACP) carrier, which is stabilized by a precise hydrogen-bonding network with the enzyme and NADPH (as, e.g., in the C. tropicalis Etr1 structure: PDB entry 4WAS, see Supplementary Fig. 15). The best 10 docking poses were refined, and ten output poses were requested. Additionally, ligand poses with an RMSD over 1.4 Å were discarded in post-processing to filter out most poses in which the “core” moiety had a flipped enoyl group. The SP (standard-precision) semi-rigid docking protocol was used rather than XP (extra-precision), because the latter is more appropriate when the receptor is close to the bound conformation. In this study, the docking receptor is modeled on the apo-enzyme, and the MECR conformation near the active site likely differs from that of the holoenzyme: conformational changes were observed between the human MECR apoenzyme (PDB entry 2VCY) and the homologous C. tropicalis Etr1 holoenzyme (PDB entry 4WAS, see Supplementary Fig. 14). These changes could be partially attributed to sequence differences (Supplementary Fig. 2), but likely also to the apo/holo state of the enzyme. The coordinates for the docked substrates in the 7AYB and 7AYC structures are available in Supplementary Data 1 . Cloning and mutagenesis Cloning of the MECR-pYE352 plasmid, containing the coding sequence of the S. cerevisiae COQ3 mitochondrial targeting sequence (MTS) and expressing the H. sapiens MECR ORF under the control of the yeast CTA1 promoter, has been described earlier 47 .
This plasmid was used as a template to generate MECR variants through site-directed mutagenesis with the QuickChange® Site-Directed Mutagenesis kit (Agilent Technologies, Cedar Creek, CA, USA) according to the manufacturer’s instructions. The sense and antisense primers used in mutagenesis are listed in Supplementary Table 6 . All plasmid constructs and mutations were verified by sequencing performed by the Biocenter Oulu Sequencing Center. Yeast strains, media and genetic methods Wild-type BJ1991α ( MATα, leu2, trp1, ura3-52, pep4-3, prb1-1122, gal2 ) 48 and BJ1991α Δetr1 ( MATα, leu2, trp1, ura3-52, pep4-3, prb1-1122, gal2; ybr026c::kanMX4 ) 14 mutant strains of S. cerevisiae were used in this study. The media used were as follows: YPD [1% yeast extract (DIFCO, NJ, USA), 2% Bacto-peptone (DIFCO), 2% D-glucose], SCD [0.67% Yeast Nitrogen Base without amino acids (DIFCO), 0.19% Synthetic complete drop-out mix without uracil (Sigma-Aldrich, St. Louis, MO, USA), 0.008% uracil, 2% D-glucose], SCG [0.67% Yeast Nitrogen Base without amino acids (DIFCO), 0.19% Synthetic complete drop-out mix without uracil (Sigma-Aldrich), 0.008% uracil, 3% glycerol], SCD/-Uracil [0.67% Yeast Nitrogen Base without amino acids (DIFCO), 0.19% Synthetic complete drop-out mix without uracil (Sigma-Aldrich), 2% D-glucose], SCG/-Uracil [0.67% Yeast Nitrogen Base without amino acids (DIFCO), 0.19% Synthetic complete drop-out mix without uracil (Sigma-Aldrich), 3% glycerol]. Solid media were prepared with 2% Agar (Fisher BioReagents TM , Geel, Belgium). Yeast transformations The plasmids were transformed into the BJ1991α wild-type and Δetr1 yeast strains using a one-step yeast transformation protocol 49 .
Yeast respiratory growth assay/spotting assay The yeast strains BJ1991α containing the empty expression plasmid pYE352; BJ1991α Δetr1 containing the empty expression plasmid pYE352; and BJ1991α Δetr1 carrying the wild-type or mutant MECR-pYE352 plasmids were grown overnight (16 h) in synthetic complete medium with glucose lacking uracil (SCD-URA). The overnight yeast cultures were used to inoculate fresh SCD-URA cultures to an optical density of 0.1 and grown for about 4 h. Cells were harvested and adjusted to an OD 600 of 0.5. Serial dilutions (undiluted, 1:10, 1:100 and 1:1000) were made and 2 µl of each dilution was spotted on SCD-URA, SCG-URA and YPD. The plates were grown at +30 °C for 4 days. The mutant samples were spotted on two identical plates poured from the same medium preparation, and both plates contained all controls (BJ1991α containing the empty expression plasmid pYE352; BJ1991α Δetr1 containing the empty expression plasmid pYE352; and BJ1991α Δetr1 carrying wild-type MECR-pYE352). To analyze the loss of mitochondrial DNA (ρ°), BJ1991α Δetr1 carrying wild-type MECR-pYE352 and BJ1991α Δetr1 carrying the mutant G165Q MECR-pYE352 plasmid were transformed with the yeast wild-type Etr1 plasmid 14 . These strains were tested for respiratory growth on fermentable (SCD) and non-fermentable (SCG) growth media. The plates were grown for 4 days at +30 °C. Protein isolation from yeast strains Yeast proteins were isolated via trichloroacetic acid (TCA) precipitation according to a modified protocol from Platta et al. 2004 50 . For the determination of lipoylation and MECR expression levels, BJ1991α containing the empty expression plasmid pYE352; BJ1991α Δetr1 containing the empty expression plasmid pYE352; and BJ1991α Δetr1 carrying the wild-type or mutant MECR-pYE352 plasmids were grown in SCD-URA medium at +30 °C for about 16 h. The overnight cultures were then transferred to 25 ml SC-glycerol-URA medium containing 0.05% glucose and cultured until the OD 600 reached ~4–5.
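The spotting dilution series described above spans roughly four decades of cells per spot. A back-of-the-envelope sketch, assuming the common (and strain-dependent) conversion of OD600 1.0 ≈ 1 × 10^7 cells/ml, which is not stated in the text:

```python
OD_TO_CELLS_PER_ML = 1e7   # assumed conversion for OD600 1.0; strain-dependent
start_od = 0.5             # cultures adjusted to OD600 0.5 before spotting
spot_volume_ml = 2e-3      # 2 microlitres per spot

dilution_factors = [1, 10, 100, 1000]
cells_per_spot = [start_od * OD_TO_CELLS_PER_ML * spot_volume_ml / f
                  for f in dilution_factors]
# roughly 10,000 / 1,000 / 100 / 10 cells per spot under these assumptions
```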
After reaching the desired OD, the cultures were harvested by centrifugation at 3000 x g for 5 min at +4 °C. The cells were washed with 50 ml sterile water and centrifuged again at 3000 x g for 5 min at +4 °C. The cells were suspended in 1 ml sterile water and centrifuged at 20,800 x g for 10 s. The wet weight of the cells was determined and 1 ml of sterile water was added per 100 mg of cells. 300 µl of the cell suspension was transferred to Eppendorf tubes and 15 µl of 1 M potassium phosphate buffer (pH 7.4) was added. 100 µl of 50% TCA was added and mixed well. Cells were incubated at −70 °C for 30 min, thawed on ice and centrifuged at 20,800 x g for 10 min. After discarding the supernatant, 500 µl of ice-cold 80% acetone was added to wash the lysate. The lysate was centrifuged for 5 min at 20,800 x g and the resulting pellet was resuspended in 60 µl of freshly prepared 1% SDS/0.1 M NaOH solution. Protein isolation from yeast mitochondria Mitochondria isolation was performed according to Meisinger et al. 2000 51 to analyze the steady-state level of the mitochondrial respiratory chain complex protein Cox1. The yeast strains BJ1991α containing the empty expression plasmid pYE352; BJ1991α Δetr1 containing the empty expression plasmid pYE352; and BJ1991α Δetr1 carrying the wild-type or mutant MECR-pYE352 plasmids were grown in SCD-URA medium at +30 °C for about 16 h. The overnight cultures were then transferred to 50 ml SCD-URA medium and cultured for ~24 h, followed by a 400 ml culture for ~24 h until the OD 600 reached ~5. The cultures were harvested by centrifugation at 3000 x g for 5 min at room temperature. The cells were washed with 25 ml sterile water and centrifuged again at 3000 x g for 5 min at room temperature before being resuspended in 2 ml DTT buffer (100 mM Tris-H 2 SO 4 , pH 9.4, 10 mM DTT) per gram of cells and shaken slowly (~100 rpm) at +30 °C for 20 min.
The cells were centrifuged again at 3000 x g for 5 min at room temperature and washed with 25 ml of zymolyase buffer (1.2 M sorbitol, 20 mM potassium phosphate buffer pH 7.4) before being centrifuged again at 3000 x g for 5 min at room temperature. Each gram of cell pellet was resuspended in 7 ml zymolyase buffer containing 5 mg of zymolyase 20 T (MP Biomedical, Irvine, CA, USA) and incubated with slow shaking (~100 rpm) at +30 °C for 45 min. Homogenization was carried out by 15 strokes in a glass-Teflon potter in 30 ml ice-cold homogenization buffer (0.6 M sorbitol, 10 mM Tris-HCl buffer pH 7.4, 1 mM EDTA, 1 mM PMSF, 0.2% (w/v) BSA). The homogenate was centrifuged at 1500 x g for 5 min at +4 °C and the organelle-containing supernatant was collected. The supernatant was centrifuged at 3000 x g for 5 min at +4 °C. Crude mitochondria were then pelleted from the supernatant by centrifugation at 12,000 x g for 15 min at +4 °C. The mitochondrial pellet was washed three times with 25 ml ice-cold SEM buffer (250 mM sucrose, 1 mM EDTA, 10 mM Mops, pH 7.2) by centrifugation at 12,000 x g for 15 min at +4 °C. The resulting mitochondrial pellet was resuspended in SEM buffer. Western blot analysis Protein samples extracted from the yeast cells (by TCA precipitation) and crude mitochondrial proteins were separated on SDS-PAGE. Equal amounts of protein from TCA precipitation were loaded on 12% SDS-PAGE gels, or 17 µg of crude mitochondrial protein was loaded on 4-20% 10-well Mini-PROTEAN® TGX TM precast gels (Bio-Rad Laboratories, Hercules, CA, USA). The proteins were transferred onto a 0.2 µm nitrocellulose membrane (Trans-Blot Turbo Transfer pack, Bio-Rad) using the Trans-Blot Turbo Transfer System (Bio-Rad). The membranes were blocked using 5% skim milk in TBST (20 mM TRIS base, 137 mM NaCl, pH 7.6 with HCl, 0.1% Tween20).
Polyclonal rabbit anti-lipoic acid antiserum (EMD Millipore Corporation, USA; 437695; RRID: AB_212120; 1:2500) was used for detection of lipoylated proteins, anti-Mecr antibody (ProteinTech Group, USA; 51027-2-AP; RRID: AB_10863346; 1:3000) to detect MECR, and mouse monoclonal anti-MTCO1 antibody (anti-COX1) (Abcam, UK; ab110270; RRID: AB_10863346; 1:333) to detect the levels of Cox1. Ponceau staining, β-actin (Abcam; [mAbcam8224]; ab8224; RRID: AB_449644; 1:3000) and anti-VDAC1/Porin (Abcam, UK; [16G9E6BC4]; ab110326; RRID: AB_10865182; 1:1000) immunodetection were used as loading controls. Secondary antibodies were anti-mouse IgG (H + L), HRP conjugate (Promega, USA; W4021; RRID: AB_430834; 1:2500), or Immun-Star goat anti-rabbit (GAR)-HRP conjugate antibody (Bio-Rad; 170-5046; clone number 430; RRID: AB_11125757; 1:2500). The antibody detection reaction was developed using Clarity™ western ECL substrate (Bio-Rad), a Molecular Imager ChemiDoc TM XRS + and Image Lab version 3.0 software (Bio-Rad). Catalase, PDH, α-KDH and fumarase activity assays For enzyme activity assays, yeast cells were grown as described in the mitochondria isolation procedure above. For the catalase activity assay, 2 ml of spheroplasts were harvested after addition of homogenization buffer (0.6 M sorbitol, 10 mM Tris-HCl buffer pH 7.4, 1 mM EDTA, 1 mM PMSF, 0.2% (w/v) BSA). The catalase activity was measured from yeast spheroplasts as described 52 . Mitochondrial fractions for the PDH, α-KDH and fumarase activity assays were prepared in the same manner as described in Schonauer et al. 2009 53 . Mitochondrial pellets were suspended in breaking buffer (0.1 M potassium phosphate buffer pH 7.4, 4 mM EDTA, 10 µM thiamine pyrophosphate, 0.2 mM phenylmethanesulfonyl fluoride, 0.68 mg liter −1 pepstatin A, cOmplete™ EDTA-free Protease Inhibitor Cocktail (Roche)), followed by the addition of Triton X-100 to a final concentration of 1%. Particulate matter was removed by centrifugation for 20 min at 37,000 x g .
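Activities in spectrophotometric NAD(P)H assays of this kind are computed from the absorbance slope at 340 nm via the Beer-Lambert law. A minimal sketch; the extinction coefficient (the textbook 6,220 M⁻¹ cm⁻¹ for NAD(P)H) and the 1 cm path length are assumptions, not values stated in the text:

```python
NADH_EXT_COEFF = 6220.0  # M^-1 cm^-1 for NAD(P)H at 340 nm (textbook value, assumed)
PATH_LENGTH_CM = 1.0     # cuvette path length, assumed

def rate_in_uM_per_min(delta_a340_per_min):
    """Convert an absorbance change per minute at 340 nm into uM NAD(P)H per minute."""
    return delta_a340_per_min / (NADH_EXT_COEFF * PATH_LENGTH_CM) * 1e6

rate = rate_in_uM_per_min(0.062)  # an example slope of 0.062 A340/min
```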
PDH and α-KDH activities were assessed by following the formation of NADH at 340 nm as described 53 . Fumarase activities were measured as described 54 . Protein concentrations in spheroplast and mitochondrial extracts were estimated by the Bradford assay (Bio-Rad) using bovine serum albumin (Sigma) as a standard. Total fatty acid analysis from yeast cells Lipids and fatty acids from 100 × 10 6 yeast cells were extracted by adding 170 µl 0.1 M HCl, 280 µl chloroform/methanol (50/50, v/v) and 20 µl of 10 µM [2H31] hexadecanoic acid as an internal standard. After centrifugation, the upper phase was extracted again with 300 µl of CHCl3/MeOH/H2O (70/40/10, v/v/v) and the lower phases were combined. 100 µl of the combined lower phases was evaporated to dryness under a stream of nitrogen gas at 50 °C. The dried lipid pellet was redissolved in 180 µl methanol. After addition of 20 µl 3 M KOH (in water), the resulting solution was heated to 80 °C for 2 h. Nonpolar lipids were removed by extracting twice with 500 µl hexane. After acidification with 50 µl formic acid, free fatty acids were extracted twice with 500 µl hexane each. The combined fatty acid extracts were evaporated to dryness under a stream of nitrogen at 50 °C. The dry sample extracts were redissolved in 50 µl iPrOH. For LC/MS analysis, 3 µl of sample was applied to an Accucore Biphenyl column (2.6 μm particles, 100 × 2.1 mm) (Thermo Scientific, Bremen, Germany) at 30 °C. Fatty acids were separated with mobile-phase buffer A containing 5 mM NH4OAc in acetonitrile/water (95/5, v/v) and solvent B consisting of 5 mM NH4OAc in acetonitrile/water (95/5, v/v). After sample application, the LC gradient program was 0% solvent B for 2 min, followed by a linear increase to 100% solvent B within 8 min, then maintaining 100% B for 9 min. The flow rate was maintained at 200 μl/min. The eluent was directed to the ESI source of the QE-MS from 2.0 min to 13.0 min after sample injection.
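One plausible reading of the gradient program above (0% B for 2 min, a linear ramp reaching 100% B 8 min later, then a 9 min hold) can be written as a small helper; the exact piecewise interpretation is an assumption where the text is ambiguous:

```python
def percent_b(t_min):
    """Mobile-phase %B at time t (minutes): 0% for the first 2 min,
    a linear ramp from 0% to 100% between 2 and 10 min, then 100% held."""
    if t_min <= 2.0:
        return 0.0
    if t_min <= 10.0:
        return (t_min - 2.0) / 8.0 * 100.0
    return 100.0
```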
MS analysis was performed on a Q-Exactive mass spectrometer (Thermo Scientific) applying the following scan and HESI source parameters: scan range: 150–500 m/z (negative mode); resolution: 70,000; AGC target: 1E6; maximum injection time: 200 ms; sheath gas: 20; auxiliary gas: 1; sweep gas: 0; aux gas heater temperature: 120 °C; spray voltage: 3.0 kV; capillary temperature: 320 °C; S-lens RF level: 50.0. Signal determination and quantitation were performed using TraceFinder V3.3 software (Thermo Fisher). Escherichia coli strains The E. coli strain TOP10F′ (F′ [ lac I q , Tn 10(Tet R )] mcr A Δ( mrr - hsd RMS- mcr BC) φ80 lac ZΔM15 Δ lac X74 rec A1 ara D139 Δ( ara-leu )7697 gal U gal K rps L(Str R ) end A1 nup G) was used for plasmid cloning and propagation (Invitrogen, Carlsbad, CA, USA). The E. coli BL21 Star (DE3) pLysS strain was used to overexpress and purify the wild-type and mutant proteins for the kinetic assays and crystallization screens. Cloning, overexpression and protein purification MECR-pET23a plasmid constructs encoding wild-type MECR or MECR mutants without the mitochondrial targeting sequence and with a C-terminal His-tag were generated as described in Miinalainen et al. 2003 15 . The E. coli BL21 Star (DE3) pLysS strain was transformed with these plasmids for overexpression and cultured in LB medium containing ampicillin and chloramphenicol at +37 °C. Expression of the MECR wild-type and mutant proteins was induced with 0.4 mM isopropyl-β-D-thiogalactoside (IPTG) at OD 600 0.6–0.7 overnight at +37 °C. Cells expressing the MECR wild-type and mutant proteins were harvested, resuspended in binding buffer (5 mM imidazole, 500 mM NaCl, and 20 mM Tris-HCl, pH 7.9) and sonicated. After centrifugation, the supernatant was mixed with Ni-NTA Superflow matrix (Qiagen, Hilden, Germany) and His-tagged proteins were bound to the matrix at +4 °C for 1 h.
Unbound proteins were washed away with binding buffer followed by washing buffer (32.5 mM imidazole, 500 mM NaCl, and 20 mM Tris-HCl, pH 7.9). The protein was eluted with a linear imidazole gradient from 32.5 mM to 500 mM, and the fractions containing His-tagged wild-type MECR or the MECR G165Q mutant were collected and pooled. The pooled fractions were concentrated and applied to a size-exclusion chromatography column (Superdex TM 200 10/300, GE Healthcare, Uppsala, Sweden). A buffer containing 100 mM sodium phosphate, 150 mM NaCl, 1 mM EDTA, 1 mM NaN 3 , pH 7.4 and 10% glycerol was used as the elution buffer; this buffer is also the protein storage buffer. The peak fractions were combined, concentrated and stored at −70 °C until use. Protein purity was confirmed by SDS-PAGE analysis. Determination of kinetic parameters for MECR WT and MECR G165Q The 2E-enoyl-CoA reductase activities of the wild-type and G165Q MECR variants were determined spectrophotometrically using a JASCO V-660 spectrophotometer with JASCO Spectra Manager software. The reaction mixture contained 125 µM NADPH, 100 µg of bovine serum albumin in 50 mM potassium phosphate, pH 7.5, and either the wild-type or G165Q mutant MECR. The reactions were initiated by adding the substrate to a final volume of 500 µl at 25 °C. The substrates used were 2E-hexenoyl-CoA, 2E-octenoyl-CoA, 2E-decenoyl-CoA, 2E-dodecenoyl-CoA, 2E-tetradecenoyl-CoA and 2E-hexadecenoyl-CoA at final concentrations of 1.25–180 µM. The kinetic data were fitted to the Michaelis-Menten equation using GraphPad Prism software, and catalytic turnover numbers were calculated as described in Miinalainen et al. 2003 15 . Circular dichroism (CD) spectroscopy The CD spectra of the purified proteins were obtained using a Chirascan™ CD spectrophotometer (Applied Photophysics, Surrey, U.K.) in a quartz cuvette with a 1 mm path length. Before the experiments, the protein buffer was exchanged into 10 mM potassium phosphate buffer, pH 7.6.
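The Michaelis-Menten fitting mentioned above (done with GraphPad Prism in the study) can be illustrated with synthetic, noise-free rates; a linearized Lineweaver-Burk fit recovers the parameters exactly here, although for real noisy data a nonlinear fit of the hyperbola is preferred. All numbers below are invented for illustration:

```python
import numpy as np

def michaelis_menten(s, vmax, km):
    """Initial-rate model: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Substrate concentrations spanning the assay range (uM) and synthetic rates
# generated from assumed parameters Vmax = 12, Km = 20 uM.
s = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 45.0, 90.0, 180.0])
v = michaelis_menten(s, 12.0, 20.0)

# Lineweaver-Burk linearization: 1/v = (Km/Vmax) * (1/s) + 1/Vmax
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax_fit = 1.0 / intercept
km_fit = slope * vmax_fit
# kcat would then follow as Vmax divided by the molar enzyme concentration.
```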
During the experiments, the protein concentrations were 0.34 mg/ml and 0.21 mg/ml for wild-type MECR and the MECR G165Q mutant, respectively. For the determination of the T m , the sample was heated at a rate of 1 °C per minute from +22 to +94 °C. Data analysis was carried out using Pro-Data Viewer (Applied Photophysics), CDNN ( ) and Global3 (Applied Photophysics). Crystallographic studies of wild-type MECR and the MECR G165Q variant Crystallization trials were carried out in the crystallization facility of the Biocenter Oulu Structural Biology core facility using the sitting-drop vapor diffusion method. Initial screening was done with wild-type MECR (19.5 mg/ml, in 100 mM sodium phosphate, 150 mM NaCl, 1 mM EDTA, 1 mM NaN 3 , pH 7.4 and 10% glycerol) using the in-house screen 55 and IQ 96-well sitting-drop plates (SPT Labtech, Melbourn, Hertfordshire, U.K.) with the Mosquito LCP nanodispenser (SPT Labtech) at two different temperatures (+4 °C and +22 °C). The plates were imaged using Formulatrix Rock Imagers RI54, and the crystallization results were monitored with the IceBear software expert system 56 . MECR crystals were obtained at +4 °C directly from the in-house screen and optimized in 16.82% PEG 6000, 1 M NaCl, 100 mM acetic acid, pH 4.5. The volume of the crystallization drop was 300 nl and the drop ratio was 1:2 for the reservoir and protein sample, respectively. The wild-type MECR crystals appeared in 20 h and grew to their final size in 5 days. The MECR G165Q variant (8.7 mg/ml) was crystallized in the same condition, also at +4 °C, as used for the wild-type. The crystals appeared in 2 days and grew to their final size in 5 days. For the X-ray experiments, crystals were cryocooled in the cold room by transfer into liquid nitrogen. Before cryocooling, the crystals were incubated for a few seconds in the cryosolution, which was made by mixing 7 µl of the well solution and 3 µl of 100% glycerol for all protein crystals.
From both crystals, data sets with resolutions of 2.2 Å (wild-type MECR) and 2.5 Å (MECR G165Q) were obtained using the in-house Microstar X8 Proteum Cu-rotating anode X-ray generator (Bruker) of the Biocenter Oulu Structural Biology core facility. A synchrotron source was also used for data collection, and higher-resolution data sets were obtained at the Diamond Light Source (DLS, Didcot, United Kingdom) beamline I04 (1.85 Å for wild-type MECR and 2.02 Å for the MECR G165Q mutant). The home source data were integrated and processed using the Proteum2 software package (Bruker), whereas the synchrotron data were integrated with the DLS autoprocessing pipelines based on XDS (wild-type MECR) or DIALS (G165Q variant) 57 , 58 . All data were scaled with AIMLESS 59 . The previously determined MECR structure (PDB entry 2VCY 19 ) was used as the initial search model in the molecular replacement calculations for wild-type MECR with PHASER 60 , using the 2.2 Å in-house diffraction data. The structure of wild-type MECR was in turn used as the model to solve the structure of the mutated variant from the in-house X-ray data. However, structure refinement and model building were performed using the synchrotron data sets. Refinement and model validation were done using the Phenix package 61 and model building was done with COOT 62 . The final refinements, excluding model building, were performed with the PDB-REDO web server 63 . A summary of the refinement and validation results is presented in Supplementary Table 3 . The coordinates and structure factors of the refined structures have been deposited in the Protein Data Bank with accession codes 7AYB (wild-type MECR) and 7AYC (MECR G165Q variant). Statistical analysis Results are presented as mean ± standard deviation (SD). Data were analyzed by two-tailed Student's t-test, and P-values <0.05 were considered significant.
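The two-tailed Student's t-test reported above can be sketched in a few lines; the triplicate values here are hypothetical, and in practice scipy.stats.ttest_ind returns the P-value directly:

```python
from statistics import mean, stdev

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance
    (equal-variance assumption) and its degrees of freedom."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# Hypothetical triplicate activity measurements (arbitrary units):
wt = [10.1, 9.8, 10.3]
mut = [6.9, 7.2, 7.0]
t, df = students_t(wt, mut)
# |t| is then compared with the critical value t(0.975, df) to decide P < 0.05.
```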
Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Atomic coordinates and structure factors of the wild-type and G165Q mutant MECR have been deposited in the Protein Data Bank under accession codes 7AYB (wild-type MECR) and 7AYC (MECR G165Q variant). Raw diffraction images are available at IDA ( ). Previously published crystal structures used to derive the models shown are 2VCY 19 , 1ZSY and 4WAS 21 . The authors declare that all other data supporting the findings of this study are available within the paper and its Supplementary Information, Supplementary Data 1 and Source Data files. Source data are provided with this paper.
Cellular respiration is a complex and highly regulated process that allows cells to draw energy from nutrition. An international team of scientists in Finland, Germany and Poland has investigated the important role of long-chain fatty acids in guiding this process. The findings, published in Nature Communications, will improve the understanding of conditions that involve disruptions in cellular energy metabolism. Mitochondria are tiny and highly efficient energy factories operating inside our cells. Often referred to as "powerhouses," they extract most cellular energy from nutrition. Researchers from the University of Oulu (Finland), the Heidelberg Institute for Theoretical Studies (HITS, Germany) and the University of Warsaw (Poland) have now demonstrated how long-chain fatty acids regulate the amount of energy drawn in this process, called cellular respiration. The discovery is ground-breaking, as the importance of long-chain fatty acids produced by mitochondria in cellular respiration was not previously known, and the results open up a completely novel approach. "This information helps us understand diseases that involve impaired mitochondrial function and cellular respiration much better than before," says M. Tanvir Rahman from the University of Oulu, lead author on the paper. The study is part of a more extensive research project investigating the connection between cellular respiration and the cell's nutritional state. The scientists used a protein engineering approach in which mutants of the MECR enzyme, involved in mitochondrial fatty acid synthesis, were designed using computational molecular modeling, with structure determination by crystallography and other experiments used to validate the predictions. "Our study is an example of a successful case of targeted protein modification," says researcher Kaija Autio from the University of Oulu.
The experiments in this interdisciplinary study were carried out by biochemists and crystallographers from the Faculty of Biochemistry and Molecular Medicine of the University of Oulu and Biocenter Oulu, while the molecular modeling was done by computational biophysicists from the Heidelberg Institute for Theoretical Studies (HITS) and the University of Warsaw. "This study really demonstrates the value of combining computational and experimental approaches to reveal complex biomolecular mechanisms," says Rebecca Wade from HITS.
10.1038/s41467-023-36358-7
Biology
Scientists discover the basics of how pressure-sensing Piezo proteins work
Yi-Chih Lin et al, Force-induced conformational changes in PIEZO1, Nature (2019). DOI: 10.1038/s41586-019-1499-2 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1499-2
https://phys.org/news/2019-08-scientists-basics-pressure-sensing-piezo-proteins.html
Abstract PIEZO1 is a mechanosensitive channel that converts applied force into electrical signals. Partial molecular structures show that PIEZO1 is a bowl-shaped trimer with extended arms. Here we use cryo-electron microscopy to show that PIEZO1 adopts different degrees of curvature in lipid vesicles of different sizes. We also use high-speed atomic force microscopy to analyse the deformability of PIEZO1 under force in membranes on a mica surface, and show that PIEZO1 can be flattened reversibly into the membrane plane. By approximating the absolute force applied, we estimate a range of values for the mechanical spring constant of PIEZO1. Both methods of microscopy demonstrate that PIEZO1 can deform its shape towards a planar structure. This deformation could explain how lateral membrane tension can be converted into a conformation-dependent change in free energy to gate the PIEZO1 channel in response to mechanical perturbations. Main Piezo channels are mechanosensitive, nonselective cation channels that mediate force-detection in eukaryotic cells 1 , 2 , 3 . They transduce mechanical stimuli in many different physiological processes, including touch sensation 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 . As a consequence, deficiency or gain-of-function mutations have been linked to diseases, underscoring their medical importance 12 , 13 , 14 , 15 . Piezo channels are large proteins (>2,500 amino acids) with 38 predicted transmembrane helices per subunit 1 . Several partial molecular structures of mouse PIEZO1 have recently been determined using cryo-electron microscopy (cryo-EM) 16 , 17 , 18 . These structures show a triskelion-shaped homotrimer with a central pore module, a C-terminal extracellular domain and long, bent arms projecting away from the central threefold axis, with helical beams near the attachments of the arms to the pore module 16 (Extended Data Fig. 1a ). Transmembrane helix (TM)37 and TM38 form the pore module. 
The arms are formed from TM13 to TM36, which are arranged in six four-helical repeats (Extended Data Fig. 1b ). In cryo-EM structures, the detergent micelle follows a curved contour to satisfy the non-planar shape of PIEZO1 (Extended Data Fig. 1a , bottom), and in lipid vesicles PIEZO1 causes the membrane to curve locally into a dome 16 . It has previously been suggested that the arms of PIEZO1 might act as levers to sense force for gating 18 , 19 , 20 , and a model for sensing membrane tension, through the change in in-plane area that results from dome-flattening, has previously been proposed 16 . Various mechanical stimuli 3 , 4 , 5 , 21 , 22 have been used to activate PIEZO1 that—when open—give rise to a single channel conductance of about 29 pS, with a substantial inactivation period 2 , 19 . These methods of activation could be consistent with either of the classical models for mechanical gating: the ‘lateral membrane tension’ model 16 , 21 , 22 , 23 , 24 , 25 , 26 (Fig. 1a ) and the ‘tethered spring’ model 19 , 23 , 27 , 28 , 29 (Fig. 1b ). Owing to the complexity of cell membranes and membrane patches 30 , 31 and the potential multitude of pathways that lead to channel activation 19 , 20 , 32 , 33 , the quantitative and mechanistic identification of force transduction remains challenging. Fig. 1: Proposed activation mechanisms of PIEZO1. a , Lateral membrane tension model. Changes in membrane properties (for example, tension or curvature) lead to a gating force applied onto PIEZO1. b , Tethered spring model. The PIEZO1 channel is activated through interactions with the cytoskeleton or the extracellular matrix. CED, C-terminal extracellular domain. Full size image In this study, we analyse PIEZO1 channels in lipid vesicles of different sizes to determine how the radius of curvature of the vesicle influences the shape of PIEZO1. 
We also analyse PIEZO1 reconstituted into supported lipid membranes using high-speed atomic force microscopy (HS-AFM), which can simultaneously provide structural and dynamical information on single biomolecules 34 , and—importantly for the investigation of a mechanosensitive channel—permits the application of controlled force during image acquisition 35 . These data characterize the structural response of PIEZO1 to mechanical force, the biologically relevant physical stimulus for this channel.

Behaviour of PIEZO1 channels in lipid vesicles

We used cryo-EM to study PIEZO1 channels embedded in vesicles that consisted of 1-palmitoyl-2-oleoyl- sn -glycero-3-phosphocholine (POPC), 1,2-dioleoyl- sn -glycero-3-phospho- l -serine (DOPS) and cholesterol at an 8:1:1 (w:w:w) ratio (Fig. 2a ). In the absence of PIEZO1, these vesicles form spheres because the membrane bending energy is minimized 36 . Single PIEZO1 channels are visible in some of the vesicles (Fig. 2a , inset). In projection, and when viewed down the pore axis, the arms bend at the elbow either clockwise or anticlockwise, depending on whether a channel is being viewed from its extracellular or its intracellular surface. PIEZO1 reconstitutes with a preferred orientation in which its extracellular surface faces the inside of a vesicle—probably owing to its intrinsic curvature. Averages of these projected views fit well to top and bottom views of the atomic model of PIEZO1 16 , 17 , 18 , which indicates that PIEZO1 reconstituted in vesicles has a structure similar to PIEZO1 in detergent 16 , 17 , 18 (Fig. 2b ).

Fig. 2: Reconstitutions of PIEZO1 in vesicles exhibit various orientations in cryo-EM micrographs. a , PIEZO1 channels reconstituted in POPC:DOPS:cholesterol (8:1:1) vesicles (≥1,000 images). Top- and bottom-view or side-view particles are highlighted by white or yellow arrowheads, respectively. Inset, magnified and contrast-adjusted top-view PIEZO1 with left-handed curved arms (red arrowheads).
b , Averages of the top-view ( n = 322) and bottom-view ( n = 120) PIEZO1 compared to the structural model (RCSB Protein Data Bank code (PDB) 6B3R). The handedness of the three arms in projection permits the determination of PIEZO1 orientation. Scale bars, 20 nm.

Viewed from the side, it is evident that PIEZO1 distorts vesicles into a teardrop shape, with the channel located at the region of highest curvature (yellow arrowheads in Fig. 2a ). Detailed inspection of individual channels shows density for the C-terminal extracellular domain inside the vesicle, and the intrinsic curvature of PIEZO1 distorting the vesicle away from its spherical shape to a surface that is more-highly curved locally (Fig. 3a ). This means that PIEZO1 is applying force onto the membrane, and that the membrane is applying force onto the channel. To investigate this interaction, we identified 1,166 side views of PIEZO1, binned them into groups according to vesicle size and generated averaged images (Fig. 3b ). We then fit circles to a small segment of arc length centred on PIEZO1 and centred exactly opposite PIEZO1 (Fig. 3c , Extended Data Fig. 2 , Methods). We define the radii of these circles as the radius of curvature ( R c ) of the inner and outer membrane leaflet projections, and the average value as the mid-membrane R c at PIEZO1 and at the vesicle pole opposite PIEZO1 (Fig. 3d ). The data lead to two conclusions. First, PIEZO1 adopts different curvatures as a function of vesicle size. Second, in larger vesicles, PIEZO1 remains more-highly curved than the membrane at the opposite pole. In other words, PIEZO1 curvature persists, which implies that PIEZO1 probably exhibits some degree of curvature even in a planar membrane (as R c approaches infinity) in the absence of applied tension.

Fig. 3: PIEZO1 channels become flatter in large vesicles. a , Cryo-EM image of a vesicle with a PIEZO1 channel in side-view (representative of 1,166 particles).
b , Comparison of the average membrane densities at the opposite pole (top) and at PIEZO1 (bottom). Vesicles with 13-nm ( n = 19), 19-nm ( n = 25) and 31-nm ( n = 19) R c (opposite pole) are shown. c , Circles defining the R c for outer (red) and inner (blue) membrane leaflets at the opposite pole (top) and at PIEZO1 (bottom). d , The midplane R c for PIEZO1 is graphed against the midplane R c at the opposite pole (circles and dashed curve). The straight dotted line shows the relationship for spherical vesicles. Data are mean ± 95% confidence intervals of the fitted radii ( n ≥ 15).

These experiments show that PIEZO1 is capable of undergoing at least some degree of flattening in response to force applied through the membrane. In this case, the force originates from the vesicle-imposed curvature of a membrane with some degree of stiffness. In living cells, even larger forces may be expected, and may be mediated through lateral membrane tension 16 , 21 , 22 , 23 , 24 , 25 , 26 , attached tethers 19 , 23 , 27 , 28 , 29 or both. Next, we used HS-AFM to investigate whether PIEZO1 can change its shape in a reversible manner.

HS-AFM of PIEZO1 in supported membranes

HS-AFM imaging is mediated by raster-scanning the sample with a nanometric tip at the end of a cantilever that oscillates at resonance frequency (about 600 kHz). The topography (that is, the z dimension) is a surface that is contoured by the same oscillation-setpoint amplitude ( A set ), which must be smaller than the amplitude of the cantilever when it swings freely ( A free ). The ratio of A set to A free defines how much the oscillation is damped through the sample interaction. Thus, at constant A free , lowering A set leads to a higher applied force ( F HS-AFM ) on each tap (Fig. 4a ).
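This damping relation can be put into numbers with the average-force approximation derived below as equation (1), using the cantilever parameters listed in the Methods ( k = 0.15 N m−1, Q ≈ 1.5, A free ≈ 1.5 nm); a minimal sketch, not part of the original analysis code:

```python
import math

def avg_force_pN(k_N_per_m, Q, A_free_nm, A_ratio):
    """Average tip-sample force per oscillation cycle, equation (1):
    <F> ~ (k * A_free / 2Q) * sqrt(1 - (A_set/A_free)^2)."""
    k_pN_per_nm = k_N_per_m * 1e3  # 1 N/m = 1000 pN/nm
    return (k_pN_per_nm * A_free_nm / (2 * Q)) * math.sqrt(1 - A_ratio**2)

# Cantilever parameters from the Methods: k = 0.15 N/m, Q ~ 1.5, A_free ~ 1.5 nm
for r in (0.95, 0.9, 0.8, 0.5):
    print(f"A_set/A_free = {r:.2f} -> <F> ~ {avg_force_pN(0.15, 1.5, 1.5, r):.0f} pN")
```

At A set / A free = 0.5 this evaluates to roughly 65 pN, the average-force value quoted later in the Methods for the force-sweep conditions.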
The peak force and average force during an oscillation cycle can be determined by the analysis of the force trajectories from experiment 37 or by numerical simulation 38 , 39 using the point-mass model 40 (Extended Data Fig. 3 , Methods). In our HS-AFM setup, the average applied force \(\left(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \right)\) to the imaged objects can be approximated by $$\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \approx \frac{k{A}_{{\rm{free}}}}{2Q}{\left[1-{\left(\frac{{A}_{{\rm{set}}}}{{A}_{{\rm{free}}}}\right)}^{2}\right]}^{1/2}$$ (1) in which k and Q are the cantilever spring constant and quality factor, respectively. Controlling the A set / A free ratio thus enables the physical manipulation of PIEZO1 while observing its structural changes in response to \(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \) . The use of this approximation to quantify an average force seems justified, because the peak force application—which exceeds \(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \) —is applied during a short period (only around 200 ns) (Extended Data Fig. 3 ), many orders of magnitude faster than the reaction rate of PIEZO1 19 . In these conditions, the channel is expected to respond to an average force, whereas the peak force can be considered as the upper bound.

Fig. 4: HS-AFM experiments of PIEZO1. a , Schematic of force-controlled HS-AFM imaging of membrane-embedded PIEZO1. The ratio A set / A free defines \(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \) . b , Top, simulated topographies of PIEZO1 in the detergent micelle viewed from the extracellular (left) and the intracellular (right) faces. The membrane was set as a uniform height level extending from the most-peripheral resolved transmembrane helices. Three black arrowheads indicate the three arms. Bottom, section profiles of the simulated topographies.
c , d , HS-AFM images at specific \(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \) of PIEZO1 viewed from the extracellular ( c , about 20 pN and about 50 pN) and intracellular ( d , about 30 pN) faces. Right, section profiles (red traces) of the topographies. Extracellular face, three arms of PIEZO1 are observed within the deep ring area (as highlighted by the radial profile with approximately 120° periodicity (green trace)). The intracellular face shows a featureless dome. HS-AFM images are representative of ≥ 100 particles from ≥ 5 different samples.

To investigate the morphology of PIEZO1 in HS-AFM images, we simulated the topography of PIEZO1 using the cryo-EM map of the protein in micelle (Extended Data Fig. 1a bottom, Methods). Given the two-dimensional confinement of the channels in supported bilayers, topographies viewed from opposite faces of the three-fold axis result in either a bowl-shaped topography with central plug (Fig. 4b top left, extracellular face) or a dome-shaped topography (Fig. 4b top right, intracellular face). The simulated topographies (Fig. 4b top) and height section profiles (Fig. 4b bottom) reveal a central protrusion from the membrane surface on the extracellular or intracellular face that is approximately 1 nm or approximately 9 nm high, respectively. Most notably, the extracellular face should (owing to its bowl-shaped, web-like protein-membrane architecture) produce a topography that features a triangular ring of ‘negative height’ between the C-terminal extracellular domain and the periphery of the arms, which is a recognizable feature that we term the halo. For HS-AFM, we found the best conditions were those in which PIEZO1 channels were reconstituted into small unilamellar vesicles of 1-palmitoyl-2-oleoyl- sn -glycero-3-phosphoethanolamine (POPE) and 1-palmitoyl-2-oleoyl- sn -glycero-3-phospho-(1′-rac-glycerol) (POPG) (at a ratio of 85:15, w:w).
These vesicles spread into a continuous, supported lipid bilayer with embedded PIEZO1 channels. When imaged under both low (about 20 pN) and high (about 50 pN) scanning force, the extracellular face of PIEZO1 was identifiable by the halo that surrounds a central protruding cap, and three membrane-extended arms that reach out into the membrane plane (Fig. 4c left two panels). The radial profile within the halo area of the extracellular PIEZO1 face imaged at about 50 pN highlights the three-fold symmetry of the channel, with the three arms protruding with approximately 120° periodicity from a presumably suspended bilayer between the arms (green trace in Fig. 4c right, Supplementary Video 1 ). The intracellular face of PIEZO1 exhibits a featureless dome with around 8.2-nm height above the membrane (Fig. 4d ). However, the dome-like intracellular face was observed only rarely, which is consistent with vesicles bursting on mica to expose the concave extracellular face of PIEZO1 to the tip of the HS-AFM. The experimental topographies (Fig. 4c, d ) resemble qualitatively the simulated topographies (Fig. 4b ); however, PIEZO1 viewed from the extracellular side matched the structure only when imaged at low force (around 20 pN). The halo expands outwards under increasing force. Although HS-AFM tip convolution and uncertainties of the membrane level in the simulated topography can explain the differences in protrusion dimension of the C-terminal extracellular domain or indentation depth of the bowl (respectively), the experimental bowl radius of about 17 nm that was found when the channel was imaged at around 50 pN (as compared to a radius of about 13 nm in the simulated topography) could not be explained by such effects. This observation led us to realize that the PIEZO1 channel imaged at about 50 pN was more extended and flattened than the channel in the zero-force cryo-EM structure (and HS-AFM topographies at low force). 
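The scale of this mismatch can be framed with spherical-cap geometry (a back-of-envelope sketch; the 11-nm projected radius and 5-nm height are the low-force values used in the dome-area estimate later in the paper): a cap of projected radius a and height h has surface area π(a² + h²), so an area-preserving cap can only grow its projected radius to √(A/π) when flattened.

```python
import math

A_DOME = 460.0  # nm^2, surface area of the low-force PIEZO1 dome, pi*(11^2 + 5^2)

def projected_radius(h_nm, area=A_DOME):
    """Base radius a of a spherical cap whose surface area pi*(a^2 + h^2) is fixed."""
    return math.sqrt(area / math.pi - h_nm**2)

print(projected_radius(5.0))  # ~11 nm: the low-force topography
print(projected_radius(0.0))  # ~12.1 nm: a fully flattened, area-preserving cap
```

An area-preserving cap therefore flattens to a projected radius of only about 12 nm, well short of the roughly 17 nm observed at high force; possible origins of this discrepancy are considered later in the paper.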
We therefore next examined these structural changes as a function of applied force.

Force-induced conformational changes

For PIEZO1, we designed a force-sweep cycle during HS-AFM imaging through fine-tuning the A set / A free ratio 35 (blue trace in Fig. 5a ). When the imaging force is gradually increased up to approximately 60 pN by displacing the sample support towards the tip of the HS-AFM (green trace in Fig. 5a ), PIEZO1 channels undergo a circular expansion—as evidenced by the enlargement of the halo (Fig. 5a top, Supplementary Video 2 ). Upon lowering the force to about 30 pN, PIEZO1 channels return to their initial less-expanded conformation (Fig. 5a top right).

Fig. 5: Mechanical response of PIEZO1 to applied force. a , Top, HS-AFM images from a force-sweep movie of PIEZO1 (dashed circles) in a POPE:POPG (ratio of 85:15 (w:w)) bilayer. Each image is an average over ten frames acquired at a specific loading force (highlighted by the five yellow shaded areas below). Bottom, force (red), A set / A free ratio (blue) and z -piezo displacement (green) as a function of frame acquisition time. A free of about 1.5 nm. Representative of ≥11 independent experiments from ≥5 different PIEZO1 samples. b , Example of lateral expansion analysis. Each single molecule (left, green dashed circle in a ) is 360-fold-symmetry averaged (middle) and a kymograph (right) across the centre of PIEZO1 is calculated (white dashed line). The kymograph (representative of ≥ 100 particles) highlights the outer radius (halo) expansion (black dashed line) as a function of force. c , Normalized probability density map of outer ring radius ( R ) as a function of force. One hundred PIEZO1 particles from 11 movies acquired on ≥5 different samples. The symmetric distribution of R shows the structural reversibility upon force increase and decrease. A critical force ( F c ) of about 18 pN is required during HS-AFM operation.
d , Applied force as a function of the sample displacement towards the tip of the atomic force microscope. The linear regression (red dashed line, disregarding the zero-force and negative Δz data points) provides the stiffness constant ( K ) of about 7.0 pN nm −1 and the work exerted by the HS-AFM (green-shaded area) for stressing a spring-like PIEZO1, based on our proposed dome-flattening model. Data are mean ± s.d. with n ≥ 10.

We extracted individual PIEZO1 channels from the HS-AFM movies, performed 360-fold symmetry averaging of each frame to eliminate effects dependent on scan direction and calculated kymographs to observe the force-dependent structural changes (Fig. 5b ). The contour of the halo demonstrates the reversible expansion of the PIEZO1 channel as a function of image acquisition time and force (Supplementary Video 3 ). We calculated a normalized probability density map, which shows that the reversible ring expansion under an applied force is a general and reproducible property of the channels and that there is a maximum ring radius of around 17 ± 2 nm at about 55 pN (Fig. 5c ). A minority of channels (which were not included in this analysis) segregated into a second, larger subgroup (Extended Data Fig. 4 ). Although these channels had a radius about 6 nm larger than that of the majority of channels, the physical response to force was indistinguishable between the two subgroups (Extended Data Fig. 5 ). The HS-AFM experiments show that PIEZO1 undergoes a circular expansion in a direction radial to the pore axis when a compressive mechanical force is exerted parallel to the pore axis (Fig. 5c ). Qualitatively, this conformational change in PIEZO1 is consistent with the change in the mid-plane R c in the cryo-EM vesicle-imaging experiments (Fig. 3d ). In HS-AFM, because we approximate the force applied (equation ( 1 ), Extended Data Fig. 3 ) as a function of the sample displacement towards the HS-AFM tip (Fig.
5d ), we can estimate the mechanical properties of the channel. In the analysis that follows, we approximate the structure of PIEZO1 as a dome or spherical cap to represent the three curved arms that extend from the central pore, with a web of membrane in between them 16 . For a spherical cap with fixed area ( A ) and bending modulus ( k ), the energy to bring about a deformation of the cap away from its intrinsic (zero-force) R c (denoted as R 0 ) to a new R c can be expressed as 16 , 36 $$E\left({R}_{{\rm{c}}}\right)=\frac{1}{2}Ak{\left(\frac{2}{{R}_{{\rm{c}}}}-\frac{2}{{R}_{0}}\right)}^{2}$$ (2) From the geometric relationship for a spherical cap of A = 2π R c z (ref. 41 ) (in which z is the height of the cap above the membrane plane), we can express equation ( 2 ) as a function of z $$E\left(z\right)=\frac{8{{\rm{\pi }}}^{2}k}{A}{\left(z-{z}_{0}\right)}^{2}$$ (3) in which z 0 is the height of the cap in the absence of force. During HS-AFM imaging, the force is applied in the z direction. We can therefore express the force as $${F}_{z}\left(z\right)=\frac{dE\left(z\right)}{dz}=\frac{16{{\rm{\pi }}}^{2}k}{A}\left(z-{z}_{0}\right)=K{\rm{\Delta }}z$$ (4) in which \(K=\frac{16{{\rm{\pi }}}^{2}k}{A}\) is the stiffness constant in Hooke’s law. Equating F z ( z ) with \(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \) and Δz with the displacement of the sample (Methods), the force-displacement data for the HS-AFM data in Fig. 5a are shown with a line fit that corresponds to equation ( 4 ) (Fig. 5d ). Two quantities are extractable from this analysis. First, the area subtended by the data represents the work required to bring about the conformational change from curved to flat (green-shaded area in Fig. 5d ), which would be about 200 pN nm or about 50 k B T (if we use equation ( 1 ) to estimate the force; k B = Boltzmann constant, T = temperature). 
Second, the slope of the line reports the stiffness constant ( K ) of the protein, which is about 7.0 pN nm −1 or about 1.8 k B T nm −2 . The stiffness constant is proportional to the ratio of the bending modulus to surface area ( k / A ). With this relation in mind, it is notable that the PIEZO1 architecture is an outlier compared to other membrane proteins, with a uniquely large area that reflects a very low helical density in the membrane (approximately 0.2 of a helix per square nanometre) (Extended Data Fig. 6 ). The low-force projection radius (11 nm) and height (5 nm) predict a surface area for the PIEZO1 dome of π(11² + 5²) ≅ 460 nm², whereas the fully flattened projection radius under high force (about 17 nm) predicts a PIEZO1 surface area of π(17²) ≅ 900 nm². We can think of two potential origins for this discrepancy. First, the cryo-EM structures do not resolve the first 12 transmembrane helices, presumably because they are disordered. It is possible that these helices become ordered under force; if this were the case, as PIEZO1 flattens out it might encompass a larger area than would be predicted on the basis of the cryo-EM structures. Further work will be needed to test this hypothesis. Second, the halo that surrounds PIEZO1 extends into the membrane beyond the observed limits of the protein 42 . Again, further work will be needed to better understand the origins of the halo. These uncertainties notwithstanding, it is clear that PIEZO1 is compressible to a flatter structure under force and that this compressibility is reversible.

Discussion

The specific changes we observe in PIEZO1 are reversible and perfectly suited to rendering the channel sensitive to lateral membrane tension. This is because flattening produces an expansion of the in-plane area, and the difference in the in-plane area exhibited by two states of a protein is precisely a quantity that converts lateral membrane tension into a difference in free energy.
For the difference in free energy between a flat and curved PIEZO1 embedded in a membrane with lateral membrane tension ( γ ), we have $$\Delta G=\Delta G\left(\gamma =0\right)-\gamma \Delta A$$ (5) in which ΔG ( γ = 0) is the difference in free energy between the flat and curved conformations in the absence of lateral tension, and ΔA is the difference in the in-plane (projected) area between the conformations 16 . ΔG equals 0 when γΔA offsets ΔG ( γ = 0). The flat and curved conformations will be equally probable (that is, in balanced equilibrium) when the following relationship holds $$\gamma =\frac{\Delta G\left(\gamma =0\right)}{\Delta A}$$ (6) Using equation ( 1 ) to estimate force, the HS-AFM experiments indicate that it takes about 50 k B T to flatten PIEZO1. Equating this energy to ΔG ( γ = 0) and ΔA to 80 nm 2 (dome area of about 460 nm 2 minus the projected area under low force, π(11) 2 ≅ 380 nm 2 ), equation ( 6 ) implies that PIEZO1 will be in balanced equilibrium at γ ≅ 0.6 k B T nm −2 . If the flattened conformation is associated with an open pore, then the channel will have an open probability 0.5 at this membrane tension. This tension value is not far from half-activation tensions that have previously been reported, of between 0.35 k B T nm −2 (ref. 21 ) and 1.2 k B T nm −2 (ref. 22 ). The average applied force calculated using equation ( 1 ) and the derived energy (Fig. 5d ) are probably lower limit estimates. If we instead use the peak force estimated from the point-mass model (Extended Data Fig. 3 , Methods), we would conclude that the PIEZO1 stiffness constant ( K ) would be about 32.5 pN nm −1 (or about 7.9 k B T nm −2 ), that the work to bring about the conformational change from curved to flat would be about 625 pN nm (about 150 k B T ), and that the tension associated with half-activation would be γ ≅ 1.9 k B T nm −2 . 
There is uncertainty in the precise quantification of the force in HS-AFM, and for a number of reasons—including lipid compositional differences and possible interactions between the membrane and mica—we do not expect the supported bilayer to be a perfect replica of a cell membrane. Nevertheless, the calculated range of tension sensitivity determined by HS-AFM is consistent with values that have previously been measured in electrophysiology experiments 21 , 22 . We conclude that PIEZO1 can undergo a reversible, flattening deformation when force is applied. The HS-AFM experiments apply force in a direction that is normal to the membrane surface. If tethers can attach to the channel in a cellular setting, then similarly directed forces could gate PIEZO1. If lateral membrane tension is the primary gating stimulus, then equation ( 5 ) and equation ( 6 ) present a way to think about the energetic equivalence to a normal force, such as that applied by the HS-AFM tip.

Methods

No statistical methods were used to predetermine sample size. The experiments were not randomized and investigators were not blinded to allocation during experiments and outcome assessment.

Proteoliposome reconstitution

Full-length mouse PIEZO1 protein was expressed in HEK293S GnTI − cells and purified in C12E10 as previously described 16 . For cryo-electron microscopy, a lipid mixture of POPC, DOPS (Avanti Polar Lipids) and cholesterol at an 8:1:1 (w:w:w) ratio was used for proteoliposome reconstitution. For HS-AFM measurement, the lipid composition was POPE and POPG (Avanti Polar Lipids) at an 85:15 (w:w) ratio. Lipids were prepared with 1.3% C12E10 as previously described 16 . Purified protein was incubated with the lipid/detergent mixture at a protein-to-lipid ratio of 1:20 (w:w) for 2 h at 4 °C. SM-2 bio-beads (Bio-Rad) were then added to the mixture for incubation overnight to remove C12E10.
Electron microscopy sample preparation and imaging

Freshly reconstituted PIEZO1 in POPC:DOPS:cholesterol was supplemented with 3 mM fluorinated fos-choline-8 (FFC-8) and frozen on glow-discharged Quantifoil 400 mesh gold R1.2/1.3 holey carbon grids as previously described 16 . Micrographs were recorded on a Talos Arctica transmission electron microscope (FEI) operating at 200 keV equipped with a K2 Summit direct electron detector (Gatan) controlled by SerialEM 43 . Super-resolution mode was used with a nominal defocus range of 0.8 to 2.4 μm and a calibrated physical pixel size of 1.9 Å. The exposure time for each image was 10 s fractionated over 50 frames, with a dose rate of 15 electrons per physical pixel per second. A total of 1,373 images was combined from 2 data collection sessions.

Electron microscopy image processing

MotionCor2 was used for whole-frame motion correction with gain reference applied and dose weighting 44 . CTFFIND4 was used to estimate the contrast transfer function parameters for the summed images 45 . Using RELION 46 , 47 , 477 particles of top and bottom views were manually picked and extracted with a box size of 200 pixels. Two-dimensional classification in RELION generated the averaged images of top views (322 particles) and bottom views (120 particles) of PIEZO1 with a mask size of 250 Å to minimize the interference from the peripheral membrane density. One thousand, one hundred and sixty-six particles of side views were manually picked and grouped on the basis of the radii of their residing vesicles, with a bin width of 1 nm (except for the 13-nm and 31-nm groups with the bin width of 3 nm). Two-dimensional averaging was performed in RELION on 19 particles from the 13-nm, 27-nm and 31-nm groups, and 25 particles from 15-nm, 17-nm, 19-nm, 21-nm, 23-nm and 25-nm groups. The box size was 320 pixels and the mask size was 400 Å.
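From the acquisition parameters above, the total electron dose on the specimen can be computed (a derived quantity; the Methods state only the dose rate, exposure time and pixel size):

```python
dose_rate = 15.0    # electrons per physical pixel per second (Methods)
exposure = 10.0     # s, fractionated over 50 frames
pixel = 1.9         # physical pixel size, Angstrom
frames = 50

electrons_per_pixel = dose_rate * exposure   # 150 e- per pixel over the exposure
fluence = electrons_per_pixel / pixel**2     # total dose, e- per square Angstrom
per_frame = fluence / frames                 # dose per movie frame
print(round(fluence, 1), round(per_frame, 2))
```

This implies a total dose of roughly 42 e− Å−2, a typical value for dose-weighted cryo-EM movies.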
Two-dimensional averaging was also performed on the exact opposite side to the channels of the same vesicles, following the same procedure. Control vesicles with no channels were also binned with the width of 1 nm. Two-dimensional averages were performed on 21 to 25 particles from each group (except for the 13-nm group, which contained only 12 particles). MATLAB R2018a (MathWorks) was used to measure the radii of curvature on all 2D-averaged images of vesicles, with embedded PIEZO1 (Extended Data Fig. 2a )—which share the same vesicle size—first being extracted by edge detection (Extended Data Fig. 2b ). Using a polygon to select the PIEZO1 position (Extended Data Fig. 2c ), the inner and outer membrane boundaries on the PIEZO1 side were identified as segmented lines (Extended Data Fig. 2d ); the cyan–green and green–yellow–blue contours represent the inner and outer membrane boundaries, respectively. The x and y coordinates of the edges that correspond to the outer and inner boundaries of the vesicle membrane were fitted as concentric circles using a nonlinear least-squares solver (Extended Data Fig. 2e, f ). Ninety-five per cent confidence intervals of the fitted radii were calculated on the basis of the Jacobian of the fitted values with respect to the coefficients. The circle fits to the data on both boundaries give the local R c on the PIEZO1 side that is tangent to the centre of the PIEZO1 channel (Extended Data Fig. 2g, h ). For midplane radii, the error bars were calculated as the geometric mean of the error bars of inner and outer radii.

HS-AFM

All HS-AFM movies in this study were taken by amplitude-modulation-mode HS-AFM (RIBM), using optimized scanning and feedback parameters. Short cantilevers (NanoWorld) with a spring constant of 0.15 N/m, resonance frequency of about 600 kHz and a quality factor of about 1.5 in buffer were used.
A total of 2 μl of the PIEZO1 reconstituted vesicles (POPE:POPG 85:15, (w:w)) were deposited on 1.5-mm diameter freshly cleaved mica, gently rinsed with imaging buffer (20 mM Tris, pH 8.0, 150 mM NaCl) after 5 min of incubation, and then mounted in the HS-AFM fluid cell. Both the A set and A free (approximately 1.5 nm) of the cantilever oscillation were simultaneously recorded during HS-AFM force-sweep measurements 35 . The displacement of the sample ( Δz ) was linearly drift-corrected by the positions of the z -piezo attenuator (and also corrected for the reduction in cantilever-oscillation amplitude) before and after the force-sweep experiment. The HS-AFM movies were drift-corrected and contrast-adjusted by laboratory-built image analysis software in ImageJ. The 360-fold average and the analysis of dimensional changes of PIEZO1 particles were performed using ImageJ. The cross-correlation analysis of PIEZO1 channels was performed using MATLAB R2018a.

Estimations of F HS-AFM during force-sweep measurements

In the main text, we used equation ( 1 ) to estimate \(\left\langle {F}_{{\rm{HS-AFM}}}\right\rangle \) . To evaluate the validity of this approach, we undertook a thorough analysis of how the tip and the sample interact during HS-AFM operation. We monitored the real tip motion ( z ( t )) to reconstruct force trajectories of F HS-AFM applied when imaging objects in amplitude-modulated atomic force microscopy.
In amplitude-modulated atomic force microscopy, the motion of the micro-cantilever–tip system can be described using the point-mass model 40 $$m\ddot{z}\left(t\right)=-kz\left(t\right)-\frac{m{\omega }_{0}}{Q}\dot{z}\left(t\right)+{F}_{{\rm{dr}}}\left(t\right)+{F}_{{\rm{ts}}}\left(d\right)$$ (7) in which m is the effective cantilever mass, z ( t ) is the absolute tip motion, k is the cantilever stiffness, F dr is the drive force, F ts is the tip–sample interaction force that depends on the instantaneous gap ( d ) between the tip and the sample, and ω 0 and Q are the angular resonant frequency and quality factor, respectively. Equation ( 7 ) describes the total force that governs the tip motion, and includes the elastic response of the cantilever (the 1st term), the hydrodynamic damping with the medium (the 2nd term), the periodic driving force (the 3rd term) and the tip–sample interaction force (the 4th term). Rearranging equation ( 7 ), we can obtain equation ( 8 ), which illustrates that the sum of the driving force and tip–sample interaction force can be estimated by a detailed analysis of the tip motion (Extended Data Fig. 3a, b ) $${F}_{{\rm{dr}}}\left(t\right)+{F}_{{\rm{ts}}}\left(d\right)=kz\left(t\right)+\frac{m{\omega }_{0}}{Q}\dot{z}\left(t\right)+m\ddot{z}\left(t\right)$$ (8) Thus, our force reconstruction steps included (1) calculate the forces caused by the elastic response of the cantilever and the hydrodynamic damping with the medium, and the total force that governs the tip motion (Extended Data Fig. 3c, d ); (2) sum up the above three forces, which equals F dr ( t ) + F ts ( d ) (Extended Data Fig. 3e, f ); (3) in the out-of-contact condition ( A ratio = A set / A free = 1), F ts is 0 (we do not detect attractive forces in our HS-AFM tip oscillations), and thus only F dr ( t ) remains when the tip oscillates freely (dashed line in Extended Data Fig. 3e, f ); and (4) F dr ( t ) is always constant, whereas the A ratio is changed to apply different forces.
Therefore, the F ts (Extended Data Fig. 3g, h ) can be derived by calculating the difference between F dr ( t ) + F ts ( d ) ( A ratio < 1) and F dr ( t ) ( A ratio = 1). These reconstructed force trajectories (Extended Data Fig. 3g, h ) of F ts per oscillation period are equivalent to the F HS-AFM applied to the imaging objects. To experimentally determine F ts , we measured the cantilever deflections during oscillation cycles at different A ratio on mica (Extended Data Fig. 3a ) and membrane (Extended Data Fig. 3b ), respectively (traces from black to grey with decreasing A ratio ). The reduction of the tip-oscillation amplitude is caused by the tip–sample repulsive forces when the tip physically contacts the sample. Following the force reconstruction steps described above, we constructed the trajectories of F ts at different A ratio on mica (Extended Data Fig. 3g ) and membrane (Extended Data Fig. 3h ). The peak force always occurs during the first part of the F ts force-reconstruction trajectory, and is caused by the initial tip–sample clash. Two characteristic forces—the peak force of F ts (Extended Data Fig. 3i ) and the average force per cycle \(\left(\left\langle {F}_{{\rm{ts}}}\right\rangle \right)\) (Extended Data Fig. 3j ) at different A ratio —were extracted. The peak force applied by the tip to the stiff mica and the soft membrane are very similar; only when A ratio < 0.65 does the peak force grow higher on mica (compare the blue and red dashed lines in Extended Data Fig. 3i ). We compared these experimental measurements of the peak forces with the web-based atomic force microscopy simulation tool VEDA ( ) 38 to estimate the peak force on samples with different stiffness. This simulation tool solves the point-mass model differential equation numerically, and applies different contact models. We used the Hertz contact model and similar parameters as in our experiments. 
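The point-mass model of equation (7) can also be integrated directly. The sketch below is a minimal stand-in for such simulations, not the VEDA tool itself: it replaces the Hertz contact with a simple linear repulsive wall, uses the cantilever parameters from the Methods and chooses the drive so that A free ≈ 1.5 nm at resonance.

```python
import math

# Point-mass model of the driven cantilever, equation (7):
#   m z'' = -k z - (m*w0/Q) z' + F_dr cos(w0 t) + F_ts(z)
# Units: pN, nm, microseconds.
k = 150.0                    # cantilever stiffness, pN/nm (0.15 N/m)
Q = 1.5                      # quality factor in buffer
f0 = 0.6                     # resonance frequency, cycles per us (600 kHz)
w0 = 2 * math.pi * f0
m = k / w0**2                # effective mass from w0 = sqrt(k/m)
F_dr = k * 1.5 / Q           # drive amplitude giving A_free ~ 1.5 nm at resonance

def simulate(z_wall=None, k_sample=1500.0, periods=60, steps=4000):
    """Semi-implicit Euler integration; returns (amplitude, mean tip-sample force).
    The sample is a stiff linear repulsive wall at height z_wall (None = no contact)."""
    dt = (1 / f0) / steps
    z, v = 0.0, 0.0
    zs, f_sum, n = [], 0.0, 0
    for i in range(periods * steps):
        t = i * dt
        f_ts = k_sample * (z_wall - z) if (z_wall is not None and z < z_wall) else 0.0
        a = (-k * z - (m * w0 / Q) * v + F_dr * math.cos(w0 * t) + f_ts) / m
        v += a * dt
        z += v * dt
        if i >= (periods - 10) * steps:   # sample only after transients decay
            zs.append(z); f_sum += f_ts; n += 1
    return (max(zs) - min(zs)) / 2, f_sum / n

A_free, _ = simulate()                    # out of contact: F_ts = 0
A_set, F_avg = simulate(z_wall=-1.0)      # wall placed 1 nm below the rest position
print(A_free, A_set / A_free, F_avg)
```

Out of contact the steady-state amplitude recovers A free = QF dr / k; placing the wall within the swing reduces the amplitude (lower A set / A free ) and yields a non-zero cycle-averaged tip–sample force, qualitatively as in the force-reconstruction analysis above.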
The resulting peak-force values (black and grey dashed lines, and grey shadowed area, in Extended Data Fig. 3i ) compare very well with the data we measured on the membrane. The \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) increases more linearly with decreasing A ratio on mica (blue dashed line in Extended Data Fig. 3j ), whereas the dependence of \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) with A ratio on the membrane increases in a somewhat fluctuating manner, and seems to reach a plateau at an A ratio < 0.7 (red dashed line in Extended Data Fig. 3h ). Finally, the \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) calculated by equation ( 1 ) (black dashed line in Extended Data Fig. 3j ) shows a similar trend and estimates \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) of the same order as the \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) measurement on the membrane. Thus, the use of equation ( 1 ) to estimate the \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) is a valid approach in our HS-AFM setup. We further evaluated the influence of A free on the peak force during HS-AFM imaging on membrane (Extended Data Fig. 3k ). A higher A free produces a larger peak force at the same A ratio . In the condition of the HS-AFM force-sweep measurements ( A free = 1.5 nm), the peak force reaches about 200 pN at A ratio = 0.5, which is roughly three times higher than the average force \(\left\langle {F}_{{\rm{ts}}}\right\rangle \) calculated by equation ( 1 ) (about 65 pN). Generation of topographies from PDB files A plugin was written for IgorPro7 (WaveMetrics), in which a cone-shaped tip with a hard-sphere of user-defined radius is scanned point-by-point ( x , y ) over the surface of a PDB structure. The sphere is lowered to the lowest position ( z ) it can occupy without clashing with any of the atoms of the PDB file. No mechanical properties of the tip or protein, or of protein hydration layers, are considered. 
The algorithm simply generates sphere-convoluted surface representations of PDB files at zero-force. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Any data relating to the findings presented in this Article are available from the corresponding authors upon reasonable request.
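The zero-force, hard-sphere tip convolution described under 'Generation of topographies from PDB files' amounts to the following. This is a minimal Python re-implementation sketch, not the IgorPro plugin itself; the single-atom example uses invented coordinates and radii rather than a real PDB structure.

```python
import numpy as np

def simulate_topography(atoms, vdw_radii, tip_radius, xs, ys, floor=0.0):
    """Hard-sphere tip convolution at zero force: at each (x, y) the spherical
    tip is lowered to the lowest height at which it touches, but does not
    penetrate, any atom.  Returns the height of the tip-sphere centre."""
    topo = np.full((len(ys), len(xs)), floor)
    for (ax, ay, az), r in zip(atoms, vdw_radii):
        contact = tip_radius + r                     # centre-to-centre contact distance
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                d2 = (x - ax) ** 2 + (y - ay) ** 2
                if d2 < contact ** 2:                # atom within lateral reach of the tip
                    z = az + np.sqrt(contact ** 2 - d2)
                    topo[j, i] = max(topo[j, i], z)  # keep the highest clash-free position
    return topo

# Single 'atom' of 0.15 nm vdW radius probed by a 0.5 nm tip sphere (invented values):
xs = ys = np.linspace(-1.0, 1.0, 21)
topo = simulate_topography(np.array([[0.0, 0.0, 0.0]]), [0.15], 0.5, xs, ys)
```

The sketch makes the tip-convolution broadening explicit: a point-like atom appears as a spherical cap of lateral radius tip_radius + r, which is why small tip radii are needed to resolve sub-nanometre features.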
A team of scientists from Weill Cornell Medicine and The Rockefeller University has illuminated the basic mechanism of Piezo proteins, which function as sensors in the body for mechanical stimuli such as touch, bladder fullness, and blood pressure. The discovery is a feat of basic science that also opens up many new paths of investigation into the roles of Piezo proteins in human diseases and potential new therapeutic strategies. In the study, published Aug. 21 in Nature, the scientists used advanced microscopy techniques to image the Piezo1 protein at rest and during the application of mechanical forces. They confirmed this complex protein's structure and showed essentially how it can convert mechanical stimuli into an electrical signal. "Our analysis shows that tension on the cell membrane in which Piezo1 is embedded can flatten and widen the protein's structure," said co-senior author Dr. Simon Scheuring, a professor of physiology and biophysics in anesthesiology at Weill Cornell Medicine. Dr. Scheuring and his laboratory collaborated on the study with the laboratory of Dr. Roderick MacKinnon, a professor of molecular neurobiology and biophysics at The Rockefeller University. Dr. MacKinnon was co-recipient of the Nobel Prize in Chemistry in 2003 for his work determining the structures and mechanisms of ion channel proteins. Piezo1 and Piezo2 are very large and complex proteins with unique structures. They are embedded within the membranes of certain cell types, and their function is to transduce mechanical force on cells into electrical signals that alter cell activity. Piezo1 proteins work for example in bladder cells to detect when the bladder is full, and in blood vessel-lining cells to detect and help regulate changes in blood pressure. Piezo2 proteins work in sensory nerve endings in the skin and joints, helping to mediate the senses of touch, pain, and proprioception—the sense of how one's limbs are arranged. Triskelion architecture of Piezo1. 
Credit: Weill Cornell Medical College Advances in imaging techniques have enabled scientists in recent years to determine the basic structure of Piezo1—a structure that Piezo2 is thought to mostly share. From above this structure has a three-armed, propeller or "triskelion" appearance. From the side it looks like a shallow bowl embedded in the cell membrane, with an ion channel at its center. The latter, when opened, allows a flow of calcium and other positively charged ions into the cell. The basic mechanism by which mechanical force opens the ion channel has remained mysterious. But in the new study Dr. Scheuring and Dr. MacKinnon and their colleagues, including lead author Dr. Yi-Chih Lin, a postdoctoral associate in anesthesiology, were able to get a clearer picture of how it works. Side view of Piezo1. Red line indicates structural changes when ion channel is open. Credit: Weill Cornell Medical College They combined cryo-electron microscopy with a less well-known technique called high-speed atomic force microscopy, which produces an image of an object essentially by feeling its surface with a super-sensitive mechanical probe. They showed with these methods that Piezo1 is a springy structure that normally bends the cell membrane where it sits, but will flatten out when, for example, a mechanical force is applied to the cell membrane. "As the membrane tension increases, the structure of Piezo1 flattens and stretches out to occupy a larger area, which in turn opens the ion channel," Dr. Scheuring said. He noted the possibility that other stimuli that stretch and flatten the Piezo1 structure, such as a pulling force on its arms from the inside or on an external domain called the CED from the outside the cell, in principle could open the ion channel—making it a suitably versatile mechanism for the wide range of cell types and physiological functions in which it works. Proposed mechanisms of action of Piezo1 in response to force. 
Left: Changes in membrane properties, such as tension or curvature, lead to a force that opens Piezo1. Right: Piezo1 channel is activated when structures inside or outside the cell push or pull on the ion channel. Credit: Weill Cornell Medical College Moreover, given this wide range of cell types—in organs including the lungs, bladder, intestines, and pancreas, as well as in blood vessels and the sensory nervous system—the discovery of the basic Piezo-protein mechanism could lead to new ways of understanding and treating many human diseases. To take one example, Dr. Scheuring said, if the membranes of cells lining blood vessels contain excess cholesterol, they would become stiffer, increasing the background tension on embedded Piezo1 proteins and potentially disrupting these proteins' normal ability to detect and help regulate blood pressure. "Our finding leads to a great many predictions about Piezo proteins' roles in disease that we and others can now go and investigate," he said.

10.1038/s41586-019-1499-2
Physics
Elusive dark matter may be detected with GPS satellites
The paper can be found here: http://dx.doi.org/10.1038/nphys3137. Journal information: Nature Physics
http://dx.doi.org/10.1038/nphys3137
https://phys.org/news/2014-11-elusive-dark-gps-satellites.html
Abstract The cosmological applications of atomic clocks 1 , 2 , 3 so far have been limited to searches for the uniform-in-time drift of fundamental constants 4 . We point out that a transient-in-time change of fundamental constants can be induced by dark-matter objects that have large spatial extent, such as stable topological defects 5 built from light non-Standard Model fields. Networks of correlated atomic clocks, some of them already in existence 6 , such as the Global Positioning System, can be used as a powerful tool to search for topological defect dark matter, thus providing another important fundamental physics application for the ever-improving accuracy of atomic clocks. During the encounter with an extended dark-matter object, as it sweeps through the network, initially synchronized clocks will become desynchronized. Time discrepancies between spatially separated clocks are expected to exhibit a distinct signature, encoding the defect’s space structure and its interaction strength with atoms. Main Despite solid evidence for the existence of dark matter ( ∼ 25% of the global energy budget in the Universe and ρ DM ≃ 0.3 GeV cm −3 in the neighbourhood of the Solar system 7 ), its relationship to particles and fields of the Standard Model (SM) remains a mystery. Although searches for particle dark matter (DM) are being actively pursued 8 , there is also significant interest in alternatives, among which is DM composed from very light fields. Depending on the initial field configuration at early cosmological times, such light fields could lead to dark matter via coherent oscillations around the minimum of their potential, and/or form non-trivial stable field configurations in physical three-dimensional space if their potential allows such a possibility. This latter option, which we will generically refer to as topological defects (TDs), is the main interest of our paper. 
The light masses of fields forming the TDs could lead to a large, indeed macroscopic, size for a defect. Their encounters with the Earth, combined with the DM–SM coupling, can lead to novel signatures of dark matter expressed generically in terms of ‘transient effects’. These effects, coherent on the scale of individual detectors, are temporary shifts in the frequencies and phases of measuring devices, rather than large energy depositions as is the case for microscopic DM. In this paper we suggest the possibility of a new search technique for the topological defect dark matter (TDM), based on a network of atomic clocks. Atomic clocks are arguably the most accurate scientific instruments ever built, reaching a 10 −18 fractional inaccuracy 1 , 2 . Attaining this accuracy requires that the quantum oscillator be well protected from environmental noise and that perturbations be well controlled and characterized. This opens the intriguing prospect of using clocks to study subtle effects, and it is natural to ask if such accuracy can be harnessed for dark-matter searches. To put our discussion on concrete grounds, we introduce a collection of light fields beyond the SM that can form TDs of different dimensionality: monopoles (0D), strings (1D) and domain walls (2D). The exact nature of such defects depends on the composition of the dark sector and on the self-interaction potential 5 . For this paper we take a simplified approach, calling ϕ a generic light field from the dark sector, whether it be scalar or vector, that forms a network of TDs at some early stage of cosmological history. The transverse size of the defect is determined by the field Compton wavelength d , which is in inverse relation to the typical mass scale of the light fields, d ∼ ℏ /( m ϕ c ). The fields we are interested in are ultralight: for an Earth-sized defect, the mass scale is 10 −14 eV . In our simplified approach we capture only the gross features of TDs (ref. 
5 ), and call A the amplitude of the field change between inside and outside a TD, A = ϕ inside − ϕ outside , also choosing the outside value of the field to be zero. The energy density of TDM averaged over a large number of defects is controlled by the energy density inside the defect, ρ inside ∼ A 2 / d 2 , and the average distance between the defects, L , through the natural scaling relation: where n = 0,1,2 for monopoles, strings or domain walls, and we measure A in units of energy. The right combination of parameters can give a significant contribution to, or even saturate ρ DM . The average time between ‘close encounters’ with TDs, r ≤ d , is set by the galactic velocity of such objects v g , The velocity of galactic objects around the Solar system is an input parameter that is relatively well known, and for the purpose of estimates one can take v g ≃ 10 −3 × c ≈ 300 km s −1 . If the parameter is of the order of a few years or less, then it is reasonable to think of a detection scheme for TD crossing events. The most crucial question is how the fields forming the defect interact with the SM. All possible types of interaction between TDs and SM fields can be classified using the so-called ‘portals’, a collection of gauge-invariant operators of the SM coupled with the operators from the dark sector 9 . Throughout the remainder of this paper, we will be interested in a more general form of the SM–TD interaction, in the form of the quadratic scalar portal, Because inside the TD, by assumption, ϕ 2 → A 2 and outside ϕ 2 → 0, this portal renormalizes masses and couplings only when the TD core overlaps with the quantum device. Here m e , p and ψ e , p are electron and proton masses and fields, and F μν are electromagnetic tensor components. 
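The encounter-time estimate above can be checked with a one-line calculation. For domain walls, the mean time between crossings is simply the wall separation divided by the relative velocity (T ~ L / v_g); the numbers below are purely illustrative and do not come from the paper's elided equations.

```python
c = 2.998e5       # speed of light, km/s
v_g = 1e-3 * c    # typical galactic velocity, ~300 km/s
year = 3.156e7    # seconds per year

# Largest domain-wall separation L compatible with roughly one encounter per
# year, using T ~ L / v_g for walls:
L_max_km = v_g * year   # ~1e10 km, a few hundred times the Earth-Sun distance
```

Larger separations are still detectable in principle, but the expected wait between events then exceeds the few-year observation window the authors consider reasonable.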
The appearance of high-energy scales Λ X in the denominators of (2) signifies the effective nature of these operators, implying that at these scales the scalar portals will be replaced by some unspecified fundamental theory (in the same way as the electroweak theory of the SM replaces the effective four-fermion weak interaction at the electroweak scale). The SM field dependence in (2) replicates corresponding pieces from the SM sector Lagrangian density, thus leading to the identification (the second line of equation (2)) of how masses and the fine-structure constant α are modulated by the TD. Thus, for every coupling constant and SM particle mass scale X one has, to first order in ϕ 2 , A quadratic (as opposed to linear) dependence on ϕ leads to weakening of the constraints imposed by precision tests of gravitational interactions 10 . Both direct laboratory and astrophysical constraints on Λ X do not exceed ∼ 10 TeV. Further background information on TDM, the types of interaction with the SM, and plausible scenarios for its abundance are provided in the Supplementary Information . In particular, we present an explicit example of the so-called Abrikosov–Nielsen–Olesen string defect 11 , 12 , with an increased value of α inside its core. The main consequence of the interaction (2) is a temporary shift of all masses and frequencies inside the TD. Thus, the signature we are proposing to search for is a transient variation of fundamental constants. In the limit of large τ , when the size of a TD is on astronomical scales, the effect of (2) becomes identical to variations of couplings and masses over time with , in which case all the existing terrestrial constraints immediately apply 4 . In addition, during the TD crossing there is a new force acting on massive bodies, giving a transient signature that can be explored with sensitive graviometers. 
Also, there are other ways of coupling TDs to the SM, such as the so-called axionic portals, ∂ μ ϕ / f a × J μ , where J μ is the axial-vector current. This would lead to a transient ‘loss’ of rotational/Lorentz invariance, and can be searched for with sensitive atomic magnetometers 13 , 14 . By design, atomic clocks are less sensitive to the coupling to spin, and for that reason we concentrate on (2). Clocks tell the time by counting the number of oscillations and multiplying this by a predefined period of oscillation, 1/(2 πω 0 ), where ω 0 is the fixed unperturbed clock frequency. The experimentally relevant quantity is the total phase accumulated by the quantum oscillator, then apparently the device time reading is ϕ 0 ( t )/ ω 0 . A TD would shift the oscillator frequency and thereby affect the phase or the time reading, where δω ( t ′) is the variation in quantum oscillator frequency caused by the TD. We parameterize δω ( t ) = gf ( t ), where g ∝ A 2 / Λ 2 is the coupling strength and f ( t ) ∝ | ϕ ( r − v g t )| 2 is a time-dependent envelope ( r is the clock position), so that . Suppose we compare the phases of two identical clocks separated by a distance l (see Fig. 1 ), which encounter a domain-wall-type TD. Because the TD propagates through the network with a speed v g , the second clock would be affected by the TD at a later time, with a time delay l / v g . Formally, the phase difference (or apparent time discrepancy Δ t ) between the clocks reads By monitoring the correlated time difference Δ t ( t ) between the two clocks, one could search for TDM. Before the arrival of the TD at the first clock, the phase difference is zero, as the clocks are synchronized. As the TD passes the first clock, it picks an additional phase difference |Δ φ | max = | g | d / v g . Δ φ ( t ) stays at that level while the TD travels between the two clocks. Finally, as the TD sweeps through the second clock, the phase difference vanishes. 
In this illustration we assumed that d ≪ l ≪ L . In the limit of d ≲ l , frequency (instead of time) comparison can be more accurate. Figure 1: Concept of a dark-matter search using atomic clocks. By monitoring time discrepancies between two spatially separated clocks one could search for the passage of topological defects, such as the domain wall pictured here. We may further relate the TD-induced frequency shift to transient variation of the fundamental constants. The instantaneous clock frequency shift may be parameterized as δω ( t )/ ω 0 = ∑ X K X δX ( t )/ X , where X runs over the fundamental constants. The dimensionless sensitivity coefficients K X are known from atomic and nuclear structure calculations 15 . It is important to note that different types of clocks exhibit sensitivity to different combinations of fundamental constants, with optical clocks being mostly sensitive to α and microwave clocks also being sensitive to nuclear couplings ( Supplementary Information ). The energy density stored in the TD and various couplings enters implicitly through a time-varying deviation, δX ( t ) ∝ | ϕ ( r − v g t )| 2 , of the fundamental constant from its nominal value. Then the two clocks will be desynchronized by the amount given in equation (4). Here we used equation (1) and the fact that contributions to the Lagrangian (2) factorize into SM and TDM parts. Notice that this result does not depend on a specific class of TDs. In practice, one needs to dissect the TD-induced desynchronization (4) from the various noise sources present in quantum devices and the link connecting the two clocks. We neglect link noise. We assume that the TD thickness d is much smaller than the distance between the clocks, as shown in Fig. 1 . One would need to resolve the ‘hump’ in the presence of background noise. Suppose we compare the clock readings every T seconds; then the total number of measurements of non-zero phase difference is N m = l /( v g T ).
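The two-clock, thin-wall signature described above can be simulated in a few lines. The wall profile is idealised here as a top-hat of thickness d, and all numbers are illustrative choices, not fits to data: each clock's frequency is shifted by a constant g while the wall overlaps it, so the phase difference ramps up to |g|·d/v_g, stays on a plateau while the wall travels between the clocks, and returns to zero.

```python
import numpy as np

def wall_phase_difference(t, g, d, l, v_g, t0=0.0):
    """Phase difference between two synchronised clocks separated by l along
    the sweep direction, for a top-hat wall of thickness d moving at v_g.
    t0 is the arrival time of the wall at the first clock."""
    def accumulated(t_arrival):
        # phase one clock picks up: ramps linearly to g*d/v_g, then saturates
        return g * np.clip(t - t_arrival, 0.0, d / v_g)
    return accumulated(t0) - accumulated(t0 + l / v_g)   # clock 1 minus clock 2

# Wall of thickness 300 km at 300 km/s (1 s crossing), clocks 30,000 km apart
# (illustrative values): plateau of height 1 rad between t ~ 1 s and t ~ 100 s.
t = np.linspace(-10.0, 200.0, 2101)
dphi = wall_phase_difference(t, g=1.0, d=300.0, l=30000.0, v_g=300.0)
```

The rectangular 'hump' in dphi is exactly the signature one would try to resolve above clock noise: its height encodes the coupling strength and thickness of the defect, and its duration l/v_g encodes the sweep velocity.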
For a terrestrial network with an arm length of l ∼ 10,000 km, the TD sweep takes 30 s, so one could make 30 measurements, sampled every second. Because the clocks are identical and statistically independent, the variance 〈Δ φ ( t ) 2 〉 − 〈Δ φ ( t )〉 2 = 2 R φ ( T ), where R φ ( T ) is the phase auto-covariance function 16 . It can be estimated from the commonly reported Allan variance σ y ( T ), which characterizes the fractional instability of the clock frequency 17 : R φ ( T ) ≈ ( ω 0 T ) 2 σ y 2 ( T ). Thereby the uncertainty due to a single clock comparison is . As we carry out N m = l /( v g T ) measurements, the statistical uncertainty is reduced further by . The above argument leads to the signal-to-noise ratio This ratio scales up with the TD size d , the sensitivity coefficients K X , and the distance between the clocks. See Supplementary Information for a further discussion. The TD detection confidence would improve both by increasing the number of network nodes and by populating nodes with several clocks of different types. Clearly, when the TD sweep is detected, all the clock pairs should exhibit a time-correlated desynchronization signature associated with the sweep. Different clocks have distinct sensitivities to the variation of the fundamental constants, and this could help in disentangling various couplings in (2) and (3). Moreover, a large number of clocks in a network will help in determining the direction of arrival of the TD, its velocity and spatial extent. The analysis presented can be generalized to the case of point-like TD (monopoles), which under a gravitational force will behave identically to regular cold dark matter. We illustrate such a case in Fig. 2 . Here we assume that the TD is an Earth-scale Gaussian-profile cloud sweeping through a clock network. Individual clocks are perturbed at different times with different amplitudes, depending on the distance to the monopole centre. 
This leads to a TD-induced phase accumulation, where R ( t ) = { X 0 , Y 0 , Z 0 + v g t } and r i = { x i , y i , z i } are the positions of the TD centre and the i th clock, respectively, and d is the effective radius of the TD. Here we assumed that the TD propagates along the z axis. The coupling is rescaled depending on the clock position g i ≡ g exp{− ρ i 2 / d 2 }, with ρ i = (( X 0 − x i ) 2 + ( Y 0 − y i ) 2 ) 1/2 being the impact parameter. This translates into a differential phase accumulation between the clocks, similar to our ‘wall’ example of Fig. 1 , but with the step-on and step-off heights depending on the difference of the clock impact parameters. Having several different types of clock at each node of the network will maximize the discovery potential, increasing sensitivity to monopole and string-type objects, especially if their transverse size is much smaller than R ⊕ . In that case, direct comparison of several clocks within one node is needed. Implementing such a search with several clocks at a single node can be the first step towards a global TDM search effort. Detailed network optimization strategies for TDM searches of varying transverse size d and dimensionality n are left for future investigations. Figure 2: Effect of a monopole-type defect on atomic clocks. Simulated response of an Earth-scale constellation of atomic clocks to a 0D Gaussian-profiled topological defect (monopole) of effective radius 0.75 R ⊕ . The monopole centre is displaced from the collision axis by 0.2 R ⊕ . Earth’s centre and the clocks lie in the collision plane. The polar angles of the three clocks are π /2, π , − π /4 in a reference frame centred at the Earth’s centre. Several networks of atomic clocks are already operational. Perhaps the most well known are the Rb and Cs microwave atomic clocks on-board satellites of the Global Positioning System (GPS) and other satellite navigation systems.
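The monopole response of Fig. 2 can be reproduced qualitatively by integrating the Gaussian envelope along the sweep. This is a sketch with made-up geometry and unit parameters, not the figure's actual configuration; for a head-on passage the accumulated phase saturates at g·√π·d/v_g, and off-axis clocks are suppressed by the factor exp(−ρ²/d²) given above.

```python
import numpy as np

def monopole_phase(t_grid, clock_pos, centre0, v_g, d, g):
    """Extra phase accumulated by one clock as a Gaussian-profile monopole,
    launched from centre0 and moving along +z at speed v_g, sweeps past:
    phi(t) = integral up to t of g * exp(-|R(t') - r_i|^2 / d^2) dt'."""
    ts = np.linspace(t_grid[0], t_grid[-1], 20001)
    R = centre0[None, :] + np.array([0.0, 0.0, 1.0]) * (v_g * ts)[:, None]
    envelope = g * np.exp(-np.sum((R - clock_pos) ** 2, axis=1) / d ** 2)
    dt = ts[1] - ts[0]
    return np.interp(t_grid, ts, np.cumsum(envelope) * dt)  # running integral

# Head-on passage (impact parameter 0) in units where v_g = d = g = 1:
t = np.linspace(0.0, 10.0, 101)
phi = monopole_phase(t, np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, -5.0]), v_g=1.0, d=1.0, g=1.0)
# phi rises smoothly and saturates near sqrt(pi) ~ 1.77 after the sweep.
```

Evaluating this for several clock positions and subtracting pairwise reproduces the step-on/step-off behaviour of Fig. 2, with step heights set by the differences of the clocks' impact parameters.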
We envisage using the GPS constellation as a 50,000-km-aperture dark-matter detector, with added capabilities due to the extensive terrestrial network of atomic clocks on the GPS tracking stations. As TDs sweep through the GPS constellation, satellite clock readings are affected. Because accurate ephemeris satellite data are known, one could easily cross-correlate clock readings in the network. For two diametrically opposed satellites the maximum time delay between clock perturbations would be ∼ 200 s, assuming a TD sweep with a typical velocity of 300 km s −1 . Different types of topological defects (for example, domain walls versus monopoles) would yield distinct cross-correlation signatures. Although the GPS is affected by a multitude of systematic effects—for example, solar flares, temperature and clock frequency modulations as the satellites come in and out of Earth's shadow—none of the conventional effects would propagate with a velocity of 300 km s −1 through the network. Dark-matter searches can also be implemented with state-of-the-art laboratory clocks 1 , 2 , using the vast network of atomic clocks at national standards laboratories used for evaluating the TAI timescale 3 . Moreover, several elements of high-quality optical links for clock comparisons have already been demonstrated in Europe, with a 920 km link connecting two laboratories in Germany 6 . Furthermore, a caesium fountain clock and a hydrogen maser are planned for installation on the International Space Station in the near future, providing high-quality time and frequency links to several metrology laboratories around the globe 18 . As an illustration of sensitivity to the energy scales Λ X of TDM–SM coupling (2), we consider a terrestrial network ( l ∼ 10,000 km) of Sr optical lattice clocks which are sensitive to the variation of α with K α = 6 × 10 −2 . For these clocks one may anticipate reaching σ y (1 s) ∼ 10 −18 at T = 1 s measurement intervals.
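The quoted maximum delay across the constellation is a one-line estimate, using the aperture and sweep velocity given in the text:

```python
aperture_km = 50000.0    # GPS constellation aperture quoted above
v_g_km_s = 300.0         # typical galactic velocity of a defect
max_delay_s = aperture_km / v_g_km_s   # ~170 s, of the order of the ~200 s quoted
```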
Requiring S / N ∼ 1 in equation (5), substituting fiducial values for ρ TDM and v g , and choosing yr, we plot a sensitivity curve to the energy scale Λ α as a function of the defect size in Fig. 3 . Here we also show the sensitivity of the GPS constellation ( l ∼ 50,000 km, T = 30 s, σ y (30 s) ∼ 10 −11 ), assuming that the TDM–SM coupling is dominated by the transient variation of α ( K α = 2). Limits derived from both Sr and GPS networks would greatly exceed the Λ < 10 TeV region excluded by direct laboratory and astrophysical constraints, such as from fifth-force and violation of the equivalence principle searches 10 . Figure 3: Projected constraints on dark-matter coupling. Terrestrial and space networks of atomic clocks can impose powerful constraints on the characteristic energy scales of dark-matter interactions with baryonic matter (2). Here we show bounds on Λ α that may be derived from a terrestrial network of optical lattice clocks and GPS clocks. The horizontal axis is the topological defect size in kilometres and also includes two characteristic TD field rest mass scale values.
The everyday use of a GPS device might be to find your way around town or even navigate a hiking trail, but for two physicists, the Global Positioning System might be a tool in directly detecting and measuring dark matter, so far an elusive but ubiquitous form of matter responsible for the formation of galaxies. Andrei Derevianko, of the University of Nevada, Reno, and his colleague Maxim Pospelov, of the University of Victoria and the Perimeter Institute for Theoretical Physics in Canada, have proposed a method for a dark-matter search with GPS satellites and other atomic clock networks that compares times from the clocks and looks for discrepancies. "Despite solid observational evidence for the existence of dark matter, its nature remains a mystery," Derevianko, a professor in the College of Science at the University, said. "Some research programs in particle physics assume that dark matter is composed of heavy-particle-like matter. This assumption may not hold true, and significant interest exists for alternatives." "Modern physics and cosmology fail dramatically in that they can only explain 5 percent of mass and energy in the universe in the form of ordinary matter, but the rest is a mystery." There is evidence that dark energy is about 68 percent of the mystery mass and energy. The remaining 27 percent is generally acknowledged to be dark matter, even though it is not visible and eludes direct detection and measurement. "Our research pursues the idea that dark matter may be organized as a large gas-like collection of topological defects, or energy cracks," Derevianko said. "We propose to detect the defects, the dark matter, as they sweep through us with a network of sensitive atomic clocks. The idea is, where the clocks go out of synchronization, we would know that dark matter, the topological defect, has passed by. In fact, we envision using the GPS constellation as the largest human-built dark-matter detector." 
Quantum physicist Andrei Derevianko of the University of Nevada, Reno has contributed to the development of several novel classes of atomic clocks and now is proposing using networks of synchronized atomic clocks to detect dark matter. His paper on the topic is published in the journal Nature Physics. Credit: University of Nevada, Reno Their research was well-received by the scientific community when the theory was presented at renowned scientific conferences this year, and their paper on the topic appears today in the online version of the scientific journal Nature Physics, ahead of the print version. Derevianko is collaborating on analyzing GPS data with Geoff Blewitt, director of the Nevada Geodetic Laboratory, also in the College of Science at the University of Nevada, Reno. The Geodetic Lab developed and maintains the largest GPS data processing center in the world, able to process information from about 12,000 stations around the globe continuously, 24/7. The two are starting to test the dark matter detection ideas by analyzing clock data from the 30 GPS satellites, which use atomic clocks for everyday navigation. Correlated networks of atomic clocks such as the GPS and some ground networks already in existence, can be used as a powerful tool to search for the topological defect dark matter where initially synchronized clocks will become desynchronized. The time discrepancies between spatially separated clocks are expected to exhibit a distinct signature. Blewitt, also a physicist, explained how an array of atomic clocks could possibly detect dark matter. "We know the dark matter must be there, for example, because it is seen to bend light around galaxies, but we have no evidence as to what it might be made of," he said. "If the dark matter were not there, the normal matter that we know about would not be sufficient to bend the light as much as it does. 
That's just one of the ways scientists know there is a massive amount of dark matter somewhere out there in the galaxy. One possibility is that the dark matter in this gas might not be made out of particles like normal matter, but of macroscopic imperfections in the fabric of space-time. "The Earth sweeps through this gas as it orbits the galaxy. So to us, the gas would appear to be like a galactic wind of dark matter blowing through the Earth system and its satellites. As the dark matter blows by, it would occasionally cause clocks of the GPS system to go out of sync with a tell-tale pattern over a period of about 3 minutes. If the dark matter causes the clocks to go out of sync by more than a billionth of a second we should easily be able to detect such events."
http://dx.doi.org/10.1038/nphys3137
Medicine
Pomegranate compound with anti-aging effects passes human trial
Pénélope A. Andreux et al. The mitophagy activator urolithin A is safe and induces a molecular signature of improved mitochondrial and cellular health in humans, Nature Metabolism (2019). DOI: 10.1038/s42255-019-0073-4 Journal information: Nature Metabolism
http://dx.doi.org/10.1038/s42255-019-0073-4
https://medicalxpress.com/news/2019-06-pomegranate-compound-anti-aging-effects-human.html
Abstract Urolithin A (UA) is a natural dietary, microflora-derived metabolite shown to stimulate mitophagy and improve muscle health in old animals and in preclinical models of aging 1 . Here, we report the results of a first-in-human clinical trial in which we administered UA, either as a single dose or as multiple doses over a 4-week period, to healthy, sedentary elderly individuals. We show that UA has a favourable safety profile (primary outcome). UA was bioavailable in plasma at all doses tested, and 4 weeks of treatment with UA at doses of 500 mg and 1,000 mg modulated plasma acylcarnitines and skeletal muscle mitochondrial gene expression in elderly individuals (secondary outcomes). These observed effects on mitochondrial biomarkers show that UA induces a molecular signature of improved mitochondrial and cellular health following regular oral consumption in humans. Main During aging, there is progressive decline in the cell’s capacity to eliminate its dysfunctional elements by autophagy 2 . Accumulating evidence has highlighted the decrease in the specific autophagy, or recycling, of dysfunctional mitochondria, known as mitophagy, in aging skeletal muscle 3 . This can result in poor mitochondrial function in the skeletal muscle, and has been closely linked to slow walking speed and poor muscle strength in elderly individuals 4 , 5 . Consequently, improving mitochondrial function in elderly people by restoring levels of mitophagy represents a promising approach to halt or delay the development of age-related decline in muscle health. UA is a first-in-class natural food metabolite that stimulates mitophagy and prevents the accumulation of dysfunctional mitochondria with age, thereby maintaining mitochondrial biogenesis and respiratory capacity in cells, and, in the nematode Caenorhabditis elegans , improving mobility and extending lifespan 1 . 
In rodents, UA improves endurance capacity in young rats and in old mice either fed a healthy diet or placed under conditions of metabolic challenge 1 . Recently, UA was shown to have a favourable safety profile following a battery of standardized toxicological tests, including subchronic exposure for 90 d in rodent models 6 , and received a favourable review by the US Food and Drug Administration under the agency’s generally recognized as safe (GRAS) notification program 7 . In this report, we detail the outcome of a first-in-human, randomized, double-blind, placebo-controlled clinical study with UA in healthy, sedentary elderly individuals, and describe its safety, bioavailability and beneficial impact on key biomarkers of mitochondrial health ( NCT02655393 ). Physiological endpoints were not evaluated as part of this study, as the 4-week intervention was considered too short in comparison to the extended protocols (minimum 3 months) deemed necessary to improve muscle strength or physical performance parameters in elderly individuals 8 . This two-part phase 1 study comprised a single ascending dose (part A) followed by a multiple ascending dose (part B). As the first objective of the study was safety assessment, the dose escalation was designed to progress from the lowest to the highest UA dose investigated in both parts of the study. Each escalation step doubled the previous UA dose (see Methods and Supplementary Table 1 for the decision tree and stopping-rule criteria for advancing to the next higher UA dose). During part A of the study, three cohorts of eight subjects each (24 subjects) received either placebo or UA in a two-period design, separated by a minimum 3-week wash-out period, at single ascending doses of 250, 500, 1,000 or 2,000 mg, either in soft gels or admixed with food (Fig. 1a , also the CONSORT diagram in Supplementary Fig. 1 ).
In part B of the study, three cohorts of 12 elderly subjects were given either placebo or UA at 250, 500 or 1,000 mg once daily in soft gels for 28 d (Fig. 1a and Supplementary Fig. 1 ). The lowest dose of 250 mg was chosen on the basis of preclinical studies, where the equivalent daily dosing of 50 mg per kg (mpk) of body weight in mice demonstrated efficacy on mitochondrial and muscle function after a 6-week oral intervention 1 . Clinical study treatment groups were evenly matched for age, sex and body mass index, and all of the subjects were sedentary at the time of inclusion in the study (Supplementary Tables 2 and 3 ). All enrolled subjects completed the study, there were no major deviations in the clinical protocol or in product intake, and no subjects were excluded from the final analysis for the main study endpoints (Supplementary Fig. 1 ).
Fig. 1: UA phase 1 study design, pharmacokinetic analysis and impact on plasma acylcarnitines in elderly individuals. a , Simplified schema of the clinical study design. The dose escalation was designed to progress from the lowest to the highest UA dose investigated in both parts of the study. Each escalation step doubled the previous dose. During part A, UA was administered as a single ascending dose, ranging from 250 to 2,000 mg on fasting, and at 500 and 1,000 mg in a fed state with a high-protein yogurt food matrix. Muscle biopsies were collected at pre-dose and 8 h after oral administration of UA only in the 2,000 mg group. During part B, UA was administered once daily in the morning on fasting for 28 d. Plasma and muscle biopsies were collected at pre-dose and at day 28 for biomarker activity measurements (see arrows). The corresponding CONSORT diagram is represented in Supplementary Fig. 1 .
b , Dose-dependent increase in plasma UA, UA-glucuronide and UA-sulfate maximum concentrations and exposure during the 96-h sampling period following its administration on the last day of the 28-d treatment period for 250, 500 and 1,000 mg doses ( n = 9 biologically independent samples). Data represent mean ± s.e.m. c , Change in plasma levels of acylcarnitines compared to baseline (day 28 (D28) versus pre-dose (D –1)) ( n = 9 biologically independent samples). Data represent geometric mean ± 95% confidence interval. # 0.05 < P < 0.15; * P < 0.05; ** P < 0.01 after a two-way, repeated-measures ANOVA. Related to Supplementary Figs. 1 and 2 and Supplementary Tables 1 – 5 .
As the study was a single and multiple dose escalation phase 1 study, designed according to guidelines and recommendations for first-in-human studies 9 and following standard dose escalation safety trial design 10 , 11 , it was powered for the primary outcome of safety and tolerability of UA in elderly humans, providing sufficient information on the human safety and pharmacokinetic profile to allow dose selection for future phase 2 efficacy trials (see also Methods ). In each part of the study, subjects underwent physical examinations and electrocardiogram (ECG) evaluations and were monitored for adverse events. A battery of laboratory safety tests (serum biochemistry, haematology and urinalysis) was conducted before and after dosing. The primary outcome was successfully met, and no serious adverse events and no product-related non-serious adverse events were reported during both part A and part B of the study. All other non-serious adverse events were of mild to moderate intensity and resolved during the course of the study (Supplementary Tables 4 and 5 ).
No clinically relevant abnormal laboratory test values from the study baseline were observed for any of the biochemistry tests assessing liver and kidney function, or for any of the haematology and urinalysis tests at any of the doses investigated during the course of the study. No clinically notable ECG abnormalities were observed for any subject taking the active intervention at any of the doses during the course of the study. This is consistent with the favourable safety profile observed in preclinical toxicology studies, which showed no toxic effect at the highest doses tested (3,451 and 3,826 mpk in male and female rodents, respectively) 6 . As another key outcome, the pharmacokinetic profile of UA was characterized in humans. We have developed robust, precise and validated methods to measure the individual concentrations of the parent UA and its detectable metabolites in both plasma and in the human skeletal muscle. Validation of the methods for the measurements of UA aglycone and its glucuronide and sulfate metabolites was performed following guidance on method validation 12 , 13 . The levels of total UA (UA and its metabolites) observed in plasma before dosing in the enrolled subjects ranged from undetectable (69%) to low (17%), moderate (11%) and high (3%), demonstrating the substantial variability of UA exposure, probably due to differences in diet and to potential variations in the composition of the gut microflora 14 (Supplementary Fig. 2a ). UA was bioavailable in plasma at all doses tested (250–2,000 mg) in the single ascending part A of the study, and there was no food effect when UA was administered in a high-protein yogurt food matrix (data not shown). Similarly, in part B of the study, where pharmacokinetics were assessed following the last dosing on day 28, there was a dose-dependent increase in maximum plasma concentrations ( C max ) and total exposure (AUC) when escalating UA oral administration from 250 to 1,000 mg.
UA was detectable in the plasma in the form of the parent compound and its two major metabolites, UA-glucuronide and UA-sulfate, with the levels of the conjugated UA metabolites in plasma being higher than those of the parent UA (Fig. 1b ). UA and its conjugate metabolites (that is, UA-glucuronide and UA-sulfate) exhibited similar kinetics, with concentrations peaking in plasma at 6 h ( T max ) post-dosing (Fig. 1b ). The half-life ( t 1/2 ) of the parent UA compound and UA-glucuronide was in the range 17–22 h, with that of UA-sulfate being somewhat longer at 25–58 h. Both UA and its bioavailable metabolites were eliminated from plasma circulation 72–96 h after the last intake (Fig. 1b ). A dose-dependent increase in total UA steady-state levels from 250 to 1,000 mg was also observed during the 4-week UA administration (Supplementary Fig. 2b–d ). No accumulation of UA in plasma was seen when comparing the single and multiple-dosing pharmacokinetics (data not shown). Altogether, these pharmacokinetic data indicate a favourable bioavailability profile for UA. Following unblinding of the study, all subjects receiving UA showed consistent levels of UA, highlighting the high compliance of elderly subjects with the UA intervention (Supplementary Fig. 2b–d ). The presence of the conjugated forms of UA, that is, UA-glucuronide and UA-sulfate, in human plasma indicates that UA undergoes phase 2 conjugation metabolism in the liver and active enterohepatic recirculation. UA was detectable in the skeletal muscle tissue 8 h after a single oral dosing at 2,000 mg, primarily in its parent state (Supplementary Fig. 2e ). The skeletal muscle tissue of only two out of the six participants showed trace levels of UA-glucuronide, whereas UA-sulfate was not detected in any of the subjects (data not shown).
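The non-compartmental parameters quoted above (Cmax, Tmax, AUC and t1/2; computed in the study with dedicated pharmacokinetic software) follow standard textbook definitions. As an illustration only, here is a minimal sketch using invented concentration values, not the study's data, together with the ng ml−1 to nM conversion described in the Methods (molecular weight of parent UA, 228.2 g mol−1):

```python
import math

def nca_parameters(times_h, conc_ng_ml):
    """Basic non-compartmental PK metrics: Cmax, Tmax, AUC(0-last) by the
    linear trapezoidal rule, and terminal half-life estimated crudely from
    the last two non-zero samples (real NCA fits a log-linear regression
    over several terminal-phase points)."""
    cmax = max(conc_ng_ml)
    tmax = times_h[conc_ng_ml.index(cmax)]
    auc = sum((t2 - t1) * (c1 + c2) / 2.0
              for (t1, c1), (t2, c2) in zip(zip(times_h, conc_ng_ml),
                                            zip(times_h[1:], conc_ng_ml[1:])))
    # terminal elimination rate constant from the last two non-zero points
    (ta, ca), (tb, cb) = [(t, c) for t, c in zip(times_h, conc_ng_ml) if c > 0][-2:]
    kel = (math.log(ca) - math.log(cb)) / (tb - ta)
    t_half = math.log(2) / kel
    return cmax, tmax, auc, t_half

def ng_ml_to_nM(conc_ng_ml, mw_g_mol=228.2):
    """Convert a mass concentration (ng/ml) to nM; 228.2 g/mol is parent UA."""
    return conc_ng_ml / mw_g_mol * 1000.0

# Hypothetical profile mirroring the study's sampling schedule (h); the
# concentrations (ng/ml) are invented for illustration only.
times = [0, 1, 2, 4, 6, 8, 12, 24, 72, 96]
conc = [0, 5, 12, 30, 40, 35, 25, 12, 1.5, 0.4]
cmax, tmax, auc, t_half = nca_parameters(times, conc)
```

With these made-up numbers the peak falls at 6 h, echoing the Tmax reported above; validated tools such as the WinNonlin software named in the Methods additionally extrapolate AUC to infinity and fit the terminal slope by regression.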
To assess the impact of UA on mitochondria in humans, we tested several surrogate molecular markers for mitochondrial health, both in the plasma and in the skeletal muscle of the elderly participants. While this study was powered for safety, the effects observed on mitochondria-related biomarkers were significant and showed a global impact on mitochondrial health following 28 d of UA oral administration at doses of 500 and 1,000 mg. Dosing UA at 250 mg showed no significant improvement in mitochondrial biomarkers (data not shown). There is likely to be a dose–duration relationship in the pharmacodynamics of UA in humans, with longer treatments and larger sample sizes possibly being required to observe the benefits of UA at lower doses. Therefore, the results included here focus on UA doses of 500 mg and 1,000 mg. In the plasma compartment, we observed a dose-dependent decrease of acylcarnitine levels (C8 to C14 and >C20) in the 500 and 1,000 mg groups (Fig. 1c ). Comparing relative plasma levels of acylcarnitines in subjects before and after 28 d of dosing, no differences were observed in the placebo and 250 mg groups, while participants receiving 500 mg or 1,000 mg UA experienced a significant reduction in acylcarnitine levels compared with baseline. Acylcarnitines are the form in which fatty acids enter into the mitochondrion to undergo fatty acid oxidation. The impact of UA was especially pronounced on the shorter chain acylcarnitines (C8, C10, C12, C14:1) (Fig. 1c ), that is, intermediates of the fatty acid oxidation process, indicating improved efficiency of fatty acid oxidation 15 . It is also important to highlight that free carnitine levels were not changed (data not shown), which makes it more likely that the decrease in acylcarnitines is a primary event and not a secondary event in response to changes in free carnitine availability.
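The acylcarnitine changes in Fig. 1c are presented as geometric mean ± 95% confidence interval of the day 28 versus pre-dose fold change. A minimal sketch of that summary statistic, using invented per-subject values (not the study's data) and a normal approximation in log space:

```python
import math
import statistics

def geometric_mean_fold_change(baseline, day28):
    """Per-subject fold changes (day 28 / pre-dose) summarized as the
    geometric mean with an approximate 95% CI computed in log space,
    mirroring the 'geometric mean +/- 95% CI' presentation style."""
    logs = [math.log(d / b) for b, d in zip(baseline, day28)]
    mean_log = statistics.mean(logs)
    sem_log = statistics.stdev(logs) / math.sqrt(len(logs))
    gm = math.exp(mean_log)
    ci = (math.exp(mean_log - 1.96 * sem_log),
          math.exp(mean_log + 1.96 * sem_log))
    return gm, ci

# Hypothetical C10 acylcarnitine levels (arbitrary units) for four subjects
pre = [1.0, 1.2, 0.8, 1.1]
post = [0.7, 0.9, 0.6, 0.8]
gm, ci = geometric_mean_fold_change(pre, post)  # gm < 1 indicates a decrease
```

A geometric mean below 1 with a confidence interval excluding 1 corresponds to the significant reductions reported for the 500 and 1,000 mg groups; the study itself tested these changes with repeated-measures ANOVA rather than this simple interval.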
These systemic plasma results indicate that UA administration improves fatty acid oxidation in humans, one key function of the mitochondrion, at the level of the whole body. Available literature shows that plasma levels of acylcarnitines are inversely correlated with mitochondrial function and/or exercise levels of subjects. Elevated plasma acylcarnitines are used as diagnostic biomarkers for mitochondrial diseases characterized by a defect in fatty acid oxidation 16 and they are also longitudinally increased with poor metabolic health and with the aging process, by a magnitude of about 1.5–2-fold over time 17 . On the other hand, in middle-aged male subjects, a 10-week aerobic exercise regimen known to stimulate mitochondrial function led to a decrease in plasma acylcarnitine levels, similar to that observed with 4-week UA intervention (a decrease in the range of 20–50%) 18 . The direct impact of UA at the level of the skeletal muscle (vastus lateralis) was evaluated by gene expression analysis, using a series of genes related to autophagy/mitophagy, mitochondrial biogenesis and fatty acid oxidation selected on the basis of previous preclinical efficacy data 1 . A general pattern of dose-dependent upregulation of gene expression in the human muscle, similar to that observed previously in preclinical models, was seen after 28 d of UA treatment at 500 and 1,000 mg, with some genes reaching statistical significance ( GABARAPL1 , FABP3 ) (Fig. 2a ). Mitochondrial abundance was also evaluated by measuring the ratio of mitochondrial DNA to nuclear DNA (mtDNA/nuDNA) by quantitative PCR (qPCR). The mtDNA/nuDNA ratio tended to increase, although this did not reach statistical significance (Fig. 2b ).
Fig. 2: UA impacts markers of mitochondrial function after 28 d of treatment.
a , Comparison of mRNA levels of autophagy/mitophagy, mitochondrial biogenesis and fatty acid oxidation markers as measured by qPCR in vastus lateralis of subjects who received placebo, UA 500 or 1,000 mg for 28 d ( n = 9 biologically independent samples). Results are expressed as a ratio over the placebo group for better readability. b , Change in mitochondrial abundance as measured by qPCR in vastus lateralis skeletal muscle of subjects who received placebo, UA 500 or 1,000 mg for 28 d ( n = 9 biologically independent samples). All data are means ± s.e.m. # 0.05 < P < 0.15; * P < 0.05; ** P < 0.01; *** P < 0.001 after a one-way ANOVA followed by Dunnett’s post-hoc test ( a , b ). c , Graphical representation of GSEA results. Bars represent the normalized enrichment score for the mitochondrial gene sets that are significantly upregulated with FDR < 0.1 in the vastus lateralis skeletal muscle of subjects following UA treatment at 500 mg and 1,000 mg for 28 d compared with placebo. FDR is the estimated probability that a gene set with a given enrichment score (normalized for gene set size) represents a false positive finding. The first three gene sets are upregulated by both UA 500 mg and 1,000 mg, and the others are upregulated by 1,000 mg, with the 500 mg dose not significant (NS; FDR > 0.1). Mb: membrane. d , e , Genes within the GO_MITOCHONDRION gene set that are upregulated (see Methods ) in vastus lateralis skeletal muscle of subjects following UA treatment at 500 or 1,000 mg for 28 d compared with placebo ( d , n = 9) and in the vastus lateralis skeletal muscle of pre-frail sedentary or active healthy elderly individuals ( NCT02472340 ) ( e , n = 11). Heat map represents change in expression over time (day 28 versus pre-dose) ( d ) or as difference between active healthy and sedentary pre-frail ( e ) as Z scores. Related to Tables 1 and 2 .
To determine more broadly whether mitochondrial gene expression was altered, microarray analysis was performed on the messenger RNA from the vastus lateralis skeletal muscle and analysed using gene-set enrichment analysis (GSEA), to look for over-representation of known pathways and gene functional categories. GSEA is designed to detect subtle gene expression changes at the level of a biological process or pathway 19 . Treatment with UA at 500 and 1,000 mg upregulated several mitochondrial gene sets with a false discovery rate (FDR) < 0.1, including the GO_MITOCHONDRION gene set (Fig. 2c,d and Table 1 ). Consequently, this unbiased approach indicates that 28-d administration of UA upregulates the transcription of mitochondrial genes. Taken together, these data further substantiate the results on gene expression and mtDNA measured by qPCR and demonstrate that UA stimulates mitochondrial biogenesis in the skeletal muscle of humans. Similar observations have been made in other studies on the impact of different exercise regimens and their related effects on the human muscle transcriptome 20 , 21 , 22 . In particular, 12-week, high-intensity aerobic interval training induced upregulation of both mitochondrial gene and protein expression in the skeletal muscle of young and older subjects 22 .
Table 1 List of mitochondrial gene sets that are enriched in human skeletal muscle after 28 d of UA administration
We have also compared the UA-induced transcriptional signature in the skeletal muscle to the natural transcription signatures observed in age-matched pre-frail (that is, elderly with low muscle strength) and age-matched active elderly individuals (non-interventional study (NIS); NCT02472340 ), investigated previously 23 .
As described, GSEA showed a clear downregulation of multiple mitochondrial gene sets in the skeletal muscle of pre-frail subjects compared with active subjects, with all of the top ten downregulated gene sets related to mitochondria, highlighting a decline in mitochondrial biogenesis in pre-frail muscle 23 . In total, there were 16 gene sets that were both downregulated in pre-frail versus active subjects, and upregulated in subjects who received UA at 1,000 mg for 28 d (Table 2 ). Of these, 13 were related to the mitochondrion organelle or its function. The 63 genes most induced within the GO_MITOCHONDRION gene set (Fig. 2d ), following 28-d treatment with UA, were extracted from the published NIS NCT02472340 dataset and plotted as a heat map (Fig. 2e ). This graphical representation of the GSEA results shows that the molecular signature in skeletal muscle of the pre-frail individuals is marked by a downregulation of mitochondrial gene expression. In comparison, UA significantly upregulates the transcription of the same mitochondrial gene set in skeletal muscle, providing encouraging evidence that UA may provide a benefit for age-related decline in muscle mitochondrial health. This comparison is particularly relevant, as the participants from phase 1 and the published NIS study are matched for age and body mass index (BMI), and are all sedentary, except those in the active healthy elderly group.
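The statistic at the core of these GSEA comparisons can be sketched in a simplified, unweighted form. The published method 19 uses weighted hits, permutation-derived normalized enrichment scores (NES) and FDR estimation, none of which are reproduced in this illustrative sketch:

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted GSEA running-sum statistic: walk down the ranked gene
    list, stepping up by 1/(set size) at each gene in the set and down by
    1/(non-set size) otherwise; the enrichment score (ES) is the maximum
    deviation of this running sum from zero."""
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    running, best = 0.0, 0.0
    for is_hit in hits:
        running += 1.0 / n_hit if is_hit else -1.0 / n_miss
        if abs(running) > abs(best):
            best = running
    return best

# Toy ranked list of ten genes; a set clustered at the top of the ranking
# yields an ES near +1 (strong positive enrichment).
ranked = [f"g{i}" for i in range(1, 11)]
es_top = enrichment_score(ranked, {"g1", "g2", "g3"})
```

A gene set concentrated at the bottom of the ranking would instead give a negative ES, corresponding to the downregulated mitochondrial gene sets seen in pre-frail muscle; real GSEA then normalizes the ES by permuting labels to obtain the NES and FDR values quoted above.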
Table 2 List of gene sets that are enriched in human skeletal muscle after 28 d of UA administration at 1,000 mg and downregulated in human skeletal muscle of pre-frail compared to active elderly subjects (NIS; NCT02472340 )
The global impact on plasma acylcarnitines and the specific effect on muscle transcriptomics in subjects receiving daily doses of either 500 mg or 1,000 mg of UA revealed improved systemic mitochondrial health, an enhanced fatty acid oxidation rate and a gene expression profile consistent with mitochondrial biogenesis in the muscle. Future clinical trials may allow a more in-depth characterization of UA’s impact on mitochondrial function in tissues. Given the wide range of health benefits of UA that have been recently reported in the brain 24 , 25 and intestine 26 , this evidence suggests that UA will have benefits on mitochondrial health in tissues other than skeletal muscle. The present study reveals that UA induces a molecular signature response, in both the plasma and skeletal muscle of humans, resembling that observed as a consequence of a regular exercise regimen. It is important to highlight that our earlier work revealed that the stimulation of mitophagy by UA led to an induction of mitochondrial biogenesis and an enhancement of mitochondrial function, resulting in improved aerobic endurance and higher muscle strength in treated rodents 1 . In humans, endurance exercise is well known to trigger mitochondrial biogenesis 27 and fatty acid oxidation in the skeletal muscle 28 to optimize efficient production of ATP by skeletal muscle cells under aerobic conditions. It has also been shown that exercise is a natural means of triggering mitophagy 29 , 30 , making it particularly important to maintain an active lifestyle during aging, as it ultimately results in improved mitochondrial function in the muscle 4 , 5 .
The research community has shown considerable interest in the therapeutic potential of stimulating the mitophagy pathway, as it may be the key to treating many of the conditions and diseases associated with a decline in mitochondrial function linked to aging. Aside from the present study, the most advanced published developments in stimulating mitophagy have been at the preclinical stage, where the approach of targeting deubiquitylating enzymes appears to be promising 31 . This report of a clinical investigation of an activator of mitophagy demonstrates the successful translation of the benefits of the natural food metabolite UA to humans, particularly the combination of its positive biological effects on mitochondrial health and, importantly, its favourable safety and bioavailability profile. These promising findings support an approach of dietary supplementation with UA as a nutritional intervention to assist in managing the declining mitochondrial function that accompanies aging and to promote healthy muscle function throughout life.
Methods
Trial design
The phase 1 clinical trial was designed to investigate the safety of the food ingredient UA in elderly adults, as well as its impact on biomarkers of mitochondrial health. The trial was conducted as a single-centre (in a phase 1 clinical trial unit), randomized, double-blind, placebo-controlled study in 60 healthy male and female elderly volunteers. The study was divided into two parts (Supplementary Fig. 1 ): part A involved administering a single ascending dose (250 mg, 500 mg, 1,000 mg and 2,000 mg of UA delivered orally) to 24 healthy elderly male and female volunteers in three cohorts, with each participant randomized to receive two doses in successive periods. To minimize risk, administration of the investigational product in each dose group of the study was done sequentially within each cohort (maximum of four subjects per day).
For the study, a 3:1 ratio was followed in part A, with each cohort receiving six active doses and two placebo doses. Four participants (3 active, 1 placebo) received the specified dose at the start of the week and, in the absence of adverse events, the remaining four participants (3 active, 1 placebo) in the cohort received their doses later in the week. Each escalation step doubled the previous dose. A decision tree (also depicted visually in Fig. 1a ) was followed for the single-dosing part A of the study. In period 1, the following doses of UA or placebo were tested: Cohort 1 (6 active + 2 placebo) was orally administered the lowest dose of UA (250 mg); after safety of this dose was documented, the study proceeded to the next higher dose. Cohort 2 (6 active + 2 placebo) was orally administered the next higher dose of UA (500 mg); once safety was documented, the study proceeded to the next higher dose. Cohort 3 (6 active + 2 placebo) was orally administered the next higher single dose of UA (1,000 mg); once safety was documented, the study proceeded to the next higher dose. Following a minimum 3-week wash-out period, in period 2, the following doses of UA or placebo were tested: Cohort 1 (6 active + 2 placebo) was orally administered the highest single dose of UA (2,000 mg) examined in the study, while Cohorts 2 and 3 were administered single doses of UA at 500 mg and 1,000 mg, respectively, admixed in a high-protein yogurt. Once safety was documented for a given UA dose, only in the subsequent week was dosing escalated to the next higher dose. Safety (adverse events, serum biochemistry, urinalysis, ECG and physical examination) was documented for each dose before escalation to the next higher dose. This evaluation was overseen by a safety monitoring committee that included two qualified physicians at the clinical site.
At the end of each dose level, an interim safety report was issued by the study medical investigator. A dose escalation meeting was held, and the decision to proceed (for example, to the next higher dose) was taken on the basis of a blind safety data review. The same decision tree was employed for the 4-week multiple dosing in part B of the study, starting from lowest to highest UA dosing. Part B consisted of administering multiple ascending doses (250 mg, 500 mg and 1,000 mg) to 36 healthy elderly male and female volunteers, who were orally administered placebo or UA-containing soft gels for 4 weeks. In both parts, placebo or UA was administered in the form of soft gels containing either 250 mg UA or placebo. The analytical laboratories, the study investigator and team, and the subjects were blinded for the duration of the clinical study. Each subject was randomized to receive either UA or placebo in soft gels identical in appearance. The subjects were recruited from the volunteer database of the clinical unit and all participants were Caucasian. The randomization list was generated by the clinical site (Eurofins Optimed) using SAS statistical software (v.9.3). All clinical data were recorded electronically on a web-based electronic case report form (eCRF-RDC 4.6, a validated Electronic Records/Electronic Signature-compliant (21 CFR Part 11) application of Oracle Clinical 4.6). The study was carried out in accordance with the Declaration of Helsinki as modified in Fortaleza (2013), the recommendations on Good Clinical Practice (GCP) (ICH E6) and applicable local regulatory requirement(s). The clinical study was approved by both the Ethics Committee ‘Comité de Protection des Personnes’ and by the French/National Health Authorities ‘Agence Nationale de sécurité du médicament et des produits de santé’ for the use of urolithin A as a food ingredient. The clinical study is registered in clinicaltrials.gov as NCT02655393 .
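The doubling schedule and safety-gated progression described above can be sketched as a small control loop. The safety_ok callback here is a hypothetical stand-in for the blind safety review by the monitoring committee, not part of the actual protocol:

```python
def escalation_plan(start_mg=250, top_mg=2000):
    """Doubling dose-escalation schedule: each step is twofold higher
    than the previous dose, up to the highest dose examined."""
    doses, d = [], start_mg
    while d <= top_mg:
        doses.append(d)
        d *= 2
    return doses

def run_escalation(doses, safety_ok):
    """Return the doses actually administered: escalation to the next
    higher dose proceeds only once safety has been documented at the
    current dose (safety_ok models the blind safety data review)."""
    given = []
    for d in doses:
        given.append(d)
        if not safety_ok(d):
            break  # stopping rule triggered; no further escalation
    return given

plan = escalation_plan()  # [250, 500, 1000, 2000]
administered = run_escalation(plan, lambda d: True)  # all doses cleared
```

In the trial itself the corresponding decisions were taken at dose escalation meetings on the basis of interim safety reports, one dose level per week.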
Sample size
The sample size was chosen on the basis of feasibility to allow preliminary characterization of safety, tolerability and pharmacokinetics and to explore pharmacodynamic measures of the UA intervention. Following guidelines and recommendations 9 and similar standard phase 1 study dose escalation designs 10 , 11 , the single-dosing cohorts consisted of 6 + 2 subjects (6 active and 2 placebo) per dose at randomization, while the multiple-dosing cohorts consisted of 9 + 3 subjects (9 active and 3 placebo) per dose at randomization. Placebo samples from each dose level were grouped during analysis, resulting in six placebo subjects in the single-dosing cohorts, and nine placebo subjects in the multiple-dosing cohorts. This is consistent with most phase 1 pharmacokinetic trials designed to provide sufficient information about human safety and the bioactive pharmacokinetics profile and to allow dose selection and powering of the design of future phase 2 efficacy trials.
Inclusion and exclusion criteria
Subjects were healthy, sedentary elderly people who were included in the study on meeting the inclusion and exclusion criteria, as reviewed by the study Principal Investigator. If volunteers agreed to enter the study, they signed the informed consent form. The participants agreed to refrain from consuming dietary supplements that could potentially impact either muscle or mitochondrial function, such as resveratrol, pomegranate and ellagitannins, nicotinamide riboside, whey protein, leucine, isoleucine, l-carnitine, creatine, coenzyme Q10, vitamin A, niacin, folic acid, vitamin C, vitamin E and probiotic foods and supplements, during the 2 weeks before inclusion and throughout the study. Concomitant medications were recorded during the course of the study. Participants were requested to follow a stable lifestyle throughout the duration of the trial with no sports and exercise activity.
Elderly subjects were medically screened up to 21 d before study enrolment for eligibility. General inclusion criteria included an age of 61 to 85 years; BMI 18–32 kg m−2; and demonstrated sedentary behaviour, that is, an activity level of <600 MET-minutes (metabolic equivalent of task minutes) per week, as assessed by the International Physical Activity Questionnaire (IPAQ). General exclusion criteria included any presence of cardiovascular, pulmonary, gastrointestinal, hepatic, renal, metabolic, haematological, neurological, psychiatric, systemic or infectious disease; inability to abstain from muscular and physical activity >20 min each day during the course of the study; history or presence of drug or alcohol abuse (alcohol consumption >40 grams per day); positive hepatitis B surface antigen or anti-hepatitis C virus antibody, or positive results for human immunodeficiency virus-1 or -2 tests; inability to refrain from smoking more than half a pack of cigarettes (or similar for other tobacco products) per day during the course of the study; excessive consumption of beverages with xanthine bases (>4 cups or glasses per day); and blood donation within 2 months before the start of the clinical study product administration. Adverse events and concomitant medications were continuously registered throughout the entire clinical study period.
Study schedule and product intake
Subjects were screened for eligibility up to 21 d before study enrolment. The level of activity was assessed only during screening. Once the subjects were confirmed to have met the inclusion and exclusion criteria by the medical investigator and had signed the informed consent form, they were admitted to the clinical research unit for the part A single dosing that was conducted in two periods, separated by at least a 3-week wash-out period. UA was synthesized to a high purity (>99%) for this study and was formulated into soft gels containing 250-mg doses.
The placebo soft gels were indistinguishable in appearance from UA-containing soft gels. UA was also directly admixed into a high-protein yogurt (17 grams per serving) at doses of 500 mg and 1,000 mg, to study the effect of food on bioavailability. Placebo yogurts were colour-matched and indistinguishable from the UA-admixed yogurt. These products were administered orally during fasting (that is, before breakfast) in the morning. Following single dosing, plasma samples were collected at frequent intervals up to 96 h following dosing to establish UA pharmacokinetics. In the highest single dosing of UA at 2,000 mg, muscle biopsies were performed 8 h after dosing to detect UA levels in skeletal muscle in the subjects. For part B of the study, repeated administrations of UA were performed in the morning. During part A, treatments containing UA or placebo were administered under the supervision of the investigator in a clinical pharmacology unit (Eurofins Optimed) at around 8:00. For part B, subjects were provided with the study product to take during each ambulatory visit and were given the necessary supply for administration at home between visits. Subjects were given a diary to record the number of capsules taken and the time of intake. The actual time of product administration was documented in the individual eCRF of the clinical study. All unused capsules were returned by the subjects at the end of the study intervention. Muscle tissue collection and blood sampling for biological markers of mitochondrial function were performed at the day −1 visit (that is, before the start of the 4-week dosing in part B of the study) and on day 28, following the last UA dosing.
IPAQ
The short version of the IPAQ was used to estimate an individual’s level of physical activity in the domains of household and yard work activities, occupational activity, self-powered transport and leisure-time physical activity, as well as sedentary activity.
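IPAQ short-form scoring sums MET-minutes per week across the activity domains: each activity's MET weight is multiplied by its weekly frequency and daily duration. The MET weights below (walking 3.3, moderate 4.0, vigorous 8.0) are the standard short-form values, but this is an illustrative sketch and the official IPAQ scoring protocol should be consulted; the volunteer data are invented:

```python
# Standard IPAQ short-form MET weights (assumed here; verify against the
# official IPAQ scoring protocol before reuse).
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(activity):
    """activity maps an intensity category to (days_per_week,
    minutes_per_day); returns total MET-minutes per week."""
    return sum(MET[kind] * days * minutes
               for kind, (days, minutes) in activity.items())

def is_sedentary(activity, threshold=600):
    """Inclusion criterion used in this study: <600 MET-min per week."""
    return met_minutes_per_week(activity) < threshold

# Hypothetical volunteer: walks 20 min on 3 days a week, nothing else;
# 3.3 x 3 x 20 = 198 MET-min per week, well under the 600 cut-off.
volunteer = {"walking": (3, 20)}
```

Under this scoring, the hypothetical volunteer would qualify as sedentary and thus meet the study's activity inclusion criterion.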
The questionnaire was taken only during screening and was self-assessed by the subjects. The scores were recorded in the eCRF.
Safety and bioavailability assessment of UA
Adverse events were recorded and coded according to the Medical Dictionary for Regulatory Activities (MedDRA). The safety events were classified as serious adverse events and adverse events by the study medical principal investigator. The relationship to the study product was also assessed and assigned to the following categories: related, unrelated, unlikely (to be related) and possible (could be related but a doubt exists). Twelve-lead ECGs were recorded in supine position (Cartouch Cardionics Device). A physical examination was conducted on all subjects by the study medical principal investigator at the start and end of study treatments, covering the main body systems/regions, including: skin and mucous membranes, ears/nose/throat, pulmonary, cardiac, gastrointestinal and neurological systems. A panel of haematological (haemoglobin, haematocrit, red blood cells, white blood cells, differential count, platelet count, mean corpuscular volume, mean corpuscular haemoglobin and mean corpuscular haemoglobin concentration); serum biochemistry (creatinine, uric acid, aspartate aminotransferase, alanine aminotransferase, gamma glutamyl transferase, and total and conjugated bilirubin); and urinalysis (pH, ketone bodies, proteins, glucose and blood) safety tests were also performed at the start and end of UA or placebo treatment, to compare the safety profile of UA in elderly subjects. Concentrations of UA and its metabolites, UA-glucuronide and UA-sulfate, were analysed in plasma and muscle biopsy samples.
UA levels in plasma were assessed for bioavailability at the following time points, both after single dosing and following the last dose of the 4-week multiple-dosing oral intervention with UA: pre-dosing and 1, 2, 4, 6, 8, 12, 24, 72 and 96 h (except for the 250 mg and 500 mg single doses, where plasma was sampled up to the 36 h time point). Plasma samples were also collected for assessment of UA steady-state concentrations during the 4-week UA study at the following time points (day −1, day 7, day 14, day 28, day 29, day 31 and day 32). UA levels in skeletal muscle biopsies were also assessed at pre-dosing and at 8 h post-dosing. We have developed robust, precise and validated methods to measure the individual concentrations of the parent urolithin A (UA) and its detectable metabolites in both plasma and human skeletal muscle. Validation of the methods for the measurements of urolithin A aglycone and its glucuronide and sulfate metabolites was performed following both the guidance on method validation from the Food and Drug Administration 13 and the European Medicines Agency 12 . The limit of quantification was 5.00 pg ml −1 for UA in plasma and 5.00 ng ml −1 for UA-glucuronide and UA-sulfate in plasma, 5.00 pg ml −1 for UA in skeletal muscle and 5 ng ml −1 for UA-glucuronide and UA-sulfate in skeletal muscle. For mean value calculations, all values below the limit of quantification were set to zero. Concentrations were converted to molarity values using the following molecular weights: M (UA) = 228.2 g mol −1 ; M (UA-glucuronide) = 404.3 g mol −1 ; M (UA-sulfate) = 325.3 g mol −1 . The pharmacokinetic variables were calculated on the basis of the actual sampling times. Non-compartmental pharmacokinetic analysis was performed using Phoenix WinNonlin v.6.3 (Pharsight Corporation). Muscle biopsy procedure Muscle tissue was collected from the vastus lateralis skeletal muscle of the right leg using a 4.5-mm Bergström muscle biopsy needle. 
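As a worked illustration of the unit handling described above: ng/ml equals µg/l, so dividing a plasma concentration by the molecular weight in g/mol gives µmol/l (µM) directly. The sketch below uses hypothetical concentrations, not trial data, and derives basic non-compartmental parameters (Cmax, Tmax, linear-trapezoidal AUC) in the spirit of, but without reproducing, the Phoenix WinNonlin analysis.

```python
import numpy as np

# Molecular weights from the methods (g per mol)
MW = {"UA": 228.2, "UA-glucuronide": 404.3, "UA-sulfate": 325.3}

def to_micromolar(conc_ng_per_ml, mw):
    # ng/ml equals ug/l, and (ug/l) / (g/mol) equals umol/l (uM)
    return conc_ng_per_ml / mw

def nca(times_h, conc):
    """Basic non-compartmental parameters from a concentration-time profile."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc, dtype=float)
    i = int(np.argmax(c))
    # Linear trapezoidal AUC from time zero to the last sample
    auc = float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))
    return {"Cmax": float(c[i]), "Tmax": float(t[i]), "AUC_0_last": auc}

# Hypothetical single-dose profile (ng/ml) at a subset of the sampling times
t = [0, 1, 2, 4, 8, 24]
c = [0.0, 100.0, 228.2, 150.0, 80.0, 10.0]
params = nca(t, c)
```

A convenient sanity check for the conversion: at 228.2 ng/ml, UA (M = 228.2 g/mol) is exactly 1 µM.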
The subjects were in fasting condition before the collection of the muscle biopsy sample. Subjects were placed in a semi-supine position with the knees supported and slightly flexed. The lateral side of the leg was palpated to determine the location of biopsy, which was 10 cm proximal to the upper pole of the patella on a line between the patella and the anterior superior iliac spine. After disinfecting the skin, the skin and muscle fascia were locally anaesthetized with 5–10 ml lidocaine 5% solution. Additional lidocaine was administered when the anaesthetic effect was not sufficient. A sterile cloth with a hole was placed on the leg, keeping the biopsy site exposed. A small incision of 5 mm was made in the skin and the muscle fascia was incised minimally, just wide enough for the biopsy needle to pass through. The biopsy needle was introduced via the skin and fascia into the muscle. Considering the length of the needle, the depth of the collection could be estimated to be around 4–5 cm below the skin and, therefore, in the skeletal muscle. Moreover, the passing of the fascia lata was always perceptible. Each muscle sample weighed at least 30 mg. After collecting the required amount of muscle tissue, the wound was closed with a single, non-absorbable skin suture and pressure was applied by an elastic bandage. Subjects were instructed not to perform strenuous physical activity with the right leg for 2 d. Tissue collected for RNA and DNA analyses was snap-frozen in liquid nitrogen within 30 min of collection and stored at −80 °C. Biomarker analysis in skeletal muscle biopsies mtDNA abundance Muscle samples were incubated overnight in 360 µl of buffer ATL and 40 µl proteinase (Qiagen) at 55 °C in a thermomixer set at 300 r.p.m. Cell debris was removed by centrifugation and 200 µl of clear lysates was placed in the QIAsymphony SP workstation (Qiagen). DNA was extracted with the QIAsymphony DNA Mini kit (Qiagen, catalogue no. 
937236) following the manufacturer's procedures. Quantitative PCR was performed on the Fluidigm Biomark system following the Fluidigm Specific Target Amplification Quick Reference (Fluidigm). Samples were loaded as technical triplicates. The real-time PCR data were analysed using the Linear Derivative baseline correction and User (detector) Ct threshold method on the latest version of the Fluidigm Biomark software (v.4.1.3). Quantification of mtDNA was performed using two customized Taqman assays targeting a nuDNA sequence (18S) and a conserved region of mtDNA (MTND1) 32 . Relative mtDNA copy number was determined by comparing the MTND1 signal to the 18S signal. All quantifications were determined using the 2^(−ΔΔCt) method and the mean Ct of the technical triplicates. mRNA extraction from muscle biopsies Approximately 10–15 mg of muscle sample was homogenized in 800 µl of buffer RLT Plus (Qiagen) plus two steel balls using a Tissue Lyser (Qiagen). Cell debris was removed by centrifugation and clear lysates were placed in the QIAsymphony SP workstation (Qiagen). RNA was extracted with the QIAsymphony RNA kit (Qiagen, catalogue no. 931636) following the manufacturer's procedures. RNA was quantified and checked for purity on a Nanodrop-8000. RNA integrity was controlled using the RNA 6000 Nano LabChip kit (Agilent Technologies, catalogue no. 5065-4476) on an Agilent Bioanalyzer (Agilent Technologies). Gene expression using qPCR Complementary DNA was synthesized using an ABI High Capacity cDNA kit with 50 ng of RNA (or the highest quantity isolated). Quantitative PCR was performed on the Fluidigm Biomark system following the Fluidigm Specific Target Amplification Quick Reference (Fluidigm). Samples were loaded as technical triplicates. The real-time PCR data were analysed using the Linear Derivative baseline correction and User (detector) Ct threshold method on the latest version of the Fluidigm Biomark software (v.4.1.3). 
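A minimal sketch of the 2^(−ΔΔCt) calculation just described, using the mean Ct of technical triplicates (Ct values below are invented; MTND1 is the mitochondrial target and 18S the nuclear reference, with a calibrator sample such as the baseline biopsy):

```python
import numpy as np

def relative_copy_number(ct_mtnd1, ct_18s, ct_mtnd1_cal, ct_18s_cal):
    """Relative mtDNA copy number by the 2^(-ddCt) method.

    Each argument is a list of technical-replicate Ct values; the
    *_cal arguments come from the calibrator (e.g. baseline) sample.
    """
    d_ct = np.mean(ct_mtnd1) - np.mean(ct_18s)              # dCt, sample
    d_ct_cal = np.mean(ct_mtnd1_cal) - np.mean(ct_18s_cal)  # dCt, calibrator
    return 2.0 ** -(d_ct - d_ct_cal)                        # 2^(-ddCt)

# Invented triplicates: the sample's MTND1 amplifies one cycle earlier
# (relative to 18S) than in the calibrator, i.e. twice the copy number
sample = relative_copy_number([20.0, 20.1, 19.9], [15.0, 15.1, 14.9],
                              [21.0, 21.1, 20.9], [15.0, 15.1, 14.9])
```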
Gene expression using microarray Two nanograms of total RNA was processed using the Human Clariom D microarray following the Affymetrix GeneChip WT Pico Reagent Kit Guide. The data were normalized with the signal space transformation-robust multi-array average (SST-RMA) method. Gene set enrichment analysis (GSEA) GSEA was employed to look for over-representation of known pathways and gene functional categories among the regulated genes 19 . First, a ranked list of differentially expressed genes was generated using the R package limma (R v.3.3.2, limma v.3.30.11). A design matrix was created using the status (dose + day as factors). Comparisons were generated using a contrast matrix. A linear model was then fitted to each gene. A moderated t -test was used to compute the t -statistics, moderated F -statistic and log-odds of differential expression using the empirical Bayes method. Genes were ranked by log 2 (fold change) (high to low) and were filtered at an unadjusted P value of 0.1 before use in the GSEA analysis, to reduce the risk of false positives. Gene sets defined by the Gene Ontology Consortium were downloaded from the Broad Institute's MSigDB website, using the C5: GO gene sets (MSigDB v.5.2). A total of 6,166 gene sets were used as an input for all GSEA comparisons. The genes contributing the most to the enrichment of GO_MITOCHONDRION were identified using the leading edge analysis available in the GSEA software (v.2.2.3). Global metabolomics in plasma A 6-ml blood sample was withdrawn into a K2 EDTA-coated tube. The blood samples were gently inverted a few times for complete mixing with the anticoagulant. The exact time of sample collection was recorded on the eCRF. Within 30 min following blood collection, each blood sample was centrifuged at 1,500 g for 10 min at 4 °C. At 30 min after centrifugation, the top layer of human plasma was transferred into two pre-labelled polypropylene tubes, containing approximately 1,500 µl of plasma. 
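Returning to the GSEA preparation step above: the published pipeline built the ranked input in R/limma, but the ranking step itself (keep genes at an unadjusted P < 0.1, then sort by log2 fold change from high to low) is simple enough to sketch in Python. Gene names and statistics here are invented for illustration.

```python
# (gene, log2 fold change, unadjusted P value) -- invented example data
results = [
    ("NDUFA1",  1.8, 0.01),
    ("ATP5F1",  0.9, 0.30),   # dropped by the P < 0.1 filter
    ("MYH7",   -0.4, 0.05),
    ("COX7B",   2.3, 0.02),
]

# Filter at unadjusted P < 0.1, then rank by log2FC from high to low,
# as done before passing the list to GSEA
ranked = sorted((r for r in results if r[2] < 0.1),
                key=lambda r: r[1], reverse=True)
ranked_genes = [r[0] for r in ranked]
```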
Tubes were capped immediately and the plasma was frozen in an upright position at approximately −80 °C for storage. Metabolomics of plasma was performed by Metabolon according to published methods 33 . In brief, sample preparation was conducted using a proprietary series of organic and aqueous extractions to remove the protein fraction while allowing maximum recovery of small molecules. The extracted samples were split into equal parts for analysis on the gas chromatography–mass spectrometry (GC–MS) and liquid chromatography–tandem mass spectrometry (LC–MS/MS) platforms. For LC–MS/MS, samples were split into two aliquots that were analysed in either positive (acidic solvent) or negative (basic solvent) ionization mode. GC–MS was performed on bis(trimethylsilyl)trifluoroacetamide-derivatized samples on a 5% phenyl GC column. Statistical analysis The SAS statistical software (v.9.3) was used to analyse all the clinical safety endpoints and the qPCR data (RNA and DNA). This is a first-in-human phase 1 study of a nutritional ingredient that follows the standard dose-escalating design (single and multiple) employed in phase 1 trials, with stopping-rule criteria for safety analysis; that is, all the doses have the same number of subjects assigned until experimentation is stopped, starting from the lowest to the maximum tolerated dose. We chose a 6 + 2 (6 active and 2 placebo) design for the single-dosing cohorts and a 9 + 3 (9 active and 3 placebo) design for the multiple-dosing cohorts to permit a robust statistical characterization of UA in the healthy elderly population for endpoints of safety, tolerability and pharmacokinetics and to explore pharmacodynamic parameters. For pharmacodynamics analysis, the alpha level was set at a standard level of 5% and all statistical tests were two-sided. 
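As a hedged illustration of this kind of dose-effect testing (a one-way ANOVA on fold changes with a residual normality check, falling back first to an ln transformation and then to rank data, as specified for the qPCR endpoints), here is a minimal scipy sketch. The grouping and values are hypothetical, and the published analysis was run in SAS, not Python.

```python
import numpy as np
from scipy import stats

def dose_effect_p(groups, alpha=0.05):
    """P value for a dose effect across groups of fold-change values.

    Tries a one-way ANOVA on raw, then ln-transformed data (fold
    changes are assumed positive), accepting the first whose residuals
    pass a Shapiro-Wilk normality check; otherwise falls back to an
    ANOVA on rank-transformed data.
    """
    for xform in (np.asarray, np.log):
        g = [xform(np.asarray(x, dtype=float)) for x in groups]
        residuals = np.concatenate([x - np.mean(x) for x in g])
        if stats.shapiro(residuals).pvalue >= alpha:   # residuals ~ normal
            return stats.f_oneway(*g).pvalue
    ranks = stats.rankdata(np.concatenate([np.asarray(x, float) for x in groups]))
    splits = np.cumsum([len(x) for x in groups])[:-1]
    return stats.f_oneway(*np.split(ranks, splits)).pvalue

# Hypothetical fold changes: placebo vs. a clearly responding dose group
placebo = [1.00, 1.10, 0.90, 1.05, 0.95]
dose    = [2.00, 2.20, 1.90, 2.10, 2.05]
p = dose_effect_p([placebo, dose])
```

A significant omnibus result would then be followed by Dunnett-style comparisons of each active dose against placebo, as in the analysis plan.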
For each gene assessed via qPCR, the following analyses were performed: descriptive statistics on normalized Ct values for each assessment by dose group; descriptive statistics on mean fold change values at each assessment by dose group; and analyses of variance (ANOVA) performed on mean fold change values. The model fitted to mean fold change values in gene expression included dose (placebo/250 mg/500 mg/1,000 mg) as a fixed factor. In case of a significant dose effect (alpha risk: 0.05), pairwise comparisons between the active dose groups and the placebo were carried out from the ANOVA using Dunnett's test. If residuals from the ANOVA were not normally distributed, a natural log (ln) transformation was applied to the data. If normality still could not be demonstrated after ln transformation, rank-transformed data were retained. For metabolomics data, statistical analyses were performed in ArrayStudio on log-transformed data. A two-way, repeated-measures ANOVA was used, in which one factor was the treatment applied to each subject and the second factor was the time point. The model took into account the repeated measures, that is, that the treatments were given to the same subject over time, to determine whether there was a significant effect of the compound over time. All the samples that were measured and all the data points were included in the analysis. There was no sample or data point exclusion. Non-interventional study (NIS) comparing pre-frail sedentary with active healthy elderly subjects The study was conducted as described elsewhere 23 ( NCT02472340 ). Briefly, 11 pre-frail (6 males and 5 females) subjects aged 70.2 ± 5.8 y and 11 active (6 males and 5 females) subjects aged 70.0 ± 6.7 y participated in this study, with subjects having a mean BMI of 25.7 ± 4.2 versus 24.6 ± 3.9 kg m −2 , respectively. Subjects were considered pre-frail when fulfilling one or two of the three criteria for frailty 34 . 
According to the IPAQ questionnaire, pre-frail subjects had a mean energy expenditure of 392 MET minutes per week, corresponding to less than 20 min of walking per day, while the active group had a score of 6,508 MET minutes per week, corresponding to 1 h of vigorous exercise per day. Muscle biopsies were collected in the fasted state in the morning and processed for RNA extraction, as described above. The HTA 2.0 microarray chip from Affymetrix was used to measure mRNA expression levels of 42,935 reporters/probes associated with 33,804 annotated transcripts or genes (mRNA). The raw gene expression values were normalized with the SST-RMA algorithm. The genes were ordered in a ranked list according to the magnitude and direction of their differential expression between pre-frail and active groups with the R library limma (R v.3.3.2). This list was used to perform the GSEA analysis using the same gene sets as for the phase 1 study (see 'GSEA' section above). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The gene expression data are deposited at the European Genome-phenome Archive under accession code EGAS00001003638 and can be accessed subject to signing a data access agreement.
Urolithin A, a metabolite of biomolecules found in pomegranates and other fruits, could help slow certain aging processes. EPFL spin-off Amazentis, in conjunction with EPFL and the Swiss Institute of Bioinformatics, has published a paper in the journal Nature Metabolism outlining the results of their clinical trial. It is a fact of life that skeletal muscles begin to lose strength and mass once a person reaches the age of 50. A recent clinical trial involving two EPFL entities—spin-off Amazentis and the Laboratory of Integrative Systems Physiology (LISP)—showed that urolithin A, a compound derived from biomolecules found in fruits such as pomegranates, could slow down this process by improving the functioning of mitochondria—the cells' powerhouses. A joint paper presenting the results of the trial, published today in Nature Metabolism, also demonstrates that ingesting the compound poses no risk to human health. Slowing mitochondrial aging The claim that healthy eating is the key to longer life might seem too convenient—but it is now further backed by scientific evidence. Pomegranate, a fruit prized by many civilizations for its health benefits, contains ellagitannins. When ingested, these molecules are converted into a compound called urolithin A (UA) in the human gut. The researchers found that UA can slow down the mitochondrial aging process. The catch is that not everyone produces UA naturally. To get around that problem, and to make sure all participants received an equal dose, the team synthesized the compound. Some 60 elderly people, all sedentary yet in good health, took a single dose of between 250 and 2,000 mg of UA. The researchers observed no side effects when compared with the control group, who were given a placebo. The participants were then split into four groups, each receiving a placebo, or a 250, 500 or 1,000 mg daily dose of UA for 28 days. Again, no adverse health impacts were found, even after prolonged ingestion. 
The team then assessed the efficacy of UA by looking at cellular and mitochondrial health biomarkers in the participants' blood and muscle tissue. The results were compelling: UA stimulates mitochondrial biogenesis—the process by which cells increase mitochondrial mass—in the same way as regular exercise. UA is the only known compound that re-establishes cells' ability to recycle defective mitochondria. In young people, this process happens naturally. But as we age, our body starts to lose its power to clean up dysfunctional mitochondria, causing sarcopenia (loss of skeletal muscle mass) and the weakening of other tissues. The team focused on slowing, or even reversing, this natural effect of aging. The paper, published today, also confirms that the compound is safe to eat. Amazentis, based at EPFL's Innovation Park, hopes to harness the promising results to quickly bring the product to market. "These latest findings, which build on previous preclinical trials, really crystallize how UA could be a game-changer for human health," says Johan Auwerx, a professor at LISP, the EPFL lab involved in the trial. An article published in 2016 showed that the lifespan of nematode worms exposed to UA increased by 45 percent—from around 20 to 30 days—when compared with the control group. Likewise, older mice showed 40 percent better endurance while running after two weeks of treatment. The compound may thus have even more secrets to reveal about its benefits for human health.
10.1038/s42255-019-0073-4
Biology
Squeezing cells into stem cells
Caiazzo M, Okawa Y, Ranga A, Piersigilli A, Tabata Y, Lutolf MP. Defined three-dimensional microenvironments boost the induction of stem cell pluripotency. Nature Materials 11 January 2016. DOI: 10.1038/nmat4536 Journal information: Nature Materials
http://dx.doi.org/10.1038/nmat4536
https://phys.org/news/2016-01-cells-stem.html
Abstract Since the discovery of induced pluripotent stem cells (iPSCs), numerous approaches have been explored to improve the original protocol, which is based on a two-dimensional (2D) cell-culture system. Surprisingly, nothing is known about the effect of a more biologically faithful 3D environment on somatic-cell reprogramming. Here, we report a systematic analysis of how reprogramming of somatic cells occurs within engineered 3D extracellular matrices. By modulating microenvironmental stiffness, degradability and biochemical composition, we have identified a previously unknown role for biophysical effectors in the promotion of iPSC generation. We find that the physical cell confinement imposed by the 3D microenvironment boosts reprogramming through an accelerated mesenchymal-to-epithelial transition and increased epigenetic remodelling. We conclude that 3D microenvironmental signals act synergistically with reprogramming transcription factors to increase somatic plasticity. Main The manipulation of mammalian cell morphology can induce a variety of behavioural changes including proliferation, migration, apoptosis and differentiation 1 , 2 , 3 , 4 , 5 , 6 , 7 . These morphologically driven processes are directly controlled by the cell microenvironment, and there is mounting evidence that biophysical signals conveyed by the extracellular matrix (ECM) are responsible for these changes in cell fate 8 , 9 , 10 . On a molecular level, shape-induced cell fate changes are controlled by changes in gene expression patterns, which themselves are precisely regulated by chromatin organization through a number of different post-translational histone modifications such as acetylation and methylation 11 . Reprogramming of somatic cells is considered to be a multi-step process characterized by an early and late phase of transcriptome and proteome resetting 12 . 
Notably, genome-wide analysis has revealed that intermediate cell populations that eventually form iPSCs are characterized by the activation of genes responsible for cytoskeleton organization during the first three days of reprogramming 13 , and quantitative proteomic analysis has shown strong induction of proteins related to the regulation of chromatin organization during the same time frame 14 . Taken together, cytoskeletal and epigenetic alterations are two critical events that mark the initiation phase of the reprogramming process. Although previous studies have demonstrated profound effects of the ECM on cell shape and accompanying alteration in chromatin structure, and recent work has revealed that iPSC generation can be influenced by biophysical parameters in 2D culture 15 , the role of 3D microenvironmental cues on somatic-cell reprogramming remains unexplored. To characterize these essential early events in establishing the pluripotent state of iPSCs, we report a new reprogramming strategy using chemically defined 3D ECMs (ref. 5 ). Such matrices permit a precise control over the physiochemical characteristics of the cellular microenvironment that is unachievable in 2D culture systems. We demonstrate that biophysical effectors linked to 3D cell confinement induce immediate alterations in cell morphology that facilitate mesenchymal-to-epithelial transition (MET), as well as histone modifications essential for the initiation of reprogramming. These results suggest previously unknown mechanisms underlying somatic-cell reprogramming and highlight the functional importance of the interaction between cells and their ECM in the regulation of cell fate. 3D microenvironments to promote pluripotency To find an optimal synthetic microenvironment for iPSC generation, we first used mouse embryonic stem cells (ESCs) to determine 3D matrix compositions that best promote ESC self-renewal and pluripotency. 
To do so, we modulated the mechanical properties of enzymatically crosslinked poly(ethylene glycol) (PEG)-based hydrogels by varying polymer content 16 , 17 . Additionally, to mimic the biochemical features of native ECMs, we functionalized the otherwise inert PEG network with the fibronectin-derived adhesion peptide RGDSP (arginine-glycine-aspartate-serine-proline). We then encapsulated ESCs in soft gels (shear modulus G ′ = 300 ± 35 Pa, as determined by rheometry) using either matrix metalloproteinase (MMP)-degradable (containing the MMP substrate GPQG ↓ IWGQ; ↓ indicating the cleavage site) or -nondegradable (containing the MMP-insensitive sequence GDQGIAGF) PEG networks 18 . The results clearly showed that degradable gels allowed for better cell proliferation ( Supplementary Fig. 1a–c ). We further quantified ESC proliferation in gels of variable stiffness and cell seeding density ( Supplementary Fig. 1d–h ). These experiments, consistent with our recently published data 19 , showed that degradable and soft gels, at a seeding density of 1,000 cells μl −1 , resulted in optimal proliferation rates similar to the 2D control condition, and gave rise to the highest levels of pluripotency marker expression ( Supplementary Fig. 1i ). Indeed, following encapsulation in 3D gels, individual ESCs were uniformly distributed throughout these matrices and expanded into colonies of cells positive for NANOG, OCT4 and alkaline phosphatase ( Fig. 1a–f ). Furthermore, Alamar blue assays confirmed sustained viability in PEG-based hydrogels (data not shown). Notably, whereas ESCs cultured in two dimensions rapidly differentiated following LIF removal, 3D-encapsulated cells maintained their typical undifferentiated morphology and Oct4 –GFP expression as much as nine days after removal of LIF ( Fig. 1g–i ), suggesting that spatial confinement of cells in the 3D microenvironment plays a role in maintaining pluripotency. Figure 1: 3D PEG hydrogel cultures maintain ESC pluripotency. 
a – c , ESC colony growth 1, 3 and 5 days after encapsulation in PEG hydrogels. BF, bright field. d , Immunostaining for NANOG (the inset shows the relative DAPI staining). e , Oct4 -driven GFP expression. f , Alkaline phosphatase (AP) staining. g , h , Oct4 –GFP expression of ESCs grown without LIF in three dimensions for one week ( g ) or in two dimensions ( h ). Insets show the bright-field images with undifferentiated and differentiated morphologies in 3D and 2D conditions, respectively. i , Flow cytometry analysis of Oct4 –GFP expression of ESCs cultured in 2D and 3D conditions with or without LIF. Scale bars, 100 μm. Full size image iPSC generation in defined 3D microenvironments The promising results obtained with 3D ESC culture prompted us to use the same 3D microenvironment formulation for iPSC generation. To assess whether reprogramming to pluripotency could be achieved in a 3D context, we employed a well-defined mouse model system based on the drug-inducible expression of the four Yamanaka factors 20 ( Fig. 2a ). Pou5f1 tm 2( EGFP ) mice harbouring an IRES–EGFP fusion cassette downstream of the stop codon of the Oct4 gene 21 were crossed with the mutant mice R26 rtTA ; Col1a1 4 F 2 A harbouring both the doxycycline-inducible polycistronic 4F2A cassette ( Oct4 , Sox2 , Klf4 and Cmyc ) and the constitutively expressed reverse tetracycline transactivator (rtTA; ref. 22 ). We then derived primary tail-tip fibroblasts from the resulting Pou5f1 tm 2( EGFP ) ; R26 rtTA ; Col1a1 4 F 2 A (4F2A– Oct4 –GFP) mice and encapsulated them in PEG-based hydrogels. Reprogramming was initiated the following day by addition of doxycycline, and the appearance of Oct4 –GFP+ iPSC colonies in 3D gels, termed ‘3DiPSCs’, was quantified over a period of two weeks. Strikingly, Oct4 –GFP-expressing colonies began to appear in these ‘standard’ gels after only six days of doxycycline induction ( Supplementary Movie 1 ). 
After 16 days of reprogramming and an additional seven days without doxycycline, we found by immunofluorescence staining that 3DiPSCs expressed the main pluripotency markers OCT4, SOX2 and NANOG ( Fig. 2b ) and showed transcriptional pluripotency signatures comparable to control mouse ESCs and iPSCs generated in 2D culture ( Fig. 2c ). Moreover, 3DiPSCs could differentiate into all three germ layers in vitro ( Fig. 2d–f ). To further compare the 3DiPSCs with ESCs, we employed bisulphite sequencing to assess the methylation states of CpG dinucleotides in the Oct4 and Nanog promoter regions. These experiments demonstrated that Oct4 and Nanog promoter regions were highly demethylated compared with the parental fibroblasts and possessed methylation states closely resembling those of ESCs ( Fig. 2g ). Most importantly, 3DiPSCs were competent to generate chimaeric mice ( Fig. 2h ) and differentiated into all three germ layers in an in vivo teratoma assay ( Fig. 2i–l ). Figure 2: Generation of 3DiPSCs. a , Schematic representation of the one-step 3D reprogramming protocol. b , Immunocytochemistry analysis of pluripotency markers in 3DiPSCs. c , RT–PCR analysis of pluripotency marker genes in 2 clones of 3DiPSCs (3DiPSCs-1, 3DiPSCs-2) compared with 2DiPSCs (2DiPSCs-1, 2DiPSCs-2), ESCs and tail-tip fibroblasts (TTFs). d – f , Immunostaining showing differentiation of 3DiPSCs into neuroectodermal (TUBB3), mesodermal (MHC, myosin heavy chain) and endodermal (FOXA2) cell types. g , Methylation analysis of Oct4 and Nanog promoters in 3DiPSCs, ESCs and tail-tip fibroblasts. h , Chimaeric mouse generated with 3DiPSCs. i – l , Teratoma assay demonstrating that 3DiPSCs are able to differentiate in vivo into neuroectoderm, mesoderm and endoderm. Scale bars, 50 μm ( b , d – f , k , l ) or 25 μm ( j ). dox, doxycycline. 
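The bisulphite sequencing used above to assess the Oct4 and Nanog promoters rests on a simple rule: bisulphite converts unmethylated cytosines so that they read as thymine, whereas methylated CpG cytosines remain cytosine. A toy sketch of per-CpG methylation calling under strong simplifying assumptions (error-free, pre-aligned, forward-strand reads; invented sequences):

```python
def cpg_methylation_fractions(reference, reads):
    """Per-CpG methylation fraction from bisulphite-converted reads.

    After bisulphite treatment, an unmethylated C is sequenced as T,
    while a methylated C (here, only considered at CpG sites) stays C.
    """
    cpg_sites = [i for i in range(len(reference) - 1)
                 if reference[i:i + 2] == "CG"]
    fractions = {}
    for pos in cpg_sites:
        calls = [read[pos] for read in reads if read[pos] in "CT"]
        fractions[pos] = sum(base == "C" for base in calls) / len(calls)
    return fractions

# Two toy reads over a reference with CpG sites at positions 1 and 4:
# each site is methylated in one read and converted in the other
ref = "ACGTCGA"
fractions = cpg_methylation_fractions(ref, ["ACGTTGA", "ATGTCGA"])
```

Real pipelines must additionally handle the reverse strand, alignment gaps, sequencing errors and conversion efficiency, none of which this sketch attempts.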
Full size image 3D microenvironments enhance reprogramming After confirming the feasibility of generating iPSCs in 3D culture, we used tail-tip fibroblasts derived from 4F2A– Oct4 –GFP mice to perform a comparative study with the standard 2D culture protocol. As ESC self-renewal in three dimensions was found to be strongly cell density-dependent ( Supplementary Fig. 1g ), we first tested how this parameter would influence 3D reprogramming. Indeed, 3D reprogramming was also highly dependent on cell density and, at an optimal density of 500 cells μl −1 , was found to result in significantly higher reprogramming efficiency compared with the 2D culture conditions ( Fig. 3a ). Figure 3: 3D culture accelerates reprogramming and increases iPSC generation efficiency. a , 3DiPSC generation efficiency obtained with different cell densities. b , Representative bright-field images taken during the derivation of 2D and 3DiPS cells showing early colony formation (day 6), Oct4 –GFP+ colony formation (day 10), and mature, doxycycline-independent iPSC colony formation (day 20) in the 3D condition. Insets show Oct4 –GFP expression. c , Quantification of Oct4 –GFP+ colonies over the course of the reprogramming process in 2D and 3D conditions. d , Experimental design of iPSC stability study. Doxycycline was removed at different time points to determine the kinetics of iPSC generation. e , 3D culture accelerates acquisition of independence from exogenous reprogramming factor expression. Data are expressed as means ± s.e.m. ∗∗, p < 0.01; ∗, p < 0.05. Scale bar, 30 μm. Biological replicates ( n = 6) are represented in a , c , and e . The same starting number of cells per sample was used in all comparative experiments in 2D and 3D conditions. Full size image At the observed optimal cell density, Oct4 –GFP-expressing colonies began to appear in three dimensions two days earlier than in two dimensions ( Fig. 3b, c ). 
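Efficiency comparisons like the one in Fig. 3a reduce to Oct4–GFP+ colony counts per cells seeded, summarized as mean ± s.e.m. across biological replicates (n = 6 in the figure). A small sketch with invented counts:

```python
import numpy as np

def efficiency_percent(colony_counts, cells_seeded):
    """Mean and s.e.m. of reprogramming efficiency (% Oct4-GFP+
    colonies per seeded cell) across biological replicates."""
    eff = 100.0 * np.asarray(colony_counts, dtype=float) / cells_seeded
    sem = eff.std(ddof=1) / np.sqrt(len(eff))
    return eff.mean(), sem

# Invented counts from six replicate gels, 10,000 cells seeded in each
mean_eff, sem_eff = efficiency_percent([12, 15, 9, 14, 11, 13], 10_000)
```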
To understand how the 3D environment could engender such different reprogramming dynamics and efficiencies, we next compared cell viability and proliferation in 2D and 3D conditions. A LIVE/DEAD analysis by Calcein AM and ethidium homodimer-I staining revealed comparable survival rates ( Supplementary Fig. 2a ), whereas the proliferation rate assessed by EdU labelling substantially differed ( Supplementary Fig. 2b ): in two dimensions, fibroblasts proliferate rapidly between day 2 and day 6 followed by a lag phase most likely caused by cell confluence. Cells then re-enter a phase of rapid cell division between day 10 and 14, representing the colony-formation phase indicative of reprogramming. In contrast, cells in 3D culture initially proliferate much slower, but continue to steadily increase their proliferation rate over the course of the experiment. These data suggest that the 3D microenvironment might keep cells in an active proliferative state throughout the entire reprogramming process ( Supplementary Fig. 2b–e ). Further analysis of the 3D reprogramming dynamics showed that fibroblasts cultured in PEG hydrogels lost their typical spindle-shaped mesenchymal morphology and formed spherical colonies as early as day 3 ( Supplementary Fig. 3a, b ). Accordingly, forward scatter analysis by flow cytometry revealed that within the first three days, 3D-encapsulated cells acquire an iPSC-like size, which was stably maintained, whereas most cells in two dimensions reach that size only during the late phase of reprogramming ( Supplementary Fig. 3c, d ). To provide further proof of accelerated cell reprogramming in three dimensions, we examined whether differences exist in the temporal requirement of factor expression between the two conditions. Doxycycline was therefore applied for 6, 8, 10, 12, 14 or 16 days on reprogramming fibroblasts ( Fig. 3d ). 
Strikingly, in three dimensions, an 8-day doxycycline treatment was already sufficient to give rise to iPSCs, in contrast to two dimensions ( Fig. 3e ), providing evidence for the faster induction of iPSCs in 3D culture. Optimization of iPSC generation by 3D artificial niches Our previous experiments showed that a 3D microenvironment can strongly affect iPSC generation when optimized for the maintenance of ESC pluripotency in three dimensions. However, as fibroblast and early iPSC states might require significantly different 3D matrix characteristics, we postulated that an optimization of this ‘standard’ 3D matrix composition might further improve reprogramming efficacy. To test this, we took advantage of the tunable nature of our artificial ECM platform to systematically probe how biophysical and biochemical cues could influence reprogramming ( Fig. 4 ). Figure 4: Optimization of iPSC reprogramming efficiency by modulation of 3D microenvironment. a , 3D HTS approach to simultaneously probe 128 unique microenvironmental conditions (in triplicate). Read-outs: number, phenotype and Oct4 –GFP intensities of colonies in response to each microenvironment. b , Quantification of Oct4 –GFP-positive colonies in all 128 unique conditions analysed by HTS in 3D PEG hydrogels. Yellow bar indicates the ‘standard’ 3D condition. c , Microenvironmental components within the identified top 20% unique conditions. d , Relative contribution and interactions of component categories to global regulatory landscape (MP, mechanical properties; DG, degradability (MMP sensitivity); SF, soluble factors; EC, extracellular matrix and cell–cell contact proteins; INT, interactions; Epsilon, model uncertainty). e , Global analysis of individual component contributions to reprogramming efficiency. f , Comparison of reprogramming efficiency in HTS-derived condition versus standard 3D and 2D conditions. Data are expressed as means ± s.e.m. 
All pairwise differences were computed using the Tukey–Kramer method. ∗∗∗, p < 0.001; ∗∗, p < 0.01; ∗, p < 0.05. Biological replicates ( n = 5) are represented in e . Full size image To this end, we employed a previously developed 3D high-throughput screening (HTS) approach 19 to simultaneously probe 128 unique microenvironmental conditions in triplicate. Using an automatic high-throughput imaging system we were able to detect the number, phenotype and Oct4 –GFP intensities of colonies in response to each microenvironment ( Fig. 4a ). We used this approach to tune matrix stiffness between 300 and 1,200 Pa ( G ′) and explore the susceptibility of the gel to MMP degradation using gel building blocks with a high or intermediate degree of degradability. Furthermore, we reasoned that we could improve the induction of iPSCs by enriching PEG hydrogels with proteins previously shown to play a role in regulating pluripotency. Thus, we selected and screened several cell–cell interaction and ECM proteins, namely E-cadherin 23 , Epcam 24 , laminin 25 , fibronectin 26 (and its minimal signalling peptide RGD and fragment F9-10), vitronectin 27 and collagen IV (ref. 28 ). Finally, we analysed the synergistic effect of Wnt pathway stimulation, achieved here by the GSK3 β inhibitor CHIR99021, which is also known to contribute to pluripotency induction 29 . The complete list of the 128 screened conditions is reported in Supplementary Table 1 . The outcome of this HTS showed that our previously used ‘standard’ 3D condition could be substantially improved upon. Indeed, our ‘standard’ condition (marked by the yellow bar in Fig. 4b ) was in the lower half of all conditions, ranked by the number of identified Oct4 –GFP-positive colonies. A detailed analysis of the top 20% conditions clearly highlighted that the ideal stiffness lay between 300 and 600 Pa and that enrichment of the microenvironment with laminin or Epcam resulted in marked improvements in 3DiPSC generation. 
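As a concrete illustration of the ranking step just described, the sketch below (Python, using mock colony counts rather than the actual screen data; the category level names are illustrative) ranks 128 conditions by Oct4–GFP-positive colony count, extracts the top 20% (26 of 128) and tallies the share of each microenvironmental component within them:

```python
# Illustrative sketch, NOT the authors' analysis code: rank 128 screened
# conditions by colony count and tally components within the top 20%.
import math
import random
from itertools import product

random.seed(0)

# Four screened categories; the level names below are illustrative stand-ins.
CATEGORIES = {
    "MP": ["300Pa", "600Pa", "900Pa", "1200Pa"],          # mechanical properties
    "DG": ["high", "intermediate"],                        # MMP degradability
    "EC": ["blank", "E-cadherin", "Epcam", "laminin",
           "fibronectin", "RGD", "vitronectin", "collagenIV"],
    "SF": ["blank", "CHIR99021"],                          # soluble factor
}

# Build the 128 unique conditions (4 x 2 x 8 x 2) with mock colony counts.
conditions = [
    {"MP": mp, "DG": dg, "EC": ec, "SF": sf,
     "colonies": random.randint(0, 40)}                    # mock data
    for mp, dg, ec, sf in product(*CATEGORIES.values())
]

# Rank by colony count and keep the top 20% (26 of 128 conditions).
ranked = sorted(conditions, key=lambda c: c["colonies"], reverse=True)
top = ranked[: math.ceil(0.2 * len(ranked))]

def tally(top_conditions, category):
    """Share (%) of each component of one category within the top set."""
    counts = {}
    for cond in top_conditions:
        counts[cond[category]] = counts.get(cond[category], 0) + 1
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

for cat in CATEGORIES:
    print(cat, tally(top, cat))
```

With the real screen data, the per-category tallies correspond to the component percentages plotted in Fig. 4c.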
Notably, we also observed a strong correlation between the activation of the Wnt pathway by CHIR99021 and reprogramming efficiency in three dimensions ( Fig. 4c ). We then performed a comprehensive quantitative analysis of Oct4 –GFP expression and an image-based assessment of morphological parameters of the colonies detected by high-throughput imaging and grouped all of the conditions by hierarchical clustering ( Supplementary Fig. 5a ). This analysis confirmed that our automated identification of the number of GFP-positive colonies correlated with manual image assessment. Furthermore, it identified colony area as being closely correlated to GFP-positive colony number, indicating that conditions of high proliferation would be favourable to reprogramming. Interestingly, colony morphology and GFP intensity were not closely correlated with the appearance of Oct4 –GFP-positive colonies, suggesting that 3D reprogramming efficiency was decoupled from levels of Oct4 –GFP expression and colony shape. To investigate the conditions leading to the identified clustered patterns, we further assessed selected clusters with the highest and lowest reprogramming efficiency and discovered that they resulted from highly divergent microenvironmental conditions ( Supplementary Fig. 5b, c ). To quantify the relative contribution of the various microenvironmental categories to reprogramming efficiency as well as to identify significant interactions between these microenvironmental categories, we created a generalized linear model (GLM; see Methods ) relating Oct4 –GFP colony efficiency to all 128 input conditions ( Fig. 4d ). This analysis also permitted us to identify the global landscape of contributions of individual factors to reprogramming efficiency, clearly identifying global positive and negative regulators of reprogramming and corroborating the optimal parameters arising from the identified top 20% conditions ( Fig. 4e ). 
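The authors fit their generalized linear model in R (with stepAIC; see Methods). Purely as a hedged illustration of the underlying idea of relating colony counts to dummy-coded categorical factors, the sketch below uses simulated counts, main effects only, and ordinary least squares in place of a full GLM:

```python
# Minimal sketch (simulated data, OLS instead of the authors' R GLM):
# relate colony counts to dummy-coded microenvironmental categories.
import numpy as np

rng = np.random.default_rng(1)

# Factor levels per category, mirroring the screen design (4 x 2 x 8 x 2).
levels = {"MP": 4, "DG": 2, "EC": 8, "SF": 2}
n = 128 * 3  # 128 conditions in triplicate

# Design matrix: intercept plus one dummy-column block per category
# (the first level of each category is absorbed into the intercept).
X_cols = [np.ones(n)]
chosen = {}
for cat, k in levels.items():
    lv = rng.integers(0, k, size=n)
    chosen[cat] = lv
    for j in range(1, k):  # drop reference level
        X_cols.append((lv == j).astype(float))
X = np.column_stack(X_cols)

# Simulated response: EC level 2 (an "Epcam"-like protein) and SF level 1
# (a "CHIR"-like factor) boost colony counts; coefficients are invented.
y = (5.0 + 4.0 * (chosen["EC"] == 2) + 3.0 * (chosen["SF"] == 1)
     + rng.normal(0, 0.5, size=n))

# Least-squares fit recovers the per-component effect sizes.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated effects:", np.round(beta, 2))
```

In this encoding, beta[6] is the "EC level 2" effect and beta[12] the soluble-factor effect; a ranked plot of such coefficients is the analogue of the global contribution landscape in Fig. 4e.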
To validate these results, we replicated the best condition of the 3D microenvironment screen (that is, 600 Pa, high degradability, gel enriched with Epcam, and CHIR) in 12-well plate culture, together with the previously used ‘standard’ condition. This experiment confirmed that fine-tuning a defined 3D microenvironment can further increase iPSC generation: compared with the standard protocol in 2D, our 3D approach resulted in a more than threefold increase in reprogramming efficiency ( Fig. 4f ). Finally, we sought to compare our 3D reprogramming conditions in chemically defined microenvironments with other 3D culture techniques. Therefore, we compared the efficiency of reprogramming in cells encapsulated in our ‘standard’ PEG gels, Matrigel and collagen type I gels, and a previously published biophysical reprogramming method based on microgrooves 15 . Our results showed that 3D PEG-based hydrogels are able to support the generation of a significantly higher number of Oct4 –GFP-positive colonies compared with collagen I or microgrooves ( Supplementary Fig. 4a ). Interestingly, 3D Matrigel also significantly increased iPSC generation to a level comparable to our ‘standard’ 3D PEG hydrogels ( Supplementary Fig. 4a ). However, neither collagen nor Matrigel supported the generation of iPSC colonies with a homogeneous morphology ( Supplementary Fig. 4b–e ). We believe that the relatively rapid degradation and variable formulation of naturally derived hydrogel systems such as collagen and Matrigel substantially limit the potential use of these 3D culture methods compared with our chemically defined system. Taken together, these data demonstrate that the modulation of the 3D microenvironment is crucial to obtain reprogramming efficiencies that may be unachievable with 2D or other 3D culture methods in which cell–matrix perturbations are limited. 
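The efficiency comparison above reduces to simple arithmetic; the sketch below uses hypothetical colony counts (the paper reports fold changes, not these raw numbers) together with the seeding density given in Methods:

```python
# Sketch with hypothetical colony counts (not the paper's raw data):
# reprogramming efficiency as Oct4-GFP+ colonies per input cell, and the
# fold change of an optimized 3D condition over a standard 2D protocol.
def efficiency(colonies, cells_seeded):
    return colonies / cells_seeded

cells_seeded = 30_000                       # 3 x 10^4 TTFs per well (Methods)
eff_2d = efficiency(30, cells_seeded)       # hypothetical 2D colony count
eff_3d = efficiency(95, cells_seeded)       # hypothetical optimized-3D count

fold = eff_3d / eff_2d
print(f"2D: {eff_2d:.2%}  3D: {eff_3d:.2%}  fold change: {fold:.2f}x")
```

With equal seeding, the fold change is simply the ratio of colony counts; the invented numbers here were chosen to give a "more than threefold" increase, matching the magnitude reported for Fig. 4f.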
3D culture promotes MET and epigenetic plasticity We next focused on elucidating the mechanisms that underlie the significant differences observed between cell reprogramming in 2D and 3D conditions. To this end, we began by analysing Yes-associated protein (YAP) signalling, one of the main pathways regulating stem cell fate choices induced by physical inputs 30 , with a previously documented role during reprogramming to pluripotency 31 . Surprisingly, we did not find any evident sign of nuclear YAP1 activity by immunohistochemistry in early colonies during 3D reprogramming (that is, at day 8; Supplementary Fig. 6a–c ). However, we noted that at the onset of reprogramming (day 1–3), a substantial fraction of fibroblasts in two dimensions expressed nuclear YAP1, whereas in 3D hydrogels this phenotype was never observed ( Supplementary Fig. 6d, e ). Consistently, at these time points the expression of YAP1 target genes ( Ankrd1 , Ctgf and Cyr61 ) in three dimensions was much weaker compared with 2D controls ( Supplementary Fig. 6g–i ). This differential YAP1 activity on surfaces versus in 3D hydrogels at the beginning of reprogramming, when cells are still ‘fibroblast-like’, is consistent with the work of Piccolo and colleagues, which has shown nuclear YAP/TAZ activity for cells on hard substrates (or large adhesive islands) and cytoplasmic, inactive YAP/TAZ for those on soft hydrogels (or small adhesive islands) 30 . These results could indicate that YAP/TAZ modulation may also play a role in the early phases of reprogramming, as has already been shown for Wnt pathway activity 32 , which is known to be linked to YAP/TAZ (ref. 33 ). The extent to which, and the mechanisms by which, these early differences in YAP activity explain our reprogramming results clearly require further elucidation. 
A detailed confocal microscopy-based assessment of the morphological changes during early reprogramming provided evidence that 3D conditions favoured the formation of colony-like structures after only three days of doxycycline exposure ( Supplementary Fig. 3 , and data not shown). The pronounced change in the shape of fibroblasts towards a round morphology within the first three days of 3D reprogramming suggested that the matrix induced the cells to undergo an apparent phenotypic change to an epithelial-like morphology. Indeed, MET is a key initiation step during the generation of iPSCs and is instrumental for colony formation 34 , 35 . We therefore analysed the transcripts of key MET markers during the reprogramming process ( Fig. 5a–c ) and found that a 3D microenvironment accelerates the expression of epithelial markers (E-cadherin and Epcam ) and results in a concomitant loss of the fibroblast phenotype ( Thy1 ). Accordingly, immunocytochemical analysis highlighted that E-cadherin protein appears much earlier in 3D (day 3) than in 2D culture (day 5) ( Fig. 5d–f and data not shown). These data suggest that the 3D microenvironment accelerates iPSC generation through an earlier induction of the MET process. Figure 5: 3D reprogramming accelerates MET and enhances epigenetic plasticity. a – c , Real-time RT–PCR analysis of MET markers during 2D and 3D reprogramming. Transcripts were normalized to Gapdh levels. d – f , Immunocytochemical analysis for E-cadherin in 2D or 3D culture conditions after 3 days of reprogramming and relative quantification. g , Cells cultured for 3 days in 2D and 3D conditions with or without doxycycline (dox) induction were fixed and stained for AcH3 or H3K4me3 and signal intensity was quantified by image analysis. h , Western blot analysis of epigenetic marks in 2D and 3D conditions. Valproic acid (VPA, 0.5 mM) was used as a positive control. Data are expressed as means ± s.e.m. ∗∗∗, p < 0.001; ∗∗, p < 0.01; ∗, p < 0.05. 
Scale bars, 30 μm. Biological replicates ( n = 3) are represented in a – c and ( n = 6) f . Full size image We next asked whether the observed rapid changes in cell morphology and size in 3D culture are accompanied by rearrangements in chromatin structure. Histone acetylation and methylation were chosen here as candidate histone marks because their role in iPSC generation has been extensively studied 36 . In particular, tri-methylation at lysine 4 of histone H3 (H3K4me3) has been identified to play a role in the activation of genes responsible for proliferation and metabolism during the early phase of reprogramming 37 . Furthermore, reduction in H3K4me3 levels in mouse ESCs has been demonstrated to cause attenuated expression of self-renewal markers including OCT4, NANOG and SSEA1 (ref. 38 ), highlighting its importance in the maintenance of pluripotency. Histone H3 acetylation (AcH3) is also critical for cell reprogramming, as it marks open chromatin and thus promotes active transcription. Accordingly, histone deacetylase inhibitors such as valproic acid improve iPSC generation efficiency and allow successful reprogramming in the absence of c-Myc and Klf4 (ref. 39 ). Therefore, we monitored AcH3 and H3K4me3 levels by immunofluorescence and image analysis during the early phase (day 3) of 2D and 3D reprogramming. Interestingly, we observed a significant increase in H3K4me3 levels in the 3D relative to the 2D culture condition as well as an increase in the AcH3 mark in 3D culture ( Fig. 5g , left panels). A similar trend in histone modifications was observed even in fibroblasts cultured without doxycycline induction ( Fig. 5g , right panels), a result that was confirmed by western blot analysis ( Fig. 5h ). However, H3K4me3 and AcH3 increases were not detectable by western blot in 3D samples treated with doxycycline, possibly because these epigenetic marks were already strongly induced by the reprogramming factors. 
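The per-nucleus intensity quantification behind these measurements (threshold the DAPI channel, keep nucleus-sized particles, measure mean mark intensity per particle; see the image-analysis Methods) can be outlined as follows. This is not the authors' ImageJ macro but an equivalent sketch run on a synthetic two-channel image:

```python
# Sketch of DAPI-mask-based intensity quantification on SYNTHETIC data;
# mirrors the described ImageJ macro, not the authors' actual code.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic 64x64 field with two "nuclei" in the DAPI channel.
dapi = np.zeros((64, 64))
dapi[10:20, 10:20] = 200.0   # nucleus 1
dapi[40:52, 35:47] = 180.0   # nucleus 2
dapi += rng.normal(0, 5, dapi.shape)

# Measurement channel (e.g. an AcH3 or H3K4me3 stain), noise-free here.
mark = np.zeros((64, 64))
mark[10:20, 10:20] = 50.0
mark[40:52, 35:47] = 120.0

# 1) Threshold DAPI to a binary mask (threshold value is illustrative).
mask = dapi > 100.0
# 2) Detect particles and filter by size (drop specks below 20 px).
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 20]
# 3) Mean mark intensity within each retained particle.
means = ndimage.mean(mark, labels, keep)
print("nuclei:", len(keep), "mean intensities:", np.round(np.asarray(means), 1))
```

Averaging such per-particle means across fields and replicates gives the per-condition intensity values plotted in Fig. 5g.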
Taken together, these data show that the exposure of single cells to a 3D microenvironment induces marked changes in chromatin structure, which is well known to be essential for overcoming the epigenetic barriers of cell reprogramming. 3D culture promotes pluripotency induction in human cells Our final aim was to test whether iPSCs could be generated in a 3D microenvironment by reprogramming of human somatic cells. Therefore, we tried to reprogramme human fibroblasts into iPSCs using a lentiviral vector-based approach, comparing 2D and our 3D PEG-hydrogel-based protocols. Both ‘standard’ and HTS-optimized 3D conditions were able to accelerate human iPSC generation. NANOG expression became detectable in three dimensions after only 14 days of reprogramming, whereas it was undetectable in standard 2D conditions at that relatively early time point ( Fig. 6a and data not shown). Moreover, after 6 weeks of reprogramming, the efficiency of human 3DiPSC generation was found to be up to 2.5-fold higher in 3D culture ( Fig. 6b ). Human 3DiPSCs showed a mature morphology ( Fig. 6c ) and expressed SOX2, NANOG, OCT4 and SSEA-4, the cardinal markers of human pluripotency ( Fig. 6d–g ). Pluripotency of the reprogrammed human cells was further corroborated by multilineage in vitro differentiation ( Fig. 6h–j ) and in vivo teratoma formation ( Fig. 6k–n ). These results demonstrate that an engineered 3D cell microenvironment can also be deployed to enhance human reprogramming, thereby opening up our method for translational applications. Figure 6: 3D generation of human iPSCs. a , Immunostaining for NANOG after 14 days of 3D reprogramming. Inset shows the corresponding bright-field image (BF). b , Comparison of reprogramming efficiency in 2D and 3D culture (‘standard’ and conditions optimized by HTS). c , Human 3DiPSC colony after 6 weeks of reprogramming. d – g , Immunocytochemistry analysis of pluripotency markers in human 3DiPSCs. 
h – j , Immunostaining showing differentiation of human 3DiPSCs into neuroectodermal (TUBB3), mesodermal (SMA, smooth muscle actin) and endodermal (SOX17) cell types. k – n , Teratoma assay showing that human 3DiPSCs are able to differentiate in vivo into neuroectoderm, mesoderm and endoderm. Data are shown as means ± s.e.m. ∗∗, p < 0.01; ∗, p < 0.05. Biological replicates ( n = 5) are represented in a – c . Scale bars, 30 μm ( a , n ), 100 μm ( c – h , l , m ) or 50 μm ( i , j ). The same starting number of cells per sample was used in the comparative experiment ( b ) in 2D and 3D conditions. Full size image Recently, several research groups have investigated 3D culture systems to identify extrinsic factors mediating ESC pluripotency (for example, refs 40 , 41 , 42 , 43 , 44 ). To the best of our knowledge, there are no complementary studies exploring the generation of iPSCs in 3D microenvironments. Our PEG-based hydrogel system represents a powerful model system to dissect the role of microenvironmental signals in modulating cell reprogramming. Whereas current 2D methods may limit iPSC generation owing to rapid cell confluency, the 3D method we present here overcomes this issue by promoting proliferation throughout the entire reprogramming process. Our results suggest that the 3D matrix selects for colony-forming iPSCs because the proliferation of non-colony-forming cells is limited in the 3D milieu. Notably, compared with reprogramming in conventional 2D conditions, reprogramming kinetics and efficiency are improved in three dimensions, most likely caused by the action of biophysical cues from the 3D microenvironment that may cooperate with the induced transcription factors. We also note that the observed cell confinement and controlled proliferation make our approach particularly attractive for the scalable, automated generation of iPSCs, which is difficult to implement with standard 2D methods. 
Our experiments show that early events during 3D reprogramming are accompanied by pronounced morphological changes that may cause both chromatin remodelling and accelerated MET, two key events for the initiation of iPSC generation. Consistent with the idea that the 3D environment can have a profound effect on cell reprogramming, we observed higher levels of AcH3 and H3K4me3 in 3D conditions even without transgene overexpression ( Fig. 5g, h ). Interestingly, our data are in accordance with previously published work showing that biophysical stress can modulate the epigenetic state in cell reprogramming 15 . Our findings represent a first proof of principle for 3D reprogramming and pave the way for further investigations into the discovery of a synthetic ‘reprogramming niche’. This concept opens up the intriguing possibility of shifting from genetic to microenvironmental manipulations of somatic cell fate, which would be of particular interest for clinical applications of iPSC technology. Furthermore, model systems such as those used in this work could help achieve a deeper understanding of cell-extrinsic factors involved in cell fate regulation. Methods Mice. B6;129S4-Pou5f1tm2Jae and Gt(ROSA)26Sortm1(rtTA*M2)Jae Col1a1tm3(tetO-Pou5f1,-Sox2,-Klf4,-Myc)Jae mice were purchased from The Jackson Laboratory, bred to obtain B6;129S4-Pou5f1tm2Jae/Gt(ROSA)26Sortm1(rtTA*M2)Jae Col1a1tm3(tetO-Pou5f1,-Sox2,-Klf4,-Myc)Jae (4F2A– Oct4 –GFP) mice and maintained in micro-isolator cages. Mice were provided continuously with sterile food, water and bedding. All in vivo procedures were carried out in accordance with institutional guidelines of Canton Vaud. Mouse ESC culture. 
Oct4 –GFP mouse ESCs (R1) were maintained on 0.2% gelatin-coated tissue culture plates in mouse ESC medium: Dulbecco’s modified Eagle’s medium (DMEM) supplemented with 2 mM GlutaMAX, 15% (v/v) ESC-qualified fetal bovine serum (FBS; Fisher Scientific), 1 mM sodium pyruvate (Life Technologies), 1× non-essential amino acids (Gibco), 1% (v/v) penicillin/streptomycin (Gibco), 0.1 mM 2-mercaptoethanol (Gibco), and 10 3 U ml −1 mouse leukaemia inhibitory factor (LIF; Millipore). Medium was stored at 4 °C and was used within 2 weeks. Isolation and culture of tail-tip fibroblasts. Tail-tip fibroblasts (TTFs) were isolated from 4F2A– Oct4 –GFP mice. ∼ 1 cm tail tips were cut from euthanized 8-week-old mice. Dermis was peeled off from the tails, and the remaining tissues were minced into small pieces and incubated in 0.1% trypsin (Invitrogen) for 30 min at 37 °C. Digested tissues were neutralized with FBS, collected by centrifugation (300 × g for 5 min), and placed onto 0.2% gelatin-coated tissue culture plates in fibroblast medium consisting of Dulbecco’s modified Eagle’s medium (DMEM) supplemented with 2 mM GlutaMAX, 10% (v/v) fetal bovine serum (FBS; Life Technologies), 1 mM sodium pyruvate (Life Technologies), 1× non-essential amino acids (Life Technologies) and 1% (v/v) penicillin/streptomycin (Life Technologies). Fibroblasts were allowed to migrate out for 7 days. TrypLE Express (Life Technologies) was used for routine passaging of TTFs. All cells were cryopreserved at passage 1 and were used exclusively at passage 3 for all experiments. Cell reprogramming. To reprogramme in adherent 2D conditions, 3 × 10 4 primary TTFs were plated on 0.2% gelatin-coated 12-well plates. To reprogramme in 3D conditions, primary TTFs were mixed with gel precursors (see below). 
Crosslinking was performed in 50 mM Tris-buffered saline (TBS): 50 mM Tris, 50 mM CaCl 2 , pH 7.6, and 10 U ml −1 thrombin-activated factor XIIIa, conditions that have been previously reported as suitable for 3D cell encapsulation 16 . Hydrogels were formulated in a 12-well plate (1 drop per well) in 30 μl drops using 1,000 cells μl −1 . Crosslinking was carried out for 30 min at 37 °C in a humidified incubator. The day after encapsulation, cells were shifted to mouse ESC medium containing 2 μg ml −1 doxycycline (Sigma-Aldrich) and medium exchange was carried out every 48 h during the entire reprogramming process. PEG hydrogels. For convenience, we reproduce below methodological details originally available in refs 17 , 18 . Briefly, the factor XIIIa substrate peptides TG-MMP-Lys and TG-Gln were added to eight-arm PEG vinylsulphone (PEG-VS) in a 1.2-fold molar excess over VS groups in 0.3 M triethanolamine (pH 8.0) at 37 °C for 2 h. The reaction solution was subsequently dialysed (Slide-A-Lyzer, MWCO 7K; or Snake Skin, MWCO 10K, PIERCE) against ultrapure water for 3 days at 4 °C. After dialysis, the product (8-PEG–MMP-Lys and 8-PEG-Gln, respectively) was lyophilized to obtain a white powder. Factor XIII activation. Activation of factor XIII (FXIII) was achieved as described previously 18 . Briefly, 1 ml of 200 U ml −1 FXIII (CSL Behring) was incubated for 30 min at 37 °C with 100 μl of 20 U ml −1 thrombin (Sigma-Aldrich) in the presence of 2.5 mM CaCl 2 . Activated FXIII was aliquoted and stored at −80 °C until use. Characterization of hydrogel mechanical properties. For convenience, we reproduce below methodological details originally available in ref. 19 . Hydrogel discs (30 μl) were made and allowed to swell in 50 mM TBS for 24 h at room temperature. Swollen hydrogel disks of 1 mm thickness were placed between the two plates of a Bohlin CV 120 rheometer (Bohlin Instruments) and compressed up to 80% of their original thickness. 
Measurements were conducted in constant strain (5%) mode. Shear stress was recorded over the frequency range of 0.1–1 Hz and average storage moduli ( G ′) over the frequency range were obtained. Storage modulus ( G ′) was plotted as a function of PEG content (w/v) for each of the two MMP sensitivities. Gel functionalization with cell–cell interaction proteins. For convenience, we reproduce below methodological details originally available in ref. 19 . The Fc-tag–Protein A conjugation strategy was employed to bind cell–cell interaction proteins to the hydrogel network. The modification of Protein A was achieved by functionalization with a maleimide group by reacting with NHS–PEG–maleimide in 10-fold molar excess, followed by Gln-peptide attachment through its cysteine side chain. Fc-tagged E-cadherin and EpCAM (R&D Systems) were premixed with Gln–Protein A at a 1.66-fold molar excess for 30 min at room temperature. The obtained products were aliquoted and stored at −20 °C until use. 3DiPSC culture and differentiation. Stably reprogrammed 3DiPSCs were collected from hydrogels by TrypLE Express treatment for 10 min at 37 °C and grown for at least three passages on feeder cells in mouse ESC medium. Following this initial step, 3DiPSCs can be grown in feeder-free conditions in mouse ESC medium. To differentiate 3DiPSCs into the three germ layers, cells were collected by TrypLE Express treatment and transferred to bacterial culture dishes in mouse ESC medium without LIF. After 3 days, embryoid bodies were plated on 0.2% gelatin-coated tissue culture plates and incubated for another 7 days. The differentiation potential was then assessed by immunostaining analysis of TUBB3, FOXA2 and MHC. Cell viability assay. Calcein AM and ethidium homodimer-I staining (Life Technologies) was carried out following the manufacturer’s instructions, and flow cytometry analysis was carried out with a CyAn ADP Analyzer (Beckman Coulter). 
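Downstream of such a LIVE/DEAD acquisition, the survival rate reduces to gating each event on the two dye channels. The sketch below uses synthetic intensities and illustrative gate values, not instrument-calibrated thresholds or the study's flow-cytometry data:

```python
# Sketch (synthetic events, illustrative gates): compute the live-cell
# fraction from a calcein-AM / ethidium homodimer-I LIVE/DEAD readout.
import numpy as np

rng = np.random.default_rng(3)

def live_fraction(calcein, ethd, calcein_gate=100.0, ethd_gate=100.0):
    """Fraction of events that are calcein-positive and EthD-negative."""
    live = (calcein > calcein_gate) & (ethd < ethd_gate)
    return live.mean()

# 1,000 mock events: 90% live (bright calcein, dim EthD), 10% dead.
n_live, n_dead = 900, 100
calcein = np.concatenate([rng.normal(500, 50, n_live),
                          rng.normal(20, 10, n_dead)])
ethd = np.concatenate([rng.normal(20, 10, n_live),
                       rng.normal(400, 50, n_dead)])

print(f"live fraction: {live_fraction(calcein, ethd):.1%}")
```

Comparing this fraction between 2D and 3D samples is the quantitative form of the "comparable survival rates" statement in the results (Supplementary Fig. 2a).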
Alamar blue assay and alkaline phosphatase staining. To monitor 3D cell growth in PEG matrices, mouse ESC proliferation was quantified by Alamar blue assay (Invitrogen) according to the manufacturer’s instructions. Briefly, in this assay, fresh medium containing 10% Alamar blue solution was added to the gel drops and incubated for 6 h at 37 °C. The supernatant was then collected and analysed with a Tecan Safire 2 plate reader. The assay was performed every other day over 9 days. Alkaline phosphatase staining was performed according to the manufacturer’s instructions using the Alkaline Phosphatase Detection Kit (Millipore). EdU labelling assay. To analyse cell proliferation, a 1 h pulse of EdU was applied to cells at different time points during 2D or 3D reprogramming. The EdU signal was then revealed using the Click-iT EdU Imaging Kit (Life Technologies) according to the manufacturer’s protocol. Fluorescence was then analysed by imaging or quantified by FACS analysis. Immunofluorescence staining. Encapsulated/plated cells were washed with phosphate buffered saline (PBS) for 30 min and fixed with 4% paraformaldehyde (PFA) for 30 min at room temperature. Fixed cells were washed three times with PBS and permeabilized in 0.2% Triton X-100 (Sigma-Aldrich) for 30 min at room temperature. Permeabilized cells were then blocked with 2% bovine serum albumin (BSA; Sigma-Aldrich) for 2 h (or 30 min for 2D cultures) at room temperature. Cells were stained overnight with primary antibody at 4 °C. After three PBS washes, cells were stained with the corresponding secondary antibody (1:500; Invitrogen) overnight at 4 °C in the dark (or 1 h at room temperature for 2D cultures). Cells were washed three times with PBS and incubated for 30 min in 4′,6-diamidino-2-phenylindole (DAPI; Invitrogen) for nuclei visualization. Confocal imaging was carried out using an LSM 700 (Zeiss). Images represent the z -stack projection of confocal sections. 
The following antibodies were used: rabbit anti-NANOG (Abcam ref. no. AB80892, 1:100), rabbit anti-hNANOG (Abcam ref. no. 21624, 1:200), rabbit anti-SOX2 (Life Technologies ref. no. 481400, 1:200), goat anti-OCT4 (Abcam ref. no. AB27985, 1:100), goat anti-hSOX17 (R&D ref. no. AF1924, 1:200), mouse anti-SSEA-4 (Hybridoma Bank ref. no. MC-813-70, 1:50), mouse anti-SMA (Abcam ref. no. 7817, 1:200), rabbit anti-E-cadherin (Cell Signaling, 1:200), mouse anti-TUBB3 (Covance ref. no. MMS-435P, 1:500), mouse anti-MHC (Hybridoma Bank ref. no. MF-20, 1:50), rabbit anti-FOXA2 (Millipore ref. no. AB4125, 1:200), rabbit anti-acetyl-Histone H3 (Millipore ref. no. 06-599, 1:100), and rabbit anti-H3K4 tri-methylation (Abcam ref. no. AB8580, 1:100). Flow cytometry. 3DiPSCs were collected from hydrogels by dissociation with TrypLE Express for 20 min at 37 °C. Flow cytometry analysis was carried out with a CyAn ADP Analyzer or Gallios (Beckman Coulter). Image analysis. For quantification of AcH3 and H3K4me3 fluorescence intensity, 63× images were processed using algorithms developed in ImageJ software. Briefly, the macro applies a threshold algorithm to the DAPI channel to create a mask (a binary image), detects the particles (using a specified size threshold) on this mask and measures the mean intensity of the particles (on the channel selected for measurement). E-cadherin+ colony and YAP1+ cell nucleus counting was performed on ten 63× images from three replicates for each condition. Western blot. For convenience, we reproduce below methodological details originally available in ref. 45 . Three days after 2D or 3D culture, 1 × 10 6 plated fibroblasts were lysed on ice in 50 mM Tris HCl, pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100 with 1× protease inhibitor cocktail (Roche) for 30 min. The protein content of each sample was determined with the Bio-Rad assay kit (Bio-Rad). About 50 μg of protein per lane was separated on a 4–20% gradient SDS–polyacrylamide gel (Bio-Rad). 
Proteins were transferred to nitrocellulose filters (Bio-Rad) in transfer buffer (25 mmol l −1 Trizma, 193 mmol l −1 glycine, 20% methanol; Sigma). Filters were soaked for 30 min in transfer buffer and were blocked for 2 h in TTBS (10 mmol l −1 Tris–HCl pH 8, 150 mmol l −1 NaCl, 0.1% Tween-20, 5% non-fat dry milk; Bio-Rad). Membranes were probed with primary antibodies in TTBS overnight at 4 °C. After washing 4 times for 5 min, immunoblots were incubated with HRP-conjugated secondary antibodies in TTBS for 2 h at 25 °C. Western blots were carried out using the ECL method according to the manufacturer’s protocol (Amersham Biosciences). The following primary antibodies were used: rabbit anti-acetyl-Histone H3 (Millipore ref. no. 06-599, 1:100), rabbit anti-H3K4 tri-methylation (Abcam ref. no. AB8580, 1:100), rabbit anti-Histone H3 (Millipore ref. no. 06-755, 1:1,000), mouse anti-beta actin (Sigma ref. no. A5441, 1:5,000). PCR with reverse transcription. Mouse ESCs were collected from hydrogels by dissociation with TrypLE Express for 20 min at 37 °C; RNA extraction was performed using TriPure Isolation Reagent (Roche) according to the manufacturer’s instructions. cDNA was synthesized from 1 μg RNA using the iScript Select cDNA Synthesis Kit (Bio-Rad), and PCR was carried out with the primer sets listed below, using JumpStart Taq DNA Polymerase with a UNO Thermal Cycler (VWR). Real-time quantitative RT–PCR was carried out with the primer sets listed below, using iQ SYBR Green Supermix (Bio-Rad) with the Applied Biosystems 7900HT System. The expression of genes of interest was normalized to that of glyceraldehyde 3-phosphate dehydrogenase ( Gapdh ) in all samples. The primers used for PCR amplification are listed in Supplementary Table 2 . Bisulphite sequencing. 
For the methylation analysis of Oct4 and Nanog promoters by bisulphite sequencing, genomic DNA (gDNA) was extracted using TriPure Isolation Reagent (Roche) and then treated with the EpiTect Bisulfite Kit (Qiagen) according to the manufacturer’s instructions. Converted gDNA was used as a template to amplify targets of interest. The resulting PCR products were cloned and sequenced. The primers used for PCR amplification are listed in Supplementary Table 2 . Microgroove substrates, Matrigel and collagen gels. Microgrooves were produced as previously described 15 . 3D Matrigel cultures were obtained by mixing Matrigel and cell suspension at a 1:1 ratio and then incubating the solution for 30 min at 37 °C. 3D collagen I gels were obtained by mixing cell suspension with 0.5% bovine dermis acid-solubilized type I collagen (Koken); 0.4% collagen I gels were then obtained by neutralizing the pH according to the manufacturer’s instructions. The number of cells and the 3D reprogramming protocol used for Matrigel and collagen I gels were the same as indicated for the PEG hydrogels. Teratoma formation assay. Immunodeficient NSG mice (NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ) were used for the teratoma formation assay. Five mice were injected subcutaneously with 1 × 10 6 cells (200 μl volume, 50% Matrigel) of mouse or human 3DiPSCs. Animals were observed twice a week for tumour growth. Tumour size was measured with a calliper and animals were euthanized after 6 weeks or after development of tumours larger than 1 cm 3 . Tumours were then dissected out and subjected to haematoxylin and eosin staining. Chimaeric mice generation. Chimaeric mice were generated by the EPFL Transgenic Core Facilities. For convenience, we reproduce below methodological details originally available in ref. 46 . Blastocysts were collected from the oviduct of superovulated CD-1 females mated with CD-1 males 48 h earlier by a flushing procedure with ESC medium. 
The zona pellucida was removed from the embryos by washing through 3 consecutive drops of Acid Tyrode solution (Sigma). Single cells were generated after treatment with trypsin. Between 5 and 15 3DiPSCs were injected into each embryo, and embryos were cultured overnight at 37 °C in 5% CO 2 in ESC medium containing KnockOut Serum Replacement. The following day, blastocysts were transferred into the uterus of pseudopregnant recipient CD-1 females mated with vasectomized CD-1 males 2.5 days earlier. HTS robotic mixing and dispensing. For convenience, we reproduce below methodological details originally available in ref. 19 . High-throughput combinatorial screening of 3D microenvironments was performed as described using a Hamilton Microlab StarPlus automatic liquid handling robot with a Nanopipettor head. All automated steps were programmed with MicroLab Vector Software version 4.1.1 (HAMILTON Bonaduz AG). In brief, 8 combinations of PEG precursor solutions (comprising 2 MMP sensitivities × 4 stiffnesses; MMP sensitivities were controlled by incorporating the two MMP substrates GPQG ↓ IWGQ and VPMS ↓ MRGG in the gel backbone) were mixed robotically and aliquoted into wells of a 384-well plate. ECM proteins and peptides were thawed on ice and dispensed (including a blank control) into the gel precursor-filled wells to produce hydrogel precursors with 64 unique combinations of mechanical properties (MP), MMP sensitivities/degradability (DG) and ECM components (EC). A 96-well plate was prepared with differentiation medium containing the soluble factor CHIR99021 (3 μM) and a blank control. Fibroblasts were trypsinized and resuspended at a concentration of 2 × 10 6 cells ml −1 and kept on ice. Simultaneously, frozen aliquots of FXIIIa were thawed and also kept on ice. Cells were then dispensed into the wells containing gel precursors, followed by dispensing and mixing of FXIIIa. The final volume of each gel was 4 μl. 
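The combinatorial layout described above (8 precursor combinations from 2 MMP sensitivities × 4 stiffnesses, 8 ECM/protein options including a blank, and a 2-arm soluble-factor axis, giving 128 conditions) can be enumerated programmatically. The ECM list below is illustrative; the exact screened set is given in Supplementary Table 1:

```python
# Sketch of the 128-condition combinatorial design; the ECM option list
# is an illustrative stand-in for the set in Supplementary Table 1.
from itertools import product

mmp = ["GPQGIWGQ", "VPMSMRGG"]                  # two MMP substrate backbones
stiffness = [300, 600, 900, 1200]               # Pa (G'), screened range
ec = ["blank", "E-cadherin", "Epcam", "laminin",
      "fibronectin", "vitronectin", "collagen IV", "RGD"]
sf = ["blank", "CHIR99021"]                     # soluble-factor arm

precursors = list(product(mmp, stiffness))      # 8 robotically mixed combos
conditions = list(product(precursors, ec, sf))  # 128 unique microenvironments
print(len(precursors), len(conditions))
```

Each tuple in `conditions` corresponds to one well formulation of the screen, dispensed in triplicate in the 384-well format.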
These cell-containing hydrogel mixtures were then immediately dispensed into a 384-well plate and incubated for 20 min, followed by timed, automated dispensing of medium containing CHIR or a blank control. High-throughput imaging. For convenience, we reproduce below methodological details originally available in ref. 19. Plates were fixed with 4% paraformaldehyde 8 days after induction of reprogramming and then stained with DAPI. Imaging was performed on a BD Pathway 435 automated imaging system (BD Biosciences). At every xy -position, that is, for every well, 6 images were captured across a z -stack height of 800 μm. For each well, these 6 images in each channel were collapsed into a single additive image. All images were processed using algorithms developed in CellProfiler v.9777 (Broad Institute). Collapsed image stacks for each well in the GFP and DAPI channels were input. In an initial analysis, DAPI images were thresholded and segmented to obtain total colony numbers per well. This DAPI-based segmentation was then used as a mask for the GFP images. The GFP intensity across all colonies was analysed and correlated with the corresponding images, and a threshold defining GFP-positive colonies was identified. In a second analysis, GFP images were thresholded with the stringent criterion identified in the first analysis to determine the number of GFP-positive colonies in each well. Morphometric parameters and GFP intensities were also quantified in this second analysis. HTS data processing and statistical analysis. For convenience, we reproduce below methodological details originally available in ref. 19. Matlab R2010b (MathWorks) was used to process and visually explore the data. Data for each well were averaged over experimental triplicates to obtain means, standard deviations and standard errors of the mean for each unique microenvironmental condition.
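The per-well aggregation just described (triplicate means with standard deviations and standard errors of the mean) amounts to the following; a minimal sketch with invented colony counts:

```python
import statistics

def summarize_triplicate(values):
    """Mean, sample standard deviation and s.e.m. for one condition's replicate wells."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)        # sample SD (n - 1 denominator)
    sem = sd / len(values) ** 0.5        # s.e.m. = SD / sqrt(n)
    return mean, sd, sem

# e.g. GFP-positive colony counts from three replicate wells of one condition
mean, sd, sem = summarize_triplicate([12, 15, 9])
print(mean, sd)  # 12 3.0
```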
The ‘number of GFP-positive colonies (count per well)’ output was selected from this data set and ranked. The top 20% of all conditions (26 conditions out of 128) were extracted from this ranked set and their corresponding microenvironmental conditions were identified. Each microenvironment in these top conditions was represented by a set of 4 numbers corresponding to the categories (MP, DG, EC, SF). For the identified 26 top conditions, counts for each condition within each category were added, and all counts were represented as a percentage within each category. To perform hierarchical clustering on the entire data set, all measurements in wells where no colonies were identified were set to 0. The data were then centred to have a mean of 0 and scaled to have a standard deviation of 1. Hierarchical clustering with a Euclidean distance metric and average linkage was performed to generate the hierarchical tree. Conditions identified for each cluster were then represented as for the ranked values above. To construct the generalized linear models (GLMs), data were first input into R v2.14.2; the models took into account all possible interaction terms specified for the analysis of the number of GFP-positive colonies. The stepAIC procedure was run to obtain optimal models based on the Akaike criterion. The GLM procedure in SAS v9.0 software (SAS Institute) was used to compute a least-squares mean value for each factor within every category, and differences of least-squares means ± standard errors with the control were tested for significance. The models considered the effects of MP, DG, EC and SF, as well as interactions determined to be significant. For all parametric tests, normality of the residuals and homogeneity of the variance were examined in QQ and Tukey–Anscombe plots, respectively. Human 3D iPSCs.
Human neonatal foreskin fibroblasts (1.5 × 10 5 ; ATCC) were infected twice at 24 h intervals with a lentiviral vector expressing human OCT4 , SOX2 , KLF4 and MYC (OSKM) driven by an SFFV promoter. Subsequently, infected cells were collected and either replated on mitomycin-inactivated mouse embryonic fibroblasts (2D standard condition) or encapsulated in PEG hydrogels (3D standard or HTS conditions). The following day, the medium was replaced by mTeSR-1 (Stemcell Technologies) and thereafter changed daily. After 6 weeks, PEG hydrogels with human 3DiPSCs were digested with TrypLE Express, and cells were replated on a 6-well plate coated with hESC-Matrigel (Corning). Human 3DiPSCs were grown in mTeSR-1 and passaged at a 1:5 ratio using ReLeSR (Stemcell Technologies). Multilineage differentiation was performed by plating human 3DiPSCs as single cells on Matrigel and inducing meso-endodermal or neural fate for 30 days with fibroblast medium or DMEM/F12/B27, respectively. Differentiation into the three germ layers was assessed by immunostaining for TUBB3 (neuroectoderm), SMA (mesoderm) and SOX17 (endoderm). Lentivirus preparation. HEK 293T cells were cultured in DMEM supplemented with 10% FBS, 10 U ml −1 penicillin, 10 μg ml −1 streptomycin, 2 mM glutamine, 1 mM sodium pyruvate and 100× non-essential amino acids. For the transfection, 7.5 × 10 6 HEK 293T cells were seeded on 15 cm dishes and incubated overnight. The following mix was prepared for the transfection: 22 μg of SFFV-OKSM, 7.9 μg of VSV-G and 14.6 μg of Gag-Pol-Rev-Tat in 1.125 ml of water. Subsequently, 125 μl of 2.5 M CaCl 2 and 1.25 ml of HBS were added to the mix while vortexing at full speed. The final mix was then added dropwise to the cells. After 48 h the supernatant was collected and ultracentrifuged at 50,000 g for 2 h at 20 °C. The viral pellet was resuspended in PBS and stored at −80 °C. Statistical analysis.
Data were analysed using the paired or unpaired Student’s t -test or, in the case of more than 2 experimental groups, by one-way analysis of variance (ANOVA) followed by post hoc Bonferroni’s multiple-comparison test, using GraphPad Prism 6.0 statistical software (GraphPad Software). The significance level was preset to p < 0.05. Data are expressed as means ± s.e.m., with n denoting the number of samples analysed. All experiments shown were performed at least three times.
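As a rough illustration of the testing scheme above (one-way ANOVA when there are more than two groups, with a Bonferroni-corrected threshold for the post hoc pairwise comparisons), here is a stdlib-only sketch with invented data; in practice Prism (or scipy) would also supply the P values:

```python
import statistics
from itertools import combinations

def anova_f(groups):
    """One-way ANOVA F statistic: between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def bonferroni_alpha(alpha, n_groups):
    """Per-comparison significance threshold for all pairwise post hoc tests."""
    n_comparisons = len(list(combinations(range(n_groups), 2)))
    return alpha / n_comparisons

groups = [[1.0, 1.2, 0.9], [1.8, 2.1, 1.9], [0.8, 1.1, 1.0]]   # invented data
f_stat = anova_f(groups)                         # compare to F(k-1, n-k) for a P value
alpha_adj = bonferroni_alpha(0.05, len(groups))  # 0.05 / 3 pairwise comparisons
```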
EPFL scientists have developed a new method that turns cells into stem cells by "squeezing" them. The method paves the way for large-scale production of stem cells for medical purposes. Stem cells are now at the cutting edge of modern medicine. They can transform into cells of different organs, offering new ways to treat a range of injuries and diseases, from Parkinson's to diabetes. But producing the right type of stem cells in a standardized manner is still a serious challenge. EPFL scientists have now developed a gel that boosts the ability of normal cells to revert into stem cells by simply "squeezing" them into shape. Published in Nature Materials, the new technique can also be easily scaled up to produce stem cells for various applications on an industrial scale. There are different types of stem cells, but the ones that are of particular medical interest are the so-called "induced pluripotent stem cells" or iPSCs. These are derived from mature, adult cells that have been genetically reprogrammed to behave like stem cells (which is why they are "induced"). iPSCs can then be regrown into a whole range of different cell types, e.g. liver, pancreas, lung or skin. There have been many attempts to design a standardized method for generating such stem cells. But even the most successful methods turn out not to be very effective, especially for use on a large scale. A major issue is that existing techniques use the two-dimensional environment of a petri dish or cell culture flask, whereas cells in the body exist in a three-dimensional world. The lab of Matthias Lutolf at EPFL has now developed a new method that may help to overcome these challenges. The approach uses a three-dimensional cell culture system. Normal cells are placed inside a gel that contains normal growth nutrients. "We try to simulate the three-dimensional environment of a living tissue and see how it would influence stem cell behavior," explains Lutolf.
"But soon we were surprised to see that cell reprogramming is also influenced by the surrounding microenvironment." The microenvironment in this case, is the gel. The researchers discovered that they could reprogram the cells faster and more efficiently than current methods by simply adjusting the composition - and hence the stiffness and density - of the surrounding gel. As a result, the gel exerts different forces on the cells, essentially "squeezing" them. As a new phenomenon, this is not entirely understood. However, the scientists propose that the three-dimensional environment is key to this process, generating mechanical signals that work together with genetic factors to make the cell easier to transform into a stem cell. "Each cell type may have a 'sweet spot' of physical and chemical factors that offer the most efficient transformation," says Lutolf. "Once you find it, it is a matter of resources and time to create stem cells on a larger scale." The greater impact of this discovery is possibly quantity. The technique can be applied to a large number of cells to produce stem cells on an industrial scale. Lutolf's lab is looking into this, but their main focus is to better understand the phenomenon, and to find the 'sweet spots' for other cell types.
10.1038/nmat4536
Medicine
New study finds that malaria vaccine protects adults for up to a year
Protection against malaria at 1 year and immune correlates following PfSPZ vaccination, Nature Medicine, DOI: 10.1038/nm.4110 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.4110
https://medicalxpress.com/news/2016-05-malaria-vaccine-adults-year.html
Abstract An attenuated Plasmodium falciparum (Pf) sporozoite (SPZ) vaccine, PfSPZ Vaccine, is highly protective against controlled human malaria infection (CHMI) 3 weeks after immunization, but the durability of protection is unknown. We assessed how vaccine dosage, regimen, and route of administration affected durable protection in malaria-naive adults. After four intravenous immunizations with 2.7 × 10 5 PfSPZ, 6/11 (55%) vaccinated subjects remained without parasitemia following CHMI 21 weeks after immunization. Five non-parasitemic subjects from this dosage group underwent repeat CHMI at 59 weeks, and none developed parasitemia. Although Pf-specific serum antibody levels correlated with protection up to 21–25 weeks after immunization, antibody levels waned substantially by 59 weeks. Pf-specific T cell responses also declined in blood by 59 weeks. To determine whether T cell responses in blood reflected responses in liver, we vaccinated nonhuman primates with PfSPZ Vaccine. Pf-specific interferon-γ-producing CD8 T cells were present at ∼ 100-fold higher frequencies in liver than in blood. Our findings suggest that PfSPZ Vaccine conferred durable protection to malaria through long-lived tissue-resident T cells and that administration of higher doses may further enhance protection. Main In 2015 there were an estimated 214 million clinical cases and 438,000 deaths due to malaria 1 , primarily caused by Pf in children in sub-Saharan Africa. A highly effective vaccine is urgently needed to prevent malaria in individuals and to facilitate elimination of malaria from defined geographic areas. To achieve these goals, we established an interim target of >85% sterile protection against Pf infection for >6 months 2 . There is currently no malaria subunit vaccine that approaches this level of protection. 
The most extensively studied candidate malaria vaccine, RTS,S (a subunit vaccine based on the Pf circumsporozoite protein (PfCSP)), confers sterilizing protection against controlled human malaria infection (CHMI) in about 22% of healthy malaria-naive adults 5 months after vaccination 3 . In a phase 3 field study, the efficacy of RTS,S against clinical malaria was 26% and 36% in young infants and children between the ages of 5 and 17 months, respectively, through 38–48 months of follow-up following a four-dose regimen on a 0-, 1-, 2-, and 20-month schedule 4 . Therefore, it is necessary to investigate alternative vaccination strategies that confer long-lived sterilizing protection 5 , 6 . Sustained sterilizing immunity against the pre-erythrocytic stages of Pf has been observed in humans immunized by whole-parasite approaches using mosquitoes for vaccination 7 , 8 . In a study of malaria-naive adults, 5/6 subjects exposed to >1000 irradiated mosquitoes carrying attenuated PfSPZ were protected when CHMI occurred 23–42 weeks after immunization 8 . To advance from using mosquitoes for inoculation of attenuated PfSPZ toward a clinical product, we previously reported that immunization by intravenous (i.v.) injection of radiation-attenuated, aseptic, purified, cryopreserved PfSPZ, a product called Sanaria PfSPZ Vaccine 9 (hereafter referred to as PfSPZ Vaccine), was well tolerated and immunogenic (the VRC 312 study) 10 , 11 . PfSPZ Vaccine induced a dose-dependent increase in PfSPZ-specific antibodies and frequencies of multifunctional T H 1 cytokine–producing CD4 T cells and γδ T cells in the blood. For vaccine recipients that underwent CHMI 3 weeks after final immunization, Pf parasitemia was observed in 3/9 and 0/6 subjects who received four or five doses of 1.35 × 10 5 PfSPZ, respectively, whereas parasitemia was observed in 5/6 unvaccinated controls, demonstrating that PfSPZ Vaccine confers high-level, short-term protection. 
The next critical milestones for PfSPZ Vaccine were to assess the durability of vaccine efficacy and to investigate the immune correlates and mechanisms of protection. Results PfSPZ Vaccine efficacy at 21 weeks To assess the durability of protection, subjects from the VRC 312 study were re-enrolled for repeat CHMI with the homologous Pf clone 3D7. Because vaccine efficacy (VE) is assessed at multiple time points in the same vaccinated subjects, we defined vaccine efficacy as first VE (VE at first CHMI), subgroup VE (VE among the subgroup of subjects who were not parasitemic after the first CHMI and returned for repeat CHMI), and cumulative VE (first VE × subgroup VE). VE is calculated as '1 − relative risk', where relative risk is the ratio of the infection rate among the vaccinated subjects divided by the infection rate among the controls. Thus, VE is always adjusted to account for those cases in which the infection rate in the controls is not 100%. Six subjects who had received four or five doses of 1.35 × 10 5 PfSPZ by i.v. injection and had not developed parasitemia following CHMI at 3 weeks 14 underwent repeat CHMI 21 weeks after the final vaccination. Four of six subjects developed parasitemia, as compared to 6/6 unvaccinated control subjects ( Supplementary Fig. 1a,b ), for a subgroup VE of 33% ( P = 0.23). For this dosage group, the first VE was 76% (ref. 11 ), and therefore the cumulative VE was 25%. Thus, 1.35 × 10 5 PfSPZ administered four or five times did not confer adequate protection at 21 weeks. Additionally, eight subjects in the VRC 312 study who were parasitemic at a prior CHMI (six immunized subjects and two unvaccinated controls) underwent repeat CHMI, and all of them developed parasitemia ( Supplementary Fig. 1c,d ). Thus, a single previous episode of Pf parasitemia followed by drug treatment did not confer protection to a subsequent CHMI. 
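The efficacy bookkeeping above is simple arithmetic; the sketch below reproduces the subgroup and cumulative VE for this dosage group, and shows that a one-sided Fisher's exact test on the 2 × 2 infection table matches the reported P = 0.23 (an assumption on our part; this excerpt does not restate which exact test was used). The helper names are ours:

```python
from math import comb

def vaccine_efficacy(inf_vax, n_vax, inf_ctrl, n_ctrl):
    """VE = 1 - relative risk = 1 - (attack rate in vaccinees / attack rate in controls)."""
    return 1.0 - (inf_vax / n_vax) / (inf_ctrl / n_ctrl)

def fisher_one_sided(a, b, c, d):
    """P(vaccinee infections <= a) for the 2x2 table [[a, b], [c, d]]
    (rows vaccinated/control, columns infected/uninfected), under the
    hypergeometric null of Fisher's exact test."""
    n, row1, row2, col1 = a + b + c + d, a + b, c + d, a + c
    lo = max(0, col1 - row2)  # smallest feasible vaccinee-infection count
    return sum(comb(row1, k) * comb(row2, col1 - k)
               for k in range(lo, a + 1)) / comb(n, col1)

# Repeat CHMI 21 weeks after immunization: 4/6 vaccinated vs 6/6 controls infected
subgroup_ve = vaccine_efficacy(4, 6, 6, 6)   # 1 - (4/6)/(6/6) = 33%
p_value = fisher_one_sided(4, 2, 6, 0)       # ~0.23, matching the reported P
cumulative_ve = 0.76 * subgroup_ve           # first VE (76%) x subgroup VE = ~25%
```

The same helpers reproduce later results in the text, e.g. 2/9 vaccinated versus 5/6 controls infected in group 4 gives first VE of 73% and P ≈ 0.035.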
Therefore, only vaccinated subjects who did not develop parasitemia following their first CHMI underwent repeat CHMI in the following studies. Study design We assessed in a new study (VRC 314) whether increasing the dosage of PfSPZ Vaccine from 1.35 × 10 5 to 2.7 × 10 5 PfSPZ per dose affected VE. With this higher dose, we tested three-dose (group 1) and four-dose (groups 4 and 5) regimens. A fourth group of subjects received 1.35 × 10 5 PfSPZ four times, followed by a fifth dose of 4.5 × 10 5 PfSPZ (group 3), to determine whether a higher final dose improved VE, as compared to that for subjects who received five doses of 1.35 × 10 5 PfSPZ ( Fig. 1a ). For the subjects in each of these groups, PfSPZ Vaccine was administered by rapid i.v. injection. Figure 1: Sterile protection following CHMI at 59 weeks after immunization with PfSPZ Vaccine. ( a ) Timing of immunizations and CHMIs. Vaccinated subjects underwent CHMI 3, 21–25, and/or 59 weeks after final immunization. Half-filled green triangle denotes 4.5 × 10 5 PfSPZ by i.v. administration. ( b – f ) Kaplan–Meier curves showing the percentage of volunteers who did not develop parasitemia after each of five separate CHMIs at 3 weeks ( b , c ), 21–25 weeks ( d , e ), or 59 weeks ( f ) after the final immunization. New unvaccinated control subjects were enrolled for each CHMI. Day 0 indicates the time of exposure to infected mosquitoes. Monitoring for parasitemia was done by PCR once a day. ( g ) Number of subjects undergoing each CHMI, and vaccine efficacy for each group after each CHMI. In a , g , the dagger (†) indicates that subjects in group 3 received four doses of 1.35 × 10 5 PfSPZ each and a fifth dose of 4.5 × 10 5 PfSPZ. Full size image The trial also assessed route of administration. Studies in nonhuman primates (NHPs) and humans show that i.v. administration of PfSPZ Vaccine is substantially more immunogenic and protective than subcutaneous (s.c.) or intradermal (i.d.) 
administration at a dose of 1.35 × 10 5 PfSPZ 10 , 11 . However, studies in rodents show that intramuscular (i.m.) administration of radiation-attenuated SPZs is more protective than s.c. or i.d. administration, although it is considerably less protective than i.v. administration 12 , 13 . Because there may be instances in which administration by i.v. injection is logistically complex, we compared the protective efficacy of administering 2.7 × 10 5 PfSPZ by i.v. injection versus 2.2 × 10 6 PfSPZ by the i.m. route (an 8.1-fold higher dose) with the same four-dose regimen (group 2) ( Fig. 1a ). For the primary efficacy analysis, the results are presented for the first CHMI for each vaccine group as compared to those for the controls. To account for multiple comparisons, P < 0.01 was taken as evidence of an effect, and P values between 0.01 and 0.05 were considered to be suggestive of an effect. This threshold for significance of the primary analysis was specified in the protocol to balance the potential for type 1 error with the conservative Bonferroni approach (Online Methods ). Secondary analyses compared the results from the different groups and described the results of repeated challenges among those subjects who remained parasite-free in each group. No formal multiple-comparison adjustment was used for secondary end points. Daily monitoring by PCR analysis, rather than thick blood smear, was used for diagnosis of parasitemia and treatment. In subjects of the VRC 312 cohort, PCR analysis allowed detection of parasitemia 1–2 d earlier than blood smear ( Supplementary Table 1 ), which enabled clinical monitoring criteria to be changed from overnight stays after day 7 to once-daily outpatient visits. Study population Of the 101 subjects who were enrolled, 57 were vaccine recipients, 32 were CHMI controls, and 12 were backup controls ( Supplementary Fig. 2 ). Baseline demographic characteristics are shown in Supplementary Table 2 .
55/57 (96%) subjects completed all scheduled vaccinations and, cumulatively, 224 vaccinations were administered. In total, 52/55 subjects (95%) who completed their vaccinations underwent at least one CHMI. For CHMIs occurring 3 weeks, 21–25 weeks, and 59 weeks after the final vaccination, there were 39, 31, and 5 participating vaccine recipients, respectively ( Supplementary Table 3 ). Adverse events Vaccinations were well tolerated. Of 57 vaccine recipients, 41 (72%) had no solicited adverse events at the injection site after any vaccination, 15 (26%) had mild symptoms, and one (2%) had moderate symptoms ( Supplementary Table 4 ). There were no solicited systemic symptoms for 32 (56%) vaccine recipients, mild ones for 19 (33%) recipients, and moderate ones for six (11%) recipients ( Supplementary Table 5 ). There were no serious adverse events attributed to vaccination. Alanine aminotransferase (ALT) levels measured 14 d after each vaccination were not elevated in any dosage group, as compared to those for the same subjects before starting the vaccinations ( Supplementary Fig. 3 ). Vaccine efficacy over 1 year VE at 3 weeks, by i.v. administration. 4/12 subjects in group 3 (four doses of 1.35 × 10 5 PfSPZ and a fifth dose of 4.5 × 10 5 PfSPZ) and 6/9 subjects in group 1 (three doses of 2.7 × 10 5 PfSPZ) developed parasitemia after CHMI, as compared to parasitemia in 7/8 control subjects ( Fig. 1b ). In group 4 (4 doses of 2.7 × 10 5 PfSPZ), 2/9 vaccinated subjects, as compared to 5/6 controls, developed parasitemia ( Fig. 1c ). First VE was estimated to be 62% for group 3 ( P = 0.025), 24% for group 1 ( P = 0.34), and 73% for group 4 ( P = 0.035). VE at 21–25 weeks, by i.v. administration. Only vaccine recipients who did not develop parasitemia after their first CHMI underwent repeat CHMI. 
Seven subjects from group 3 (four doses of 1.35 × 10 5 PfSPZ and a fifth dose of 4.5 × 10 5 PfSPZ) underwent repeat CHMI at 25 weeks after final vaccination, and 3/7, as compared to 6/6 controls, developed parasitemia. Subgroup VE was 57%, and cumulative VE was 35% ( Fig. 1d ). These results were similar to the results from the VRC 312 study for CHMI that was performed at 21 weeks ( Supplementary Fig. 1 ). Thus, increasing the final dose by 3.3-fold did not improve short- or long-term protection, as compared to that with the five-dose regimen of 1.35 × 10 5 PfSPZ 11 . Three subjects in group 1 (three doses of 2.7 × 10 5 PfSPZ) underwent repeat CHMI at 25 weeks, and 1/3 developed parasitemia ( Fig. 1d ); subgroup VE was estimated at 67%, and cumulative VE was 16%. Four subjects in group 4 (four doses of 2.7 × 10 5 PfSPZ) underwent repeat CHMI at 24 weeks. 1/4 vaccinated, and 6/6 control, subjects developed parasitemia ( Fig. 1e ), resulting in a subgroup VE of 75% and cumulative VE of 55%. A second group of subjects who received four doses of 2.7 × 10 5 PfSPZ (group 5) underwent their first CHMI at 21 weeks. Five of these 11 vaccinated subjects developed parasitemia, resulting in a first VE of 55% ( P = 0.037; Fig. 1e ). VE at 59 weeks, by i.v. administration. Five subjects who received four doses of 2.7 × 10 5 PfSPZ, each of whom had undergone a single prior CHMI and had not developed parasitemia, underwent a second CHMI 59 weeks after the final immunization. 0/5 vaccine recipients and 5/6 controls developed parasitemia ( P = 0.013; Fig. 1f ). Subgroup VE was 100%, and estimated cumulative VE was 55%, at 59 weeks ( Fig. 1g ). VE at 3 and 25 weeks, by i.m. administration. 5/8 subjects from group 2 (i.m. vaccination with 2.2 × 10 6 PfSPZ) developed parasitemia following CHMI 3 weeks after vaccination, resulting in a first VE of 29% ( Fig. 1b ). 
The three subjects who did not develop parasitemia at 3 weeks underwent repeat CHMI at 25 weeks, and all of them developed parasitemia ( Fig. 1d ). Thus, PfSPZ administered by i.m. injection, even at an 8.1-fold higher dose, was less efficient in inducing protection than by i.v. administration (8/8 parasitemic in group 2 through CHMI at 21–25 weeks versus 5/11 in group 5; P = 0.018). Immunogenicity Antibody responses. Antibodies induced by vaccination with attenuated SPZs can limit parasites from infecting hepatocytes in animals 14 , 15 , 16 . Therefore, antibodies to PfCSP and whole PfSPZ were assessed by enzyme-linked immunosorbent assays (ELISAs) and automated immunofluorescence assays (aIFAs), respectively, 2 weeks after the final vaccination. There were no differences in antibody levels among the groups that received the different i.v. regimens. However, subjects immunized by the i.m. route had lower antibody responses than did subjects immunized by i.v. injection ( Fig. 2a,b ). Figure 2: Pf-specific antibody and cellular immune responses 2 weeks after the final immunization. ( a ) Antibodies to PfCSP were measured by ELISA. ( b ) Antibodies to PfSPZ were measured by automated immunofluorescence assay (aIFA). ( c ) Pf-specific memory CD4 T cells secreting IFN-γ, IL-2, and/or TNF-α were measured by intracellular cytokine staining (ICS) before (pre) and after (post) vaccination. Results are the percentage of cytokine-producing cells after incubation with PfSPZ minus the percentage of cytokine-producing cells after incubation with vaccine diluent (medium with 1% human serum albumin). ( d ) Pf-specific memory CD8 T cells secreting IFN-γ, IL-2, and/or TNF-α by ICS after incubation with PfRBC or with uninfected RBCs as controls. Results calculated as in c . ( e ) Frequency of Vγ9 + Vδ2 + (hereafter called Vδ2 + ) γδ T cells among total lymphocytes. 
( f ) Fold change in the frequency of Vδ2 + T cells from before (pre) to after (post) vaccination, as a percentage of total lymphocytes. Data are geometric mean ± 95% confidence interval ( a , b , f ) or mean ± s.e.m. ( c – e ). Each dot represents one subject. In a , b : group 1, n = 10; group 2, n = 9; group 3, n = 12; group 4, n = 9; group 5 n = 11. In c – f : group 1, n = 10; group 2, n = 9; group 3, n = 12; group 4, n = 11; group 5, n = 12. For a , b , between-group differences were assessed by the Kruskal–Wallis test with Dunn's correction for multiple comparisons. For c – e , there were no differences between vaccine groups in the post-vaccination responses by the Kruskal–Wallis test. Differences from the pre-vaccination levels were assessed by the Wilcoxon signed-rank test ( c – e ) or one-sample Student's t -test ( f ). For c – f , P values were corrected by the Bonferroni method. * P < 0.05, ** P < 0.01, *** P < 0.001. Dagger (†) indicates to see Figure 1 for doses. Full size image Cellular responses. Cellular immunity is critical for protective efficacy by vaccination with live-attenuated SPZs in rodent and NHP models 10 , 17 , 18 , 19 . Although interferon (IFN)-γ-producing CD8 T cells are necessary and sufficient for protection in the majority of mouse and NHP studies 20 , NK, γδ, and CD4 T cells also influence protection 19 , 21 . Pf-specificity was determined by stimulating peripheral blood mononuclear cells (PBMCs) with PfSPZ Vaccine (for CD4 T cell responses) or with Pf-infected erythrocytes (PfRBC) (for CD8 T cell responses) ( Supplementary Figs. 4 and 5 and Supplementary Table 6 ). There is considerable overlap in the proteomes of PfSPZ and PfRBC. Although there was an increase in PfSPZ-specific CD4 T cell cytokine responses after the final immunization in subjects from groups 1, 2, 4, and 5, there were no differences between the vaccine groups (including the i.m. group) in the magnitude ( Fig. 
2c ) or quality of PfSPZ-specific CD4 cytokine responses ( Supplementary Fig. 6 ). The PfRBC-specific CD8 T cells in blood 2 weeks after the final immunization were no different from those observed before administration of the vaccine (background) in most vaccine groups ( Fig. 2d ). Finally, there was a vaccine-induced increase in the frequency of total γδ T cells, but no differences between the groups ( Fig. 2e,f ). The expansion was restricted to the Vγ9 + Vδ2 + family (hereafter referred to as Vδ2 + ), which comprises ∼ 75% of γδ T cells in blood ( Supplementary Fig. 7a–f ). A higher percentage of γδ T cells expressed the activation markers CD38, the cytotoxic molecule perforin, and the liver-homing chemokine receptor CXCR6 (ref. 22 ) after vaccination ( Supplementary Fig. 7g–l ). Of note, γδ T cells from PBMCs analyzed before vaccination expressed IFN-γ when stimulated ex vivo with live-attenuated PfSPZ ( Supplementary Fig. 7m ). Immune correlates To identify potential immune correlates of protection 23 , we assessed whether there were associations between the immune responses measured 2 weeks after the last immunization and the outcome after each CHMI (parasitemia versus no parasitemia). Antibody correlates. PfCSP- and PfSPZ-specific antibody levels correlated with outcome for CHMIs that were done 3 weeks and 21–25 weeks after the final immunization ( Fig. 3a,b ). For this analysis, we assumed that subjects who developed parasitemia after the 3-week CHMI would also develop parasitemia after the 21- to 25-week CHMI, on the basis of our earlier findings ( Supplementary Fig. 1 ). Figure 3: Correlation between immune responses and CHMI outcome. ( a ) Antibodies to PfCSP, as measured by ELISA, 2 weeks after final immunization in subjects who did (+) or did not (−) develop parasitemia at the 3-week (left) and 21- to 25-week (right) CHMIs. 
( b ) Antibodies to PfSPZ, as measured by aIFA (arbitrary fluorescence units (AFU) = 2 × 10 5 ), 2 weeks after final immunization in subjects who did (+) or did not (−) develop parasitemia at the 3-week (left) and 21- to 25-week (right) CHMIs. ( c ) Frequency of Vδ2 + T cell subset among total lymphocytes 2 weeks after final immunization in subjects who did (+) or did not (−) develop parasitemia at the 3-week (left) and 21- to 25-week (right) CHMIs. ( d ) Frequency of Vδ2 + T cells among total lymphocytes before immunization in subjects who did (+) or did not (−) develop parasitemia at the 3-week (left) and 21- to 25-week (right) CHMIs. In a – d , n = 21 (−) and n = 17 (+) for left-hand graphs; in a – c , n = 16 (−) and n = 30 (+) for right-hand graphs; in d , n = 16 (−) and n = 31 (+) for right-hand graph. For a – d , because we expected subjects who were parasitemic at 3 weeks to again be parasitemic at 21–25 weeks, the immune data from individuals who were parasitemic at 3 weeks were included in the analysis for 21- to 25-week CHMI. Statistical significance for all correlations was determined by stratified Wilcoxon test with vaccine regimen as covariate. Bar denotes geometric mean ( a , b ) or median ( c , d ). Each dot represents one subject. Dagger (†) indicates to see Figure 1 for doses. Full size image Cellular correlates. The percentage of Pf-specific CD4 and CD8 T cells in the blood that produced IFN-γ, IL-2 and/or TNF-α did not correlate with outcome at the 3-week or the 21- to 25-week CHMI ( Supplementary Fig. 8 ). However, the absolute frequency of unstimulated Vδ2 + T cells as a percentage of total lymphocytes correlated with outcome at the 21- to 25-week CHMI ( Fig. 3c ). Notably, the frequency of the Vδ2 + T cell subset measured before the first vaccination also correlated with outcome at the 3-week CHMI and the 21- to 25-week CHMI ( Fig. 3d ). 
Role of PfSPZ antibodies in protection The finding that PfCSP- and PfSPZ-specific antibody levels 2 weeks after the final immunization correlated with outcome of the 3-week and 21- to 25-week CHMIs suggested that antibodies were a biomarker of a successful vaccine response; however, antibodies could also have a functional role in mediating protection 14 , 15 , 16 . To investigate this we assessed the levels of antibodies at the times of the 21- to 25-week CHMIs and the 59-week CHMIs. At 59 weeks, PfCSP-specific antibody levels in the five subjects who did not develop parasitemia were 8-fold lower (PfCSP geometric mean (GM) = 1,640) than the responses among subjects who did not develop parasitemia at 3 weeks (PfCSP GM = 13,200; P = 0.0051; Fig. 4a ). Of note, the levels of antibodies to PfCSP in the five subjects at 59 weeks (PfCSP GM = 1,640) were no different than those in subjects immunized with the same regimen and who developed parasitemia following CHMI at 21–24 weeks (PfCSP GM = 1,860; P = 0.87; Fig. 4a ). Assessment of antibody responses to whole PfSPZ ( Fig. 4b ) also revealed a decline in Pf-specific antibody levels over time. Figure 4: Functional role of PfSPZ-specific antibodies. ( a ) Antibodies to PfCSP, as measured by ELISA, at the time of CHMI in subjects who received four doses of 2.7 × 10 5 PfSPZ/dose by i.v. injection and who subsequently did (+) or did not (−) develop parasitemia at the 3-week (left), the 21- to 24-week (middle), or the 59-week (right) CHMI. ( b ) Antibodies to PfSPZ (AFU = 2 × 10 5 ), as measured by aIFA, at the time of each CHMI in subjects who received four doses of 2.7 × 10 5 PfSPZ/dose by i.v. injection and who subsequently did (+) or did not (−) develop parasitemia at the 3-week (left), 21- to 24-week (middle), or 59-week (right) CHMI. ( c ) Effect of passive transfer of purified IgG on liver-stage burden, as assessed by whole animal imaging. The experiment was performed once. Data are mean ± s.e.m. 
y axis on the right denotes the raw luciferase signal. y axis on the left denotes the percentage of signal in mice that received the IgG from each of the five subjects (identified by the identification numbers on the x axis) at each of the two time points relative to the signal in the mice that received pre-vaccination IgG, which was set to 100%. ( d ) Correlation between PfCSP antibody abundance (as determined by ELISA) and liver-stage burden (as measured by luciferase expression) in the FRG-huHep mice. In a , b , n = 7 (−) and n = 2 (+) for left-hand graphs; n = 9 (−) and n = 6 (+) for middle graphs; n = 5 (−) for right-hand graphs. In c , n = 7 for pre-vaccine, n = 2 for PfCSP mAb, and n = 4 for each IgG sample, except 511 ( n = 3). In d , n = 10 for IgG samples. In a , b , bar denotes geometric mean, and comparison ( P values) between groups was assessed by the Mann–Whitney U -test. In d , Pearson correlation coefficient was used to determine the relationship between Pf liver-stage burden and PfCSP level. Full size image To assess the in vivo functional activity of Pf-specific antibodies, we used Fah −/− Rag2 −/− Il2rg −/− (FRG) mice (which are deficient in T cells, B cells, and NK cells) reconstituted with human hepatocytes (which we refer to as FRG-huHep mice), which are capable of supporting liver-stage development of Pf (ref. 24 ). Total IgG from the five vaccine recipients who did not develop parasitemia following CHMI at 59 weeks was purified from plasma and serum. The IgG was passively transferred to FRG-huHep mice, and each mouse was challenged by exposure to 50 mosquitoes infected with luciferase-expressing PfSPZ. Liver-stage burden was quantified by whole-animal imaging. The negative control consisted of pooled IgG obtained from the blood of the five subjects before vaccination.
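The normalization used for the passive-transfer readout above (each mouse's total-flux signal expressed as a percentage of the mean signal in mice given pre-vaccination IgG, which is set to 100%) can be sketched as follows; the function name and all flux values are hypothetical:

```python
# Sketch of the liver-stage burden normalization described above: raw total-flux
# values (photons/s) are expressed as a percentage of the mean signal in mice
# that received pre-vaccination (negative control) IgG.
# All numbers are hypothetical, for illustration only.

def percent_of_control(sample_flux, control_fluxes):
    """Express each raw flux as % of the negative-control mean (control = 100%)."""
    control_mean = sum(control_fluxes) / len(control_fluxes)
    return [100.0 * f / control_mean for f in sample_flux]

pre_vaccine_igg = [9.0e6, 1.1e7, 1.0e7, 1.0e7]   # hypothetical control mice
post_vaccine_igg = [1.2e6, 0.9e6, 1.5e6, 1.3e6]  # hypothetical treated mice

normalized = percent_of_control(post_vaccine_igg, pre_vaccine_igg)
reduction = 100.0 - (sum(normalized) / len(normalized))  # % reduction vs. control
```

Percent reduction relative to control, as quoted for the monoclonal antibody and purified IgG results, then falls out directly as 100 minus the normalized mean.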
Passive transfer of 150 μg of the PfCSP-specific monoclonal antibody 3C1, which was used as the positive control, reduced liver-stage burden by ∼ 90% as compared to that by transfer of the negative control. Purified IgG from the five vaccinated subjects taken 2–3 weeks after vaccination reduced liver-stage burden by 88% (median), and IgG taken at 59 weeks after the final immunization reduced liver-stage burden by 65% (median) ( Fig. 4c ). There was higher inhibitory activity in IgG taken at 2–3 weeks from 4/5 subjects as compared to that taken at 59 weeks, but the difference did not reach the level of statistical significance ( P > 0.05 by Wilcoxon signed-rank test). However, liver-stage burden did correlate inversely with anti-PfCSP levels ( Fig. 4d ). PfSPZ-specific T cell activation is dependent on the timing of CHMI To provide additional insight into the potential mechanisms of protection, we assessed how CHMI affected PfSPZ-specific T cell responses in vivo after each CHMI. For this analysis, we identified PfSPZ-specific T cells by the expression of IFN-γ, interleukin (IL)-2, and tumor necrosis factor (TNF)-α that was induced by ex vivo stimulation with PfSPZ antigens, and we simultaneously measured the expression of Ki-67 (a marker used to detect recent lymphocyte cell division 25 ; Fig. 5a ). One week after the subjects in the VRC 312 ( Fig. 5b ) and VRC 314 ( Fig. 5e ) clinical trials received their final immunizations, ∼ 20–30% of PfSPZ-specific CD4 T cells were Ki-67 + , demonstrating that vaccination induced T cell division. Ki-67 was only detected in PfSPZ-specific CD4 T cells and not in total memory CD4 T cells ( Fig. 5b–h ), providing evidence for PfSPZ specificity. 
One to two weeks following the 3-week CHMI, Ki-67 was detected in ∼ 40–60% of PfSPZ-specific CD4 T cells from subjects with parasitemia (both vaccinated and unvaccinated subjects), showing that malaria infection was associated with activation of a high proportion of PfSPZ-specific CD4 T cells ( Fig. 5c,d ). In contrast, there were low to undetectable numbers of Ki-67 + PfSPZ-specific CD4 T cells in PBMCs from the 25 subjects in the VRC 312 ( n = 12; Fig. 5c ) and VRC 314 ( n = 13; Fig. 5f ) studies who did not develop parasitemia after the 3-week CHMI. One to two weeks after the 21- to 25-week and 59-week CHMIs, ∼ 10–20% of PfSPZ-specific CD4 T cells from the subjects who did not develop parasitemia were Ki-67 + ( Fig. 5d,g,h ), as compared to the 0–1% of PfSPZ-specific CD4 T cells that were present on the day of the CHMI. These data suggest that there was differential antigen exposure in vivo when CHMI was done at 3 weeks versus 21–25 or 59 weeks among subjects who remained without parasitemia. The percentage of Pf-specific CD4, CD8, or γδ T cells in blood did not change after CHMI at any time in subjects who did not have parasitemia ( Supplementary Fig. 9a–f ), nor did PfCSP-specific antibody levels ( Supplementary Fig. 9g ). Figure 5: Activation of PfSPZ-specific CD4 T cells following CHMI. PBMCs from vaccinated subjects and unvaccinated controls in the VRC 312 and VRC 314 studies were isolated before and after the final vaccination and each CHMI. Cells were incubated with PfSPZ or vaccine diluent (1% HSA), and IFN-γ, IL-2, TNF-α, and Ki-67 expression were assessed by flow cytometry. ( a ) Representative flow cytometry plots showing gating for memory CD4 T cells at 0, 1, 2, and 4 weeks following CHMI in an unvaccinated control subject who developed parasitemia. PfSPZ-specific cells were identified by expression of IFN-γ following PfSPZ stimulation, and cells undergoing replication were identified by Ki-67 expression.
( b ) Percentage of either total memory CD4 T cells (purple dashed line) expressing Ki-67 or PfSPZ-specific CD4 T cells (blue solid line) expressing Ki-67 after final immunization in vaccine recipients who received 1.35 × 10 5 PfSPZ by i.v. injection in the VRC 312 study. ( c , d ) Frequency of cells (see key) expressing Ki-67 in vaccine recipients who did or did not develop parasitemia following CHMI at 3 ( c ) or 21 ( d ) weeks after the final vaccination. Unvaccinated controls are also shown. Subjects were from the VRC 312 study. ( e ) Analysis as in b for subjects from the VRC 314 study who were vaccinated by the i.v. route. ( f – h ) Analyses as in c , d for subjects from the VRC 314 study who were vaccinated by the i.v. route and for whom CHMIs were done 3 weeks ( f ), 21 to 25 weeks ( g ), and 59 weeks ( h ) after the final vaccination. For b – h , PfSPZ-specific cells were identified by any combination of IFN-γ, IL-2, or TNF-α expression. Throughout, data are mean ± s.e.m. The number of subjects in each group is indicated on the respective graph. Full size image Tissue distribution of Pf-specific T cells The low amount of Pf-specific antibody at 59 weeks for all five vaccinated subjects who were not parasitemic after CHMI ( Fig. 4a,b ) suggested that antibodies probably did not have a major role in mediating protection at this time point. Therefore, PfSPZ-specific T cell responses were assessed in blood throughout the course of vaccination and following each CHMI. Longitudinal cellular responses in humans. In PBMCs, cytokine-producing PfSPZ-specific CD4 T cell responses declined significantly over the course of 59 weeks in vaccinated subjects who remained without parasitemia ( P = 0.047; Fig. 6a ), and cytokine-producing PfRBC-specific CD8 T cell responses were induced by vaccination; however, they returned to pre-vaccine levels shortly after the final vaccination ( Fig. 6b ).
Thus, neither antibodies nor Pf-specific T cells were readily detected in the blood of the five vaccinated subjects who were not parasitemic after CHMI at 59 weeks. We hypothesized that the T cells resident in the liver might have a critical role in protection 10 , 26 . Figure 6: Circulating and liver-resident Pf-specific T cells in humans and NHPs. ( a , b ) Longitudinal assessment of PfSPZ-specific CD4 T cell responses ( a ) and PfRBC-specific CD8 T cell responses ( b ) in subjects who received four doses of 2.7 × 10 5 PfSPZ/dose by i.v. injection and who did not develop parasitemia at the 3-week, 21- to 24-week, and 59-week CHMIs. The numbers of subjects included at each time point (marked by the blue '×') are indicated on the graphs. Data are mean ± s.e.m. P values are for comparisons between immune responses at 2 and 59 weeks after the final immunization and were calculated by the Mann–Whitney U -test. ( c ) Schematic of NHP study design. ( d ) PfSPZ-specific CD4 (top) and PfRBC-specific CD8 (bottom) T cell responses in PBMCs of NHPs 2 weeks after the final vaccination for the subject groups outlined in c . ( e ) PfSPZ-specific CD4 (top) and PfRBC-specific CD8 (bottom) T cell responses in livers of NHPs 10 weeks after the final vaccination. In d , n = 16 for systemic, n = 8 for peripheral, n = 4 for naive; in e , n = 9 for systemic, n = 8 for peripheral, n = 6 for naive. In d , e , bar denotes median, and between-group comparisons were determined by the Mann–Whitney U -test. n.s., not significant ( P > 0.05). ( f ) Lineage of liver-resident lymphocytes in vaccinated NHPs expressing IFN-γ following stimulation with PfRBC. MAIT, mucosally associated invariant T cell. Full size image Tissue-resident T cell responses in NHPs. Because liver-resident cellular immunity could not be directly assessed in the human subjects, such responses were assessed in NHPs following immunization with PfSPZ ( Supplementary Fig. 10 and Supplementary Table 6 ).
Immunogenicity studies with PfSPZ Vaccine in NHPs previously provided immune data that guided successful translation to human studies 10 , 11 . We had previously shown that 1.35 × 10 5 PfSPZ administered by the s.c. route did not induce detectable T cell responses in the blood or liver of NHPs 10 , so we administered tenfold more PfSPZ in each of five doses that were administered by the i.m. or i.d. route and compared the T cell responses to those resulting from five i.v. doses of 1.35 × 10 5 PfSPZ and four i.v. doses of 2.7 × 10 5 PfSPZ ( Fig. 6c ). PfSPZ-specific CD4 T cell responses in PBMCs were modestly higher in NHPs vaccinated by the peripheral (i.m. or i.d.) routes than by the systemic (i.v.) route ( P = 0.046), and there were no differences in PfRBC-specific CD8 T cell responses in PBMCs ( Fig. 6d ). In contrast, i.v. administration induced twofold higher frequencies of PfRBC-specific CD8 T cells in the liver, as compared to those induced by the 6- to 10-fold higher total doses of PfSPZ that were administered by the i.m. or i.d. route ( Fig. 6e ). Notably, the frequency of PfRBC-specific CD8 T cell responses in the liver were ∼ 100-fold higher than in PBMCs. Moreover, whereas multiple populations of IFN-γ-producing lymphocytes—including NK cells, γδ T cells, and CD4 T cells—were identified in the livers of NHPs, the majority ( ∼ 60%) were CD8 T cells ( Fig. 6f and Supplementary Fig. 11 ). The composition and differentiation state (i.e., memory phenotype) of lymphocytes in NHP livers largely reflect that of unvaccinated human liver samples, underscoring the biological relevance of the NHP model ( Supplementary Fig. 11 ). Discussion This was the first clinical trial of a malaria vaccine in which the first CHMI in one of the vaccine groups was done more than 4 weeks after the final immunization, thus representing a shift beyond a search for short-term protection toward trials that focus on dose and regimen optimization for durable sterilizing protection. 
At 59 weeks after four doses of 2.7 × 10 5 PfSPZ each, the cumulative VE against CHMI was estimated to be 55%—the most durable efficacy against CHMI reported to date with an injectable malaria vaccine. In the vaccination regimens tested here, the efficacy of PfSPZ Vaccine depended on both the dose per vaccination and the number of vaccinations. The estimated VE against CHMI done 3 weeks after immunization with three doses of 2.7 × 10 5 PfSPZ was 24%, as compared to 73% with four doses of 2.7 × 10 5 PfSPZ. Similarly, four or five doses of 1.35 × 10 5 PfSPZ conferred an estimated 25% efficacy against CHMI at 21 weeks, as compared to 55% with four doses of 2.7 × 10 5 PfSPZ. On the basis of these data, we hypothesize that additional increases in the dosage of PfSPZ Vaccine will further increase the magnitude and durability of protective efficacy. Ongoing studies using 4.5 × 10 5 to 2.7 × 10 6 PfSPZ per dose are assessing this for homologous CHMI, heterologous CHMI, and natural exposure in all age groups. The route of administration influenced VE, further underscoring the importance of PfSPZ administration by the i.v. route in achieving protection with this vaccine. Completed studies in Mali, Tanzania, and Equatorial Guinea have provided the first data for the feasibility of direct venous inoculation (DVI) of PfSPZ Vaccine to adults in Africa (M. Sissoko (International Center for Excellence in Research, Mali), S. Healy (NIH), S. Shekalaghe & A. Olotu (both from Ifakara Health Institute, Tanzania), personal communication), and field studies in Africa have begun to determine the safety, efficacy, and feasibility of DVI administration in young children and infants. An unexpected immunological finding was that the pre-vaccination frequency of circulating Vδ2 + T cells correlated with outcome of CHMI. Vδ2 + T cells recognize intracellular lipid metabolites (such as hydroxyl-methyl-butenyl-pyrophosphate (HMBPP)) from Plasmodium 27 , 28 . 
It is possible that these cells have a role in the initial priming of adaptive immune responses through rapid Pf recognition, IFN-γ production ( Supplementary Fig. 7m ), and/or by serving as antigen-presenting cells 29 at the time of immunization, leading to enhanced CD8 T cell priming 30 . γδ T cells have been shown to provide help to dendritic cells to facilitate protective responses to blood-stage malaria 31 , and IFN-γ-secreting γδ T cells are a correlate of protection against blood-stage malaria 32 . It is possible Vδ2 + T cells also mediate direct effector functions in the liver. A high frequency of circulating γδ T cells express CXCR6, a chemokine receptor that facilitates surveillance of the liver sinusoids 22 , and produce IFN-γ and perforin ( Supplementary Fig. 7i–m ), two potent effector molecules shown to mediate killing of intracellular Pf. γδ T cells have been shown to contribute to protection against pre-erythrocytic malaria in mouse models 21 , and these T cells expand in humans following whole-SPZ immunization 33 . The correlation between γδ T cell frequency and CHMI outcome described herein suggests that PfSPZ Vaccine induces protective immunity not only through induction of conventional CD8 and CD4 T cell responses but also through induction of γδ T cells. Such broad-based T cell responses that are induced by live-attenuated vaccines may not occur after immunization with the most commonly used protein- or virus-based subunit vaccines, highlighting one of many important differences between these approaches. We previously showed a dose-dependent increase in Pf-specific antibodies and multifunctional cytokine-producing CD4 T cells with doses between 7.5 × 10 3 and 1.35 × 10 5 PfSPZ that were administered by i.v. injection, but we could not assess correlates of protection because of the high-level of protection at the 3-week CHMI 11 . The clear dose response in this prior study 11 probably reflects the wider range (18-fold) of doses given. 
In the present study, there was a smaller range of doses that were given by the i.v. route, which probably accounts for the narrower range of antibody and T cell responses. The correlates of protection described here are hypothesis-generating and require validation in large prospective studies. Moreover, correlates of protection may differ depending on the dose per vaccination, the number of vaccinations, the time of sampling, and history of malaria exposure. The data presented here are consistent with the following role for antibodies: at the time of early CHMIs (3 weeks), antibody responses were highest, and subjects may have reduced the numbers of PfSPZ that successfully invaded hepatocytes ( Fig. 4c,d ) 14 , 15 , 16 , allowing for rapid elimination of the remaining parasites in the liver by cellular responses 34 , 35 , thereby resulting in low Ki-67 expression by PfSPZ-specific CD4 T cells in the protected subjects. At the time of the later CHMIs (21–25 weeks and 59 weeks), antibody levels were considerably lower, resulting in more parasites reaching the liver after CHMI, thus leading to greater activation (based on Ki-67 expression) of PfSPZ-specific CD4 T cells during the clearance of liver-stage parasites ( Fig. 5d,g,h ). Circulating Pf-specific T cell responses were low in blood at 59 weeks in all of the subjects who were not parasitemic after CHMI ( Fig. 6a,b ). Thus, if T cells are contributing to protection, then it may be due to an unmeasured aspect of the response in blood or to differential responses in the liver. Indeed, tissue-resident Pf-specific CD8 T cell responses in the liver of PfSPZ-vaccinated NHPs were highest after i.v. administration despite using 6- to 10-fold higher doses with the i.m. or i.d. route. Infectivity studies in mice show that >90% of SPZs that are administered by the i.m. or i.d. route fail to reach the liver 12 , and the parasite antigens are limited to muscle- or skin-draining lymph nodes 36 . 
Intravenous administration, in contrast, results in the efficient delivery of attenuated PfSPZ to the liver, where they presumably mediate priming of T cells that are programmed to remain as non-recirculating, liver-resident cells 26 . For PfSPZ Vaccine, efficacy is critically dependent on the route of vaccination. The data from the NHP studies indicate this is likely because i.v. administration induces robust tissue-resident responses in the liver, whereas peripheral (i.m. or i.d.) administration results in T cell responses in blood but more limited responses in the liver. Thus, for vaccines seeking to induce protective T cells, vaccine development efforts may be misled by focusing solely on maximizing T cell responses in the blood without considering how the vaccination strategy affects T cell responses at the infection site. In summary, the immune data presented here support the conclusion that whereas antibodies may have some contribution to protection early after final immunization, tissue-resident CD8 T cells are probably necessary for durable sterile protection 10 , 17 , 18 , 19 , 20 , 26 , 37 , 38 . Methods VRC 312 study. VRC 312 ( ; NCT01441167 ) was a phase 1, open-label, dose-escalation trial with CHMI to assess the safety, immunogenicity, and protective efficacy of PfSPZ Vaccine 9 , 10 that was administered by intravenous (i.v.) injection. The VRC 312 clinical trial protocol, funding, study oversight, and data reporting have been previously described 11 . Subjects from the VRC 312 study were re-enrolled for repeat CHMI. PCR and thick blood smear (TBS) for detection of parasitemia were assessed in parallel. The criteria for initiation of treatment for malaria were: (i) a positive TBS, (ii) two consecutive positive PCRs, or (iii) one positive PCR with symptoms or signs consistent with malaria. Upon diagnosis of parasitemia, a 3-d course of atovaquone and proguanil (Malarone) was used for treatment. VRC 314 study. 
VRC 314 ( ; NCT02015091 ) was a multi-institution, phase 1, open-label, dose-escalation trial with CHMI, designed to assess the safety, immunogenicity, and protective efficacy of PfSPZ Vaccine that was administered by i.v. or intramuscular (i.m.) injection. Complete inclusion and exclusion criteria are available at . Briefly, malaria-naive, healthy US adults, 18–45 years of age, were screened for good health, including laboratory testing for hepatitis B and C, HIV, and for pregnancy testing for women of child-bearing potential. Baseline complete blood counts, creatinine, and alanine aminotransferase (ALT) were screened, and only volunteers with normal values were enrolled. Volunteers with significant cardiovascular risk were excluded (i.e., 5-year risk of >10%) 39 . Exclusion criteria included known history of malaria infection or vaccination, the presence of hemoglobin S by electrophoresis, and splenectomy. The Vaccine Research Center (VRC) Clinical Trials Core at the National Institutes of Health (NIH) Clinical Center (CC) developed the protocol. The University of Maryland, Baltimore (UMB), Center for Vaccine Development (CVD) assisted with protocol development and study design. VRC and UMB CVD performed clinical duties. Sanaria produced, characterized, and prepared the syringes of PfSPZ Vaccine. The Walter Reed Army Institute of Research raised and infected mosquitoes, and assisted with the conduct of the CHMIs. Study oversight and safety surveillance. Study Oversight and Safety Surveillance was approved by the Intramural Institutional Review Board (IRB) of the National Institute of Allergy and Infectious Diseases (NIAID). US Department of Health and Human Services guidelines for conducting clinical research were followed. Each step was reviewed and approved by a Safety Monitoring Committee and submitted to the Food and Drug Administration (FDA). The protocol was reviewed by the FDA and conducted under IND 14826. 
Oral and written informed consent was obtained from all subjects. Local injection-site and systemic reactogenicity parameters were recorded through 7 d after vaccination as solicited adverse events (AEs). Unsolicited AEs were recorded until day 28 post-CHMI. Serious AEs and new chronic medical conditions were recorded through 24 weeks following the last vaccination. AEs were graded using a Toxicity Grading Scale for Adults, adapted from FDA Guidance for Industry 2007 (ref. 40 ). Malaria infection was recorded as a separate study end point. Therefore, associated signs and symptoms were not included in unsolicited AE listings. Vaccine and vaccination. PfSPZ Vaccine, which is composed of aseptic, purified, cryopreserved, metabolically active, radiation-attenuated P. falciparum sporozoites, was manufactured as described 9 using Pf NF54 parasites and met all quality control release- and stability-assay specifications. Cryovials containing PfSPZ Vaccine were thawed and formulated using phosphate-buffered saline (PBS) and human serum albumin (HSA) as diluent. Vaccines were administered by rapid intravenous injection of 1-ml volume into an antecubital vein or by intramuscular injection in two divided doses of 0.25 ml in the deltoid muscle. CHMI. CHMI was achieved by the bites of five PfSPZ-infected Anopheles stephensi mosquitoes, which met standard infectivity criteria 41 , 42 , 43 . All challenges were performed with the 3D7 clone. Subjects were monitored from days 7–28 after CHMI until diagnosis and cure of parasitemia was documented. Blood draws for PCR were performed and analyzed daily as previously described 11 . Treatment with atovaquone and proguanil (Malarone) was initiated when two PCR results were positive for Pf parasitemia. 
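The treatment-initiation criteria listed for the VRC 312 study above (a positive TBS, two consecutive positive PCRs, or one positive PCR with signs or symptoms consistent with malaria) amount to a simple decision rule. A minimal sketch, with hypothetical input types:

```python
def initiate_treatment(tbs_positive, pcr_results, symptomatic):
    """Decision rule for starting antimalarial treatment, per the criteria above.

    tbs_positive: bool, thick blood smear result
    pcr_results: list of bools, daily PCR results in chronological order
    symptomatic: bool, signs/symptoms consistent with malaria
    """
    # criterion (i): a positive thick blood smear
    if tbs_positive:
        return True
    # criterion (ii): two consecutive positive PCRs
    if any(a and b for a, b in zip(pcr_results, pcr_results[1:])):
        return True
    # criterion (iii): one positive PCR with symptoms
    if symptomatic and any(pcr_results):
        return True
    return False
```

Note that a single positive PCR without symptoms does not trigger treatment under this rule, consistent with the criteria above.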
For the week 3 CHMI for groups 1–3, based on the large number of vaccinated subjects and controls, the subjects in each vaccine group were randomized equally across two consecutive CHMI days, and four unvaccinated controls underwent CHMI on each day (total of eight controls). Efficacy analysis. For the primary efficacy analysis, the results of the first CHMI for each vaccine group were compared to controls who underwent CHMI concurrently. To account for multiple comparisons, P < 0.01 was taken as evidence of an effect and P values between 0.01 and 0.05 were taken as suggestive of an effect. This threshold for significance of the primary analysis was specified in the protocol to partially address the potential for type-1-error-rate inflation. A strict Bonferroni adjustment would have been overly conservative, as the study design has several features, including the sequential and dependent nature of the groups and the use of new controls for each CHMI, that make it resemble a sequence of small trials rather than a traditional parallel multi-arm trial. A fixed threshold that did not depend on the final number of groups was preferable in this case, where results of some CHMIs were available before the next arm had been finalized. Secondary efficacy analyses compared the results from the different groups, as well as described the results of repeated challenges among those subjects who remained parasite free in each group. No formal multiple-comparison adjustment was used for secondary end points. For determining vaccine efficacy (VE), 'first VE' was the VE (VE = 1 − relative risk) at the first CHMI. In instances in which a group underwent two CHMIs, vaccine recipients who developed parasitemia during the first CHMI did not undergo repeat CHMI. This was based on data in Supplementary Figure 1 showing that vaccinated subjects who were parasitemic following CHMI and who were treated remained parasitemic following a second CHMI.
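The 'first VE' computation above reduces to VE = 1 − relative risk, with relative risk taken as the ratio of the attack rate in vaccinees to that in concurrent controls. A minimal sketch with hypothetical counts:

```python
def vaccine_efficacy(vaccinee_cases, vaccinee_n, control_cases, control_n):
    """VE = 1 - relative risk, where relative risk is the ratio of attack rates."""
    relative_risk = (vaccinee_cases / vaccinee_n) / (control_cases / control_n)
    return 1.0 - relative_risk

# Hypothetical example: 3 of 11 vaccinees and 5 of 5 controls parasitemic
ve = vaccine_efficacy(3, 11, 5, 5)  # ≈ 0.73, i.e. 73% efficacy
```

When all controls develop parasitemia, as here, the formula simplifies to one minus the vaccinee attack rate.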
Subgroup VE was VE among the subgroup of subjects who were not parasitemic at the prior CHMI. Cumulative VE is first VE multiplied by subgroup VE. PfCSP ELISA. For ELISA measurement of IgG against PfCSP, a recombinant Pf circumsporozoite protein (rPfCSPv2) was used. The rPfCSP (3D7 clone) is a 48-kDa protein, which begins at amino acid residue 50 of the native protein sequence. The first six amino acids of rPfCSP are S-L-G-E-N-D. The central repeat region of this rPfCSP is composed of 22 N-A-N-P and 4 N-V-D-P tandem repeat sequences. 96-well plates (Nunc MaxiSorp Immuno Plate) were coated overnight at 4 °C with 2.0 μg rPfCSP/ml in 50 μl per well in coating buffer (KPL). Plates were then washed three times with 1× imidazole-based wash solution containing 2 mM imidazole, 160 mM NaCl, 0.02% Tween-20, 0.5 mM EDTA and were blocked with 1% bovine serum albumin (BSA) blocking buffer (KPL) containing 1% non-fat dry milk for 1 h at 37 °C. Plates were washed three times, and serially diluted samples (in triplicates) were added and incubated at 37 °C for 1 h. After washing three times, peroxidase-labeled goat anti–human IgG (KPL) was added at a dilution of 0.1 μg/ml and incubated at 37 °C for 1 h. After washing three times, ABTS (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)-diammonium salt) peroxidase substrate was added for plate development, and the plates were incubated for 75 min at 22 °C. The plates were read with a Spectramax Plus384 microplate reader (Molecular Devices) at 405 nm. The data were collected using Softmax Pro GXP v5. Data were fit to a 4-parameter sigmoidal curve, and the reciprocal serum dilution at which the optical density was 1.0 (OD1.0) was calculated. To serve as assay controls, a negative control (pooled serum from non-immune individuals from a malaria-free area) and a positive control (serum from an individual with anti-PfCSP antibodies) were always included. 
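One common way to obtain the OD1.0 readout described above is to fit a four-parameter logistic (4PL) curve to the titration data and invert it at OD = 1.0. The sketch below uses a standard 4PL form with hypothetical parameter values; the study itself performed the fit in Softmax Pro GXP:

```python
# Four-parameter logistic (4PL) curve commonly used for ELISA titration data,
# and its analytic inverse, used to find the reciprocal serum dilution at which
# the fitted optical density equals 1.0. Parameter values used below are
# hypothetical, for illustration only.

def four_pl(x, bottom, top, ec50, hill):
    """OD predicted at reciprocal dilution x (OD falls as dilution increases)."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

def inverse_four_pl(od, bottom, top, ec50, hill):
    """Reciprocal dilution at which the fitted curve crosses a given OD."""
    return ec50 * ((top - bottom) / (od - bottom) - 1.0) ** (1.0 / hill)

# e.g. hypothetical fit: bottom=0.05, top=3.0, ec50=5000, hill=1.2
od1_titer = inverse_four_pl(1.0, 0.05, 3.0, 5000, 1.2)
```

Inverting the fitted curve rather than interpolating between measured dilutions gives a titer that uses all points on the curve.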
The results are reported as net OD1.0, which was the OD1.0 of the post-immunization serum minus the OD1.0 of the pre-immunization serum. Samples were considered positive if the difference between the post-immunization OD1.0 and the pre-immunization OD1.0 (net OD1.0) was >50 and the ratio of post-immunization OD1.0 to pre-immunization OD1.0 (ratio) was >2.5. Supplementary Table 7 lists all PfCSP antibody levels 2 weeks after final vaccination. PfSPZ aIFA. Purified PfSPZ (NF54 strain) from aseptic A. stephensi mosquitoes, which were produced by Sanaria, were resuspended in PBS (pH 7.4). 0.5 × 10 4 PfSPZ in 40 μl were added to each well of Greiner CELLSTAR clear-bottom black 96-well plates (Sigma-Aldrich). After addition of the suspension, plates were left at room temperature for 12−18 h for air-drying. 50 μl of sera diluted in PBS with 2% BSA were added to each well containing air-dried PfSPZ. Serum samples were added at twofold dilutions. Plates were incubated at 37 °C for 1 h. Plates were washed in PBS three times. Alexa-Fluor-488-conjugated goat anti–human IgG (Molecular Probes) was diluted to 1:250 in PBS with 2% BSA, and 40 μl was added to each well. The plates were then incubated for 1 h at 37 °C. Plates were washed three times with PBS. 100 μl PBS was added to each well. Samples were assessed by scanning the entire surface of each well using an Acumen eX3 laser-scanning imaging cytometer. The positive control was pooled human serum taken 2 weeks after the last immunization from 12 protected volunteers immunized four or five times with PfSPZ Vaccine in the VRC 312 clinical trial 11 . The data were plotted to fit a four-parameter sigmoidal curve. Sera from malaria-naive volunteers in the USA and Europe, including pre-immune sera, always register an AFU (arbitrary fluorescence units) value less than 2.0 × 10 5 , even at the highest concentration used in this assay (1:50 dilution).
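Returning to the ELISA readout: the positivity rule stated above (net OD1.0 > 50 together with a post/pre ratio > 2.5) encodes directly as a two-condition check:

```python
def elisa_positive(pre_od1, post_od1):
    """ELISA positivity rule described above.

    A sample is positive if net OD1.0 (post - pre) is >50 AND the
    post/pre ratio is >2.5. OD1.0 values are reciprocal serum dilutions.
    """
    net = post_od1 - pre_od1
    ratio = post_od1 / pre_od1 if pre_od1 > 0 else float("inf")
    return net > 50 and ratio > 2.5
```

Requiring both the absolute and the fold-change condition guards against calling low pre-existing reactivity positive (ratio high but net small) as well as small drifts on top of high baselines (net large but ratio near 1).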
For sera that do bind to PfSPZ, 2.0 × 10 5 AFU falls in the exponential portion of the sigmoidal curves. Therefore 2.0 × 10 5 was chosen as the threshold in the aIFA assay, and the level of anti-PfSPZ antibodies for each volunteer is reported as the reciprocal serum dilution at which fluorescence intensity was equal to 2.0 × 10 5 AFU. Supplementary Table 7 lists all PfSPZ antibody levels 2 weeks after final vaccination. T cell assays. Assessment of cellular immune responses using multi-parameter flow cytometry was done from PBMCs on cryopreserved samples at the completion of the study. After thawing, PBMCs were rested for 8 h in complete RPMI (RPMI-1640 medium containing 2 mM L -glutamine (GE Life Sciences), 10% vol./vol. heat-inactivated FCS (Atlanta Biologicals), 100 U/ml penicillin (Life Technologies), 100 μg/ml streptomycin (Life Technologies), 25 mM HEPES buffer (Life Technologies), 0.1% vol./vol. 2-mercaptoethanol (Sigma)) containing 25 U/ml Benzonase (EMD Biosciences), and plated in 200 μl of medium at 1.5 × 10 6 cells per well in a 96-well V-bottom plate and stimulated for 17 h with: (i) PfSPZ Vaccine diluent (1% human serum albumin; CSL Behring); (ii) 1.5 × 10 5 viable, irradiated, aseptic, purified, cryopreserved PfSPZ from a single production lot; (iii) 2 × 10 5 lysed, infected RBCs consisting of >90% parasitemic late-stage schizonts (PfRBC) from a single production lot; or (iv) a single lot of donor-matched uninfected erythrocytes (uRBCs). PfSPZ and PfRBC were titrated for optimal sensitivity and specificity of detection of Pf-specific responses. For the last 5 h of the stimulation, 10 μg/ml brefeldin A (BD Biosciences) was added to the culture. A positive control sample from a subject vaccinated with five doses of 1.35 × 10 5 PfSPZ by i.v. injection and a negative malaria-naive control were included for each day the subjects were analyzed to determine the reproducibility of antigen stimulation.
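Responses from paired stimulations like these (PfSPZ versus diluent, PfRBC versus uRBC) are ultimately reported after subtracting the matched-control frequency from the stimulated frequency. A minimal sketch; flooring negative values at zero is a common convention assumed here rather than stated by the study, and all percentages are hypothetical:

```python
def background_subtract(stim_pct, control_pct, floor_at_zero=True):
    """Antigen-specific frequency = stimulated minus matched-control frequency.

    Flooring negative differences at zero is a widely used convention and an
    assumption here; the study reports background-subtracted frequencies from
    identical gates on the same sample incubated with the control stimulation.
    """
    net = stim_pct - control_pct
    return max(net, 0.0) if floor_at_zero else net

# e.g. 0.45% IFN-γ+ CD4 T cells after PfSPZ stimulation vs. 0.05% with diluent
specific = background_subtract(0.45, 0.05)  # 0.40% PfSPZ-specific
```

Pairing each antigen with its matched control (diluent for PfSPZ, uninfected RBCs for PfRBC) ensures that the subtraction removes stimulation-independent cytokine production rather than a generic baseline.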
After stimulation, cells were stained as previously described 44 . The staining panels are shown in Supplementary Table 6 ; the gating tree is shown in Supplementary Figures 4 and 5 , and the antibody clones and manufacturers are shown in Supplementary Table 6 . Briefly, cells were surface-stained for the chemokine receptor CCR7 at 37 °C for 20 min. Dead cells were identified by Aqua Live/Dead dye (Invitrogen) as per the manufacturer's instructions. This was followed by a 15-min surface-staining at room temperature for the markers CD4, CD8, CD14, CD20, CD38, CD45RA, CD56, TCR Vα-7.2, CD161, TCR-γδ, TCR-Vδ1, TCR-Vδ2, TCR-Vγ9, or CXCR6. Cells were washed, fixed, and permeabilized using Cytofix/Cytoperm kit (BD Biosciences) and were stained intracellularly for CD3, IFN-γ, IL-2, TNF-α, perforin, or Ki-67. Cells were washed, fixed in 0.5% paraformaldehyde, and signals were acquired on a modified LSR II (BD Biosciences). All antigen-specific cytokine frequencies are reported after background subtraction of identical gates from the same sample incubated with the control antigen stimulation (HSA or uRBC). Human liver samples. Human liver samples were obtained under a clinical protocol unrelated to the PfSPZ vaccination protocol. The Regional Ethical Review Board in Stockholm, Karolinska Institutet, Stockholm, Sweden, approved the study. Oral and written informed consent was obtained from all subjects. Livers were obtained during partial hepatectomy from living donors undergoing therapeutic tumor excision where only tumor-free unaffected tissue was used for isolation of immune cells. Immune cells were isolated using a previously described protocol 45 . Functional assessment of vaccine-induced IgG. Total IgG serum antibodies were purified from serum or plasma by affinity chromatography using Pierce Protein G Plus Agarose (Thermo Scientific) according to manufacturer's instructions. 
Column-eluted IgG was concentrated and buffer-exchanged into phosphate-buffered saline (pH 7.4) using Amicon Ultra 30,000 MWCO centrifugal filters (EMD Millipore). Protein concentration of the purified IgG was determined by BCA assay (Thermo Scientific). Human hepatocyte donor-matched FRG-huHep mice (male and female, aged 5–9 months) were purchased from Yecuris, Inc. All animals were cared for in accordance with the NIH Guide for the Care and Use of Laboratory Animals 46 . The Institutional Animal Care and Use Committee of the Center for Infectious Disease Research, Seattle, WA approved all animal procedures. Animals were randomly allocated to experimental and control groups. Mice were intravenously injected with 8 mg/mouse of total IgG purified from vaccinated subjects or 150 μg of anti-PfCSP mouse monoclonal antibody (IgG3; clone 3C1; Rockefeller University) 16 h before challenge. For the mosquito bite challenge, Anopheles mosquitoes infected with P. falciparum expressing GFP–luciferase were generated as previously described 47 . Mosquito infection was quantified at day 10 after the blood meal, and mosquitoes were used only if infection prevalence was >50% with an average of >10 oocysts/midgut. All qualifying mosquitoes were then pooled and distributed into cages with ∼ 50 mosquitoes/mouse (up to 250 mosquitoes). Mice, in groups of up to five, were then anesthetized with isoflurane and placed on a mesh screen covering the container of mosquitoes. Mosquitoes were then allowed to feed for 10 min, with lifting of mice once every minute to encourage probing and injection of sporozoites rather than blood feeding. After 10 min, the mice were returned to normal activity. At day 6 post-infection (peak of liver burden), mice were imaged for liver-stage burden using bioluminescence and IVIS imaging as previously described 24 , 48 . 
Briefly, mice were intraperitoneally injected with 100 μl of RediJect D -Luciferin (PerkinElmer) and, after 5 min, were imaged with a 5-min exposure. Liver-stage burden was assessed by placing a region of interest (ROI) around the liver of each mouse and measuring total flux in photons/s. An identical ROI was placed over the thorax, which was used to subtract background signal. Liver-stage burden of all mice was normalized to the mean of the negative control group, which received pre-immune IgG. Sample size was based on prior passive transfer experiments and calculated using Prism (GraphPad) and JMP Design of Experiment functionality (SAS). Pearson correlation coefficient was used to determine the relationship between Pf liver-stage burden and PfCSP level. Investigators were blinded to treatment group during passive transfer of IgG, Pf infection, and measurement of luminescence. NHP vaccinations. Thirty healthy male and female rhesus macaques of Indian origin ( Macaca mulatta ) with a mean (s.d.) age and weight of 6.1 (1.5) years and 8.7 (3.1) kg, respectively, were singly housed in animal biosafety level 2 facilities and were monitored throughout the study for physical health, food consumption, body weight, and temperature. All animals were cared for in accordance with the NIH Guide for the Care and Use of Laboratory Animals 46 . The Institutional Animal Care and Use Committee of the Vaccine Research Center, NIH approved all animal procedures. Study groups were balanced with respect to age, weight, and gender (equal numbers of males and females in each group). Sample size was based on prior NHP immunogenicity studies and calculated using Prism (GraphPad) and JMP Design of Experiment functionality (SAS). PfSPZ Vaccine was formulated and administered as for human injection with the following exceptions. For i.v. groups, PfSPZ in 500 μl was administered by direct venous inoculation into the saphenous vein. For the i.m. 
group, PfSPZ was administered to the left and right deltoid, and each injection was 500 μl in volume. For the i.d. group, PfSPZ was administered in the skin over the left and right deltoid, and each injection was 100 μl in volume. Immunizations and blood sampling occurred with the animal under anesthesia (10 mg per kg body weight ketamine HCl). NHP T cell assays. Immune assays on PBMCs were batch-analyzed on cryopreserved samples at the completion of the study. PBMCs were isolated by density-gradient centrifugation from acid–citrate–dextrose-anticoagulated whole blood. Ten weeks after final immunization, animals were euthanized, and liver tissue was processed as previously described 10 . For immune analysis, samples were thawed, rested, stimulated, stained, and analyzed using the same reagents, stimulation antigens, and protocol as the intracellular cytokine staining assay used for human PBMC samples (described above). The staining panel is shown in Supplementary Table 6 ; the gating tree is shown in Supplementary Figure 10 ; and the antibody clones and manufacturers are shown in Supplementary Table 6 . Investigators were blinded to the treatment group during lymphocyte stimulations and flow cytometry gating. All antigen-specific cytokine frequencies are reported after background subtraction of identical gates from the same sample incubated with the control antigen stimulation (HSA or uRBC). Statistical analyses. Flow cytometry data were analyzed using FlowJo v9.8.5 (Tree Star). Statistical analyses were performed with Pestle v1.7 and SPICE v5.3 (M. Roederer) 49 , JMP 11 (SAS), Prism 6 (GraphPad), and R v3.2.2 with RStudio v0.99.483. For vaccine immunogenicity, comparisons between groups were performed using Kruskal–Wallis with Dunn's post-test correction for multiple comparisons. 
If no differences between vaccine groups were identified by Kruskal–Wallis, then differences from pre-vaccination were assessed by Wilcoxon matched-pairs signed-rank test with Bonferroni correction for multiple comparisons, as specified in the figure legends. Immune responses were assessed 2 weeks after the final immunization or pre-vaccination (as specified in the figure legends) and were compared to outcome at either 3-week CHMI or 21- to 25-week CHMI. Assessment of immune responses that correlate with outcome at CHMI (parasitemia or no parasitemia) was made using a stratified Wilcoxon test controlling for vaccine regimen as a covariate. Change history 18 May 2016 In the version of this article initially published online, the authors omitted a funding source, The Bill and Melinda Gates Foundation (Investment ID: 24922). The error has been corrected for the print, PDF and HTML versions of this article.
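The Bonferroni-corrected Wilcoxon matched-pairs comparison described in the statistical methods above can be sketched briefly. The data below are hypothetical pre- versus post-vaccination frequencies, and the helper simply multiplies the raw p-value by the number of comparisons (capped at 1), which is the standard Bonferroni adjustment; this is a sketch, not the authors' exact analysis script.

```python
from scipy.stats import wilcoxon

def bonferroni_wilcoxon(pre, post, n_comparisons):
    """Wilcoxon matched-pairs signed-rank test, Bonferroni-corrected by
    multiplying the raw p-value by the number of comparisons (capped at 1)."""
    _, p = wilcoxon(pre, post)
    return min(p * n_comparisons, 1.0)

# hypothetical pre- vs post-vaccination response frequencies, 8 subjects
pre  = [0.10, 0.12, 0.08, 0.15, 0.20, 0.05, 0.11, 0.09]
post = [0.31, 0.44, 0.27, 0.52, 0.61, 0.18, 0.39, 0.25]
p_adj = bonferroni_wilcoxon(pre, post, n_comparisons=3)
```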
Malaria infects hundreds of millions of people every year, and kills more than half a million, most of them under the age of 5 years. There is no vaccine. But now, a new study by researchers at the University of Maryland School of Medicine has found that an experimental malaria vaccine protected adults from infection for more than a year. The study, a Phase 1 trial, was published in the journal Nature Medicine. "These results are really important," said Kirsten E. Lyke, a researcher at the University of Maryland School of Medicine. "Malaria has such a devastating effect on children, especially in Africa. This vaccine has the potential to help travelers, military personnel and children in malaria-endemic areas." Known as PfSPZ Vaccine, the treatment was developed and produced by Sanaria Inc., of Rockville, Maryland, with support from the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health. Lyke and her colleagues, working with NIAID scientists, conducted a clinical evaluation of the vaccine, which involved exposing a small number of willing healthy adults to the malaria-causing parasite Plasmodium falciparum (P. falciparum) in a controlled setting. The parasite is transmitted to humans via the bite of infected mosquitoes. The PfSPZ Vaccine consists of live, but weakened, P. falciparum, specifically, the early developmental form of the parasite. Previous research had shown that the vaccine worked for three weeks after immunization. This study analyzed its longer-term effects. The trial enrolled 101 healthy adults aged 18 to 45 years, who had never had malaria. Of these, 59 participants received the vaccine, while 32 participants were not vaccinated. Vaccine recipients were divided into groups to assess several variables, including dose, number of immunizations, and route of administration. Participants were exposed to the bites of mosquitoes carrying the same P. falciparum strain from which the vaccine was derived. 
Scientists then took blood samples from participants to measure parasite levels for evidence of protection. IV administration appears to provide better protection than intramuscular injection, both in the short and long term. Overall, the study found that the vaccine provided protection for up to a year in more than half (55 percent) of subjects. In those people, it appeared to provide sterile protection, meaning the subjects not only didn't get malaria, but also could not further transmit malaria. Long-term, reliable protection is important for people who are vaccinated, but not actually exposed to malaria for months, such as travelers or military personnel. Durable protection is also important for mass vaccination campaigns aimed at interrupting transmission in places where the disease is widespread.
10.1038/nm.4110
Nano
Silver nanoparticles' protein 'corona' affects their toxicity
Teodora Miclăuş et al, Dynamic protein coronas revealed as a modulator of silver nanoparticle sulphidation in vitro, Nature Communications (2016). DOI: 10.1038/ncomms11770 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms11770
https://phys.org/news/2016-08-silver-nanoparticles-protein-corona-affects.html
Abstract Proteins adsorbing at nanoparticles have been proposed as critical toxicity mediators and are included in ongoing efforts to develop predictive tools for safety assessment. Strongly attached proteins can be isolated, identified and correlated to changes in nanoparticle state, cellular association or toxicity. Weakly attached, rapidly exchanging proteins are also present at nanoparticles, but are difficult to isolate and have hardly been examined. Here we study rapidly exchanging proteins and show for the first time that they have a strong modulatory effect on the biotransformation of silver nanoparticles. Released silver ions, known for their role in particle toxicity, are found to be trapped as silver sulphide nanocrystals within the protein corona at silver nanoparticles in serum-containing cell culture media. The strongly attached corona acts as a site for sulphidation, while the weakly attached proteins reduce nanocrystal formation in a serum-concentration-dependent manner. Sulphidation results in decreased toxicity of Ag NPs. Introduction The biological effects of engineered nanomaterials as drug delivery vehicles or as unintentionally released nanoparticles (NPs) are of strong current interest. Biomolecules—mainly proteins—adsorbing at NPs modify their surface properties and are proposed as important modulators of particle–cell interactions 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 . A pragmatic distinction has been made between the relatively easily studied, strongly attached proteins as long-lived, hard coronas and the weakly attached, rapidly exchanging proteins as soft coronas 9 , 10 , 11 , 12 , 13 . The former have received most attention, as they reside at the particles on timescales relevant for cellular binding and uptake 4 , 6 , 14 , whereas the role of the latter in modulating NP behaviour has yet to be established. 
Distinct profiles of molecules concentrated within the hard corona at particles in biological media have been observed for different surface coatings 15 , charges 16 , 17 , sizes 15 , 18 and shapes 19 . The concept of a biological identity imprinted within the protein corona, which determines NP–cellular interactions 1 , 2 , 3 , 4 , 5 , 6 , has been proposed 20 . Although the long-lived layer has been linked to particle aggregation 19 and cell association 6 , 14 , 21 , the correlation of protein composition to cellular uptake/toxicity is still relatively weak 4 , 22 , 23 . The involvement of the soft corona in physical and/or chemical transformations of particles, with potential implications for toxicity, is so far unstudied, despite it forming a dense second layer around the strongly attached biomolecules 24 . In addition to protein corona formation, ion release is central to the toxicity of silver NPs and is an important parameter studied in vitro 25 , 26 , 27 and in vivo 28 . Oxidation contributes to ion release through the formation of Ag 2 O on the particles 29 , 30 , which is then dissolved in aqueous media 31 , 32 , 33 . Oxidative dissolution is an important step in Ag 2 S formation from/at Ag NPs 34 . Silver NP sulphidation has been receiving increasing attention, as the resulting sulphide is insoluble in water, decreasing the availability of Ag + and impacting antibacterial 35 and toxicological 36 , 37 , 38 effects. After identification of silver sulphide in sewage sludge 39 , interest in studying Ag 2 S was focused on wastewater plants 40 , 41 and aquatic environments 42 . Although most toxicity experiments are conducted in vitro , much less is known about such transformations of Ag NPs under these conditions. Thiols (for example, cysteine) have been proven to bind Ag + in biological environments 43 , 44 . Tracking the oxidation state of intracellular silver showed an evolution from Ag 0 to oxygen-bound and sulphur-bound Ag ions 45 , 46 . 
Formation of Ag 2 S in alveolar cells was proposed to explain decreased toxicity of silver nanowires 47 , and their sulphidation in protein-free culture medium was recently studied 48 . A further step involves exploring chemical changes that occur in full culture media, in the presence of protein coronas, before cellular uptake. Here we demonstrate one clear role for the soft corona in modulating silver NP sulphidation in vitro , and highlight the interplay between strongly and weakly attached proteins for the chemical transformation of Ag NPs. We also suggest some potential implications for toxicity, without, however, establishing a clear direct connection between the soft corona and observed toxicological effects. We show for the first time a functional effect of rapidly exchanging proteins, which decreased the amount of nano-Ag 2 S formed at polyvinylpyrrolidone (PVP)-coated Ag NPs incubated in serum-supplemented cell culture media. We propose and study a mechanism for soft corona protein-assisted Ag + transport that explains reduced sulphide formation. Striking differences are observed and discussed when going from in vitro- to in vivo-relevant protein concentrations. As it is known that sulphidation decreases silver toxicity 36 , 37 , 38 , 47 , 49 , 50 , 51 , it is not surprising that under conditions where Ag NPs were partially or completely transformed into Ag 2 S in cell culture media, much lower toxicity to J774 macrophages and different cytokine secretion profiles are seen compared with pristine silver NPs. Results Protein coronas modulate nano-Ag 2 S formation at Ag NPs Upon incubation of PVP-coated, cubic or quasi-spherical Ag NPs in RPMI-1640 cell culture medium supplemented with fetal bovine serum (FBS), new NPs were observed to form close to the surface of the silver. Details regarding incubation are available in the Methods section, Particle incubation in cell culture media subsection. 
Figure 1a shows a typical transmission electron microscopy (TEM) image of nanocubes after 7 days in 1% serum, with the NPs forming a dispersed layer around the silver core (highlighted by arrows). X-ray elemental mapping ( Fig. 1b ) and energy-dispersive X-ray spectroscopy (EDS, Fig. 1c ) revealed the presence of sulphur. Co-localization of Ag and S matches the small NPs in the proximity of the silver surface ( Fig. 1b ). The diffraction line at 2.80 Å ( Fig. 1d ) corresponds to monoclinic Ag 2 S (ref. 52 ). Figure 1: Silver sulphide forms close to the surface of Ag NPs. TEM image with arrows highlighting nano-Ag 2 S ( a , scale bar 50 nm), X-ray elemental mapping of Ag (red), S (blue, with white rings marking the approximate contour of the Ag NPs) and overlaid Ag and S ( b ), EDS spectrum—with arrows pointing at the peaks corresponding to each element—( c ) and diffraction pattern—arrow pointing at the diffraction line corresponding to monoclinic Ag 2 S—( d ) of silver nanocubes after 7 days incubation in RPMI-1640 supplemented with 1% FBS and formation of Ag 2 S at the surface of the Ag NPs. Full size image When in contact with biological media, NPs become covered with biomolecules 1 , 2 , 3 , 4 . Hard and soft protein coronas around silver nanocubes have previously been quantified, and it has been shown that the polymer coating is replaced during the first hour in 1% serum 24 . We observe no sulphide within 1 h, before PVP replacement ( Supplementary Fig. 1 ). The later appearance of Ag 2 S close to the silver surface suggests its formation is related to the layers of adsorbed biomolecules. We hypothesize a mechanism where protein coronas and ion release govern the formation of Ag 2 S at silver NPs ( Fig. 2a ). We propose that released Ag + can get trapped in the long-lived protein corona where, if enough reduced sulphur and Ag + are available, Ag 2 S may form. 
In contrast, the rapidly exchanging soft corona proteins prevent sulphide formation by transporting Ag + away from the particle, thus decreasing local ion concentration. To test this hypothesis, one must account for both hard and soft corona proteins, as well as bulk biomolecules. Figure 2: Protein coronas modulate sulphide formation. Proposed mechanism of protein corona-modulated nano-Ag 2 S formation at Ag NPs, with hard corona proteins trapping Ag + released from the nanoparticle surface and soft corona proteins transporting said ions away from the sulphide-formation centres in the long-lived corona ( a ); TEM images of silver nanocubes after 24 h in RPMI-1640 cell culture medium supplemented with 1% FBS ( b ), followed by 6 days incubation in RPMI-1640 with 0% FBS (inset cartoon showing only hard corona around Ag NPs) ( c ), 1% FBS ( d ) or 10% FBS ( e ; common inset cartoon showing hard and soft coronas, as well as free bulk proteins around Ag NPs); TEM images of silver nanocubes after 7 days incubation in RPMI-1640 with 0.4 mg ml −1 BSA ( f ) or 4 mg ml −1 BSA ( g ) (common inset cartoon showing hard corona and free bulk proteins around Ag NPs); Ultraviolet–visible spectra of cubic ( h ) and quasi-spherical ( i ) Ag NPs after 24 h incubation in RPMI-1640 cell culture medium supplemented with 1, 10 or 50% FBS; TEM images of silver nanocubes after 24 h in RPMI-1640 supplemented with 1% FBS ( j ), 10% FBS ( k ) and 50% FBS ( l ). Scale bars are 100 nm ( b , j – l ) or 50 nm ( c – g ). Full size image Silver nanocubes ( Supplementary Fig. 2 ) were incubated in RPMI-1640 with 1% FBS for 24 h ( Fig. 2b ) to provide a stable hard corona ( Supplementary Discussion ). The particles were washed, removing unbound and loosely bound proteins while retaining the hard corona ( Fig. 2c , cartoon), and re-suspended in RPMI-1640 without serum for 6 days. 
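The trapping-versus-transport competition proposed in Fig. 2a can be caricatured with a first-order rate sketch. This is purely illustrative: the rate constants below are hypothetical and not fitted to any data; the sketch only encodes the qualitative idea that soft-corona transport, which grows with serum concentration, competes with hard-corona trapping for released Ag +.

```python
def trapped_fraction(k_trap, k_transport):
    """Fraction of released Ag+ ending up in the hard corona when
    first-order trapping and soft-corona transport compete."""
    return k_trap / (k_trap + k_transport)

# hypothetical rate constants; transport scales with serum concentration
no_serum   = trapped_fraction(1.0, 0.0)   # hard corona only (0% FBS)
low_serum  = trapped_fraction(1.0, 0.5)   # ~1% FBS
high_serum = trapped_fraction(1.0, 5.0)   # ~10% FBS
```

The ordering no_serum > low_serum > high_serum reproduces the qualitative trend the experiments test: more sulphide forms when the soft corona is absent.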
Similarly, particles were incubated for 7 days in 1 or 10% FBS to ensure the presence of hard and soft coronas, and bulk proteins ( Fig. 2d,e cartoon). The same behaviour was seen for quasi-spherical NPs ( Supplementary Fig. 3 ). Ag 2 S is observed under all incubation conditions where serum was present for at least the 24 h needed to fully form the hard corona 24 . Figure 2c–e shows typical TEM images of the different conditions: 6 days at 0, 1 and 10% FBS after 24 h in serum-containing media. EDS confirmed the presence of sulphur ( Supplementary Fig. 4 ). Although the total incubation time is the same (7 days) for all samples, the differences in Ag 2 S amounts are striking, with significantly more sulphide observed at the particle surfaces when soft corona and free bulk proteins are absent after the first day. Incubating Ag NPs in serum-containing media for the initial 24 h establishes a long-lived corona, stable after the transition to 0% FBS. It has previously been shown that the hard corona is similar after 24 h in 1 and 10% FBS for different silver nanocubes 24 , and in the present study, we observed the same protein profiles for NPs of various sizes ( Supplementary Fig. 5 ). Here there is a similar long-lived layer on all Ag NPs, as confirmed by mass spectrometry ( Supplementary Tables 1–3 ); what varies (0, 1 and 10% FBS) are the soft corona and the bulk protein concentrations. Nano-Ag 2 S formation at Ag NPs decreased with the presence of a soft corona and with increasing free protein concentration, with significantly more sulphide at 0% than 1% and 10% FBS. The variation does not appear linear: differences between 1 and 10% FBS are not as striking as between 0 and 1%. Furthermore, although in the presence of serum the particles are dispersed even after 7 days ( Supplementary Fig. 6 ), at 0% FBS the nano-Ag 2 S forms bridges between Ag NPs, which result in particle agglomeration ( Fig. 2c ). 
Both soft corona and bulk proteins could bind Ag + ; distinguishing between the two categories requires the absence of one of them. To test the role of soft coronas, bovine serum albumin (BSA), at concentrations equivalent to 1, 10 or 50% FBS, was used to form a hard corona around silver nanocubes ( Supplementary Fig. 7 ). When only BSA is present in the system, the protein–protein interactions needed to form soft coronas do not occur ( Supplementary Fig. 8 and Supplementary Methods ); incubation in albumin provides only hard coronas and bulk proteins ( Fig. 2f,g , cartoon). Figure 2f,g shows slightly more nano-Ag 2 S formation when BSA concentration is increased from the equivalent of 1% FBS to that of 10% FBS; the opposite phenomenon is observed for serum ( Fig. 2d,e ). Full sets of TEM images are available ( Supplementary Figs 9 and 10 ); similar results were obtained when using lysozyme instead of BSA ( Supplementary Figs 11 and 12 ). The observed increase in Ag 2 S content at higher albumin/lysozyme concentrations may be due to a thicker, faster-formed hard corona, allowing for trapping and sulphidation of more Ag + . Although sulphide appears for incubation in both FBS and BSA/lysozyme, increasing serum concentration—unlike increasing albumin/lysozyme concentration—decreases Ag 2 S formation. This is attributed to the lack of a soft corona when a single type of protein exists in the system 11 —as shown in Supplementary Fig. 8 and Supplementary Methods —as hard coronas and free bulk proteins are present in both cases. To further study the soft corona influence, experiments were performed at serum concentrations from 1 to 50%, over 24 h. Ultraviolet–visible spectra were collected and plasmon peak shifts were observed. Owing to localized surface plasmon resonances, variations in peak position indicate refractive index changes around particles. Red shifts of 2–4 nm upon binding of proteins to Ag NPs have been seen previously 24 . 
Here, we observe changes of up to 45 nm, indicating the presence of a material with high refractive index—like Ag 2 S—close to the Ag NPs. Comparison to TEM images ( Fig. 2j,l and Supplementary Figs 13 and 14 ) shows that the more sulphide is present at the surface, the larger the red shift in the plasmon peak. Increased protein concentration and, therefore, soft corona protein content 24 , results in a visible decrease of nano-Ag 2 S formation not just at prolonged incubation, but also after 24 h. Analysis of multiple TEM images revealed almost no sulphide at the NP surface in 50% serum. Furthermore, TEM and ultraviolet–visible data indicate a lower sulphide content after 24 h in 10% compared with 1% FBS. Figure 2h,i shows spectra for cubic and quasi-spherical Ag NPs after 24 h in 1, 10 or 50% serum. Decreasing FBS concentration and, implicitly, the amount of soft corona 24 , results in the formation of more Ag 2 S at Ag NPs, as suggested by the plasmon peak shifting towards higher wavelengths when going from 50% to 10% and 1% serum. NP dissolution ( Supplementary Fig. 15 ) results in reshaping and resizing on a similar timescale for these nanocubes, with concomitant changes of the optical spectra. This leads to the decreased intensity of the signal around 350 nm, where a more prominent peak is characteristic of larger silver nanocubes, with sharper edges 53 . Reduction in particle diameter, coupled with shape changes from cubes to sphere-like, also explains the blue-shift around 435 nm ( Fig. 2h ) after incubation in 50% serum. No reshaping is observed for the quasi-spherical NPs, while a slight diameter decrease may occur. It has been shown that shifts in plasmon peak position, coupled with finite-difference time-domain simulations of plasmonic behaviour, may be used to quantify refractive index changes caused by protein adsorption at Ag NPs 24 . 
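The peak-shift readout used above can be sketched as a simple maximum search over the ultraviolet–visible spectrum. The synthetic Gaussian spectra below are illustrative stand-ins (peak positions chosen near the ~435 nm dipole resonance mentioned in the text), not the measured data:

```python
import numpy as np

def plasmon_peak(wavelengths, absorbance):
    """Wavelength of the plasmon maximum in a UV-vis spectrum."""
    return wavelengths[np.argmax(absorbance)]

def red_shift(wavelengths, spectrum_before, spectrum_after):
    """Red shift (nm) of the plasmon peak between two incubation states;
    shifts of tens of nm suggest a high-refractive-index phase such as
    Ag2S near the surface, versus ~2-4 nm for protein binding alone."""
    return (plasmon_peak(wavelengths, spectrum_after)
            - plasmon_peak(wavelengths, spectrum_before))

# hypothetical spectra: Gaussian peaks at 435 nm (pristine) and 470 nm
wl = np.arange(300, 600, 1.0)
before = np.exp(-((wl - 435.0) / 30.0) ** 2)
after = np.exp(-((wl - 470.0) / 35.0) ** 2)
shift = red_shift(wl, before, after)
```

In practice a fitted peak position would be more robust to noise than a raw argmax, but the principle is the same.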
Via a similar approach ( Supplementary Figs 16 and 17 , Supplementary Table 4 and Supplementary Methods ) we estimate that, at the Ag NP concentration used here (10 μg ml −1 ), 15–40% of the silver is transformed into sulphide. Ion release from Ag NPs is necessary for nano-Ag 2 S formation Sulphidation of silver NPs involves the presence of ionic silver; therefore, variations in Ag + content may influence the formation of Ag 2 S. To test this parameter, AgNO 3 —with Ag + representing 10% by weight of the particulate silver—was added to the NP suspension at the beginning of 1 or 7 days incubation in RPMI-1640 with 1% FBS ( Supplementary Figs 18–21 for 10% serum). Ultraviolet–visible spectra of quasi-spherical particles with or without extra ions were collected ( Fig. 3a ); a less pronounced red-shift was seen when free Ag + were added, especially at short incubation, suggesting a slower NP dissolution in the presence of extra ions. Figure 3: Silver nanoparticle dissolution is involved in nano-Ag 2 S formation. Ultraviolet–visible full spectra of quasi-spherical Ag NPs ( a ) and quadrupole peak detail of nanocubes ( b ) incubated (1 day: blue or 7 days: pink) in RPMI-1640 cell culture medium supplemented with 1% FBS, with or without added extra 10% (by mass) Ag + ions from AgNO 3 ; EDS spectra of the supernatant obtained after centrifugation of Ag NPs incubated for 7 days in RPMI-1640 with 1% FBS, before ( c ) and after ( d ) spiking with 5 nm PVP-coated Ag NPs, with dotted red line highlighting the presence of a silver signal only in the spiked sample. Full size image Nanocubes have several characteristic plasmon peaks, among them a quadrupole (≈350 nm) and a dipole (≈435 nm; Supplementary Fig. 18 ). The quadrupole indicates the degree to which an NP is cubic, as well as its size 53 , and, as such, it is used to track the synthesis of silver nanocubes 54 ( Supplementary Fig. 22 ). 
Dissolution reduces particle size and blunts cube edges, resulting in flattening of the quadrupole peak ( Fig. 3b ). After 7 days in 1% FBS, the signal disappears almost completely. At 24 h, both samples exhibit a pronounced quadrupole peak, with a visibly sharper signal when 10% free Ag + were added at the beginning of the incubation. The results suggest Ag NPs in samples with added ions undergo slower and, perhaps, less dissolution, observable in resizing and reshaping of the particles. This behaviour could be explained by the existence of a plateau ion concentration in the bulk, as previously observed 32 , 45 . Adding free ions may decrease the amount of Ag + required from NP dissolution for achieving this equilibrium concentration, but further investigations would be necessary to elucidate the mechanism. Inspection of multiple TEM images suggests there is no difference in the amount of Ag 2 S at silver NPs with or without added 10% Ag + at 7 days, but some decrease in sulphide incidence occurs at 24 h ( Supplementary Fig. 19 ). This indicates the formation of nano-Ag 2 S requires the release of Ag + from NPs and not just the existence of free ions in the bulk. The observation is strengthened by control experiments with silica particles in RPMI-1640 with FBS and free silver ions; no nano-Ag 2 S was detected after 7 days ( Supplementary Fig. 23 ). Atomic absorption spectroscopy ( Supplementary Fig. 15 ) confirmed the presence of silver in the bulk, strengthening the soft corona ion transport mechanism; it did not, however, indicate the chemical form of the silver. Although the timescale of soft corona exchange is on the order of seconds and minutes and nano-Ag 2 S formation occurs over hours, we verified that at prolonged incubation the rapidly exchanging proteins do not transport Ag 2 S nanocrystals away from the Ag NP surface. 
We measured EDS spectra and mapped supernatants collected after 7 days incubation in 1% FBS, before and after spiking in PVP-coated Ag NPs ( Fig. 3c,d and Supplementary Fig. 24 ) of similar size (5 nm) and amount to that of nano-Ag 2 S. If Ag 2 S nanocrystals were transported to the bulk, a peak would appear in the EDS spectrum around 3 keV. Absence of this signal in the un-spiked supernatant indicates nano-Ag 2 S is not transported by the soft corona. Sulphur sources and Ag/S ratio influence nano-Ag 2 S formation Sulphur-containing gases can contribute to the formation of Ag 2 S at Ag NPs exposed to air, but this is a slow process, extending over 24 weeks 55 . In the liquid phase, RPMI-1640 provides several sulphur sources ( Supplementary Tables 5 and 6 ), with L -cysteine and L -methionine accounting for most of the reduced S; furthermore, many serum proteins contain cysteine residues. To pinpoint the sulphur responsible for Ag 2 S formation in our case, we incubated Ag NPs in phosphate-buffered saline (PBS) supplemented with either 1% FBS or L -cysteine and L -methionine at the same concentrations as in RPMI-1640. After 7 days, no sulphide was seen in PBS or in buffer with serum ( Fig. 4a,b ), but nano-Ag 2 S was present in the amino-acid-supplemented buffer ( Fig. 4c and Supplementary Fig. 25 ). EDS spectra ( Fig. 4d ) confirmed the observations from TEM images. These experiments show that reduced sulphur from small molecules, not from proteins, is what forms Ag 2 S. Figure 4: Sulphur sources and the Ag:S ratio influence Ag 2 S formation. 
TEM images of Ag NPs after 7 days incubation in PBS ( a ), PBS supplemented with 1% FBS ( b ) and PBS supplemented with L -cysteine and L -methionine at the same concentrations of amino acids as those found in RPMI-1640 ( c ) and corresponding EDS spectra ( d ); TEM images and corresponding EDS spectra (insets) of Ag NPs after 7 days incubation in RPMI-1640 supplemented with 1% FBS, with initial silver concentrations of 2 μg ml −1 ( e ), 10 μg ml −1 (f) and 100 μg ml −1 ( g ), with elemental mapping images provided in Supplementary Fig. 26 . Scale bars are 100 nm ( a – c ) or 50 nm ( e – g ). Full size image We further tested the effect of Ag/S ratios by varying the amount of nanocube stock suspension added to a given volume of serum-supplemented RPMI-1640. After 7 days, increasing the initial silver concentration from 2 to 10 and then to 100 μg ml −1 ( Fig. 4e–g and Supplementary Fig. 26 ) drastically decreases Ag 2 S formation by limiting the L -cysteine and L -methionine available per Ag + . In vivo cysteine concentration is more than double that in RPMI, making more S available for Ag 2 S formation 56 , 57 . At the lowest Ag/S ratio, the particles are almost entirely transformed into Ag 2 S, forming ‘pockets’ of sulphide that conserve the shape of the initial particle, with the metal core still visible in some cases ( Fig. 4e ). These observations suggest sulphidation is confined to the protein hard corona, in agreement with EDS showing the absence of nano-Ag 2 S from the suspension supernatant ( Fig. 3c,d ). Protein corona-mediated sulphidation impacts cell toxicity Previous research has already shown that trapping Ag + in the form of insoluble Ag 2 S decreases the toxicity of silver 50 , 51 and Ag NPs 35 , 49 , mostly in the case of aquatic environments 36 , 37 , 58 and in soil 38 , which are settings with low protein contents, but higher concentrations of other components that are not prevalent under in vitro cell study conditions. 
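The Ag:S stoichiometry argument above can be illustrated with back-of-the-envelope arithmetic. The reduced-sulphur pool assumed below (~0.4 mM from L-cysteine plus L-methionine equivalents) is a hypothetical illustrative value, not a figure taken from the paper; the point is only that raising the silver loading from 2 to 100 μg ml−1 moves the molar Ag:S ratio from sulphur excess to silver excess:

```python
# Back-of-the-envelope Ag:S molar ratios for the three silver loadings.
AG_MOLAR_MASS = 107.87  # g/mol; note that μg/ml of Ag equals mg/l
S_EQUIV_MM = 0.4        # mM reduced sulphur available (assumed value)

def ag_to_s_ratio(ag_ug_per_ml):
    """Molar Ag:S ratio at a given silver mass concentration."""
    ag_mm = ag_ug_per_ml / AG_MOLAR_MASS  # mg/l divided by g/mol gives mM
    return ag_mm / S_EQUIV_MM

ratios = {c: ag_to_s_ratio(c) for c in (2, 10, 100)}
```

Under this assumption, sulphur is in large excess at 2 μg ml−1 (near-complete sulphidation) but becomes limiting at 100 μg ml−1, consistent with the trend in Fig. 4e–g.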
Although the effects under in vitro conditions have not been studied to the same extent, some information is also available on the decreased toxicity of sulphidated Ag NPs to cultured cells 47 . Our results confirm these published findings and extend the observations to a cell line in which the consequences of sulphidation have not previously been investigated. We indirectly tested the effects of corona-mediated sulphidation on the toxicity of Ag NPs to J774 macrophages by exposing cells to partially and completely sulphidated NPs (obtained by pre-incubation in 10 and 1% FBS respectively, Fig. 5a,b ) to mimic in vitro conditions, as well as NPs with no sulphide, to mimic in vivo settings ( Supplementary Fig. 27 ). Detailed experimental procedures are available in the Methods section and in Supplementary Methods . Ag NPs disrupted mitochondrial activity, as measured using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, and caused cell death at concentrations above 25 μg ml −1 , whereas partially and completely transformed particles showed no effect at concentrations as high as 100 μg ml −1 ( Fig. 5c ). Silver ions were lethal to the cells even at the lowest concentration (2 μg ml −1 ), but pristine, partially and completely sulphidated Ag NPs were suitable for analysis of potential effects at sub-lethal doses. Of all the molecules measured in the cell supernatants, responses above the detection limit were seen for interleukin-1β, interleukin-6, interleukin-18, tumour necrosis factor alpha (TNFα) and macrophage inflammatory protein 2 (MIP-2; Supplementary Fig. 28 and Fig. 5 ). Granulocyte–macrophage colony-stimulating factor (GM-CSF) was also measured at the highest particle doses in the Ag NPs samples, but for most systems GM-CSF values were below the detection limit and are therefore not included here. The most pronounced impact is observed on TNFα and MIP-2.
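MTT readouts of the kind described above are conventionally expressed as blank-corrected absorbance relative to untreated controls. A minimal sketch follows; all optical-density values are hypothetical, and only the replicate count (n = 6) mirrors the study design.

```python
# Sketch of MTT viability normalization against untreated controls.
# Optical densities are invented for illustration, not the paper's data.
from statistics import mean, stdev

def viability_percent(sample_od, control_od, blank_od):
    """Blank-corrected absorbance of each replicate as % of the control mean."""
    ctrl = mean(control_od) - blank_od
    return [100.0 * (od - blank_od) / ctrl for od in sample_od]

blank = 0.05
control = [0.82, 0.79, 0.85, 0.80, 0.83, 0.81]   # untreated wells (hypothetical)
treated = [0.45, 0.42, 0.48, 0.44, 0.46, 0.43]   # particle-exposed wells (hypothetical)

v = viability_percent(treated, control, blank)
print(f"viability = {mean(v):.1f} +/- {stdev(v):.1f} %")  # mean +/- s.d., as in Fig. 5c
```

Reporting mean ± standard deviation per condition matches how the error bars in Fig. 5c are described.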
TNFα is a cytokine released by macrophages at early inflammatory stages together with interleukin 6 (ref. 59 ), which we also observed ( Supplementary Fig. 28 ). A threefold increase in its production is caused by pristine and partially sulphidated Ag NPs, but not by completely transformed ones ( Fig. 5d ). This result indicates upregulation of TNFα production by Ag requires the presence of nano-particulate or ionic silver, as previously seen 60 , 61 . A similar trend ( Fig. 5e ) was observed for MIP-2, a cytokine involved in cell recruitment to the site of infection following the initiation of the inflammatory response 62 . Together, these results confirm that sulphidation, which we show is mediated by the dynamic protein coronas and is more likely to occur at the serum concentrations used in vitro than in vivo , decreases the toxicity of silver NPs both at lethal and sub-lethal doses. However, although we show, for the first time, a link between both soft and hard protein coronas and Ag NPs sulphidation and confirm previous findings connecting sulphidation to decreased toxicity, we cannot, at this time, provide a direct link between the soft corona and the diminished toxicological effects of sulphidated Ag NPs. Furthermore, technical limitations in conducting in vitro studies at different FBS concentrations do not allow us to account for the Ag + transported by soft corona proteins in the bulk of the pre-incubation system. As silver ions are known for their toxicity and our experimental setting does not permit investigation of their interaction with cells when bound to rapidly exchanging proteins, we cannot make a general claim about the overall toxicological impact that the soft corona-mediated biotransformation of Ag NPs has under various in vitro or in vivo scenarios. Figure 5: Corona-mediated sulphidation of Ag NPs impacts particle toxicity. 
TEM images of partially sulphidated Ag NPs after pre-incubation in RPMI-1640 with 10% FBS ( a ) and completely sulphidated Ag NPs after pre-incubation in RPMI-1640 with 1% FBS ( b ); scale bars are 50 nm; viability of J774 murine macrophages (as measured with MTT assays) after 24 h exposure to various concentrations (2, 5, 10, 15, 25, 50 and 100 μg ml −1 ) of Ag + ions (black diamonds), pristine Ag NPs (red triangles), partially sulphidated Ag NPs (blue squares) and completely sulphidated Ag NPs (orange circles); error bars are provided as standard deviation; statistically significant differences (two-tailed t -test, with all data sets showing normal distribution and similar variance values) as compared with the control are marked with ** P <0.005 or *** P <0.0005 ( n =6), with all the P values available in Supplementary Table 7 ( c ); release profiles of TNFα ( d ) and MIP-2 ( e ) after 24 h exposure of J774 macrophages to various concentrations (2, 5, 10, 15, 25, 50 and 100 μg ml −1 ) of pristine (red), partially sulphidated (blue) and completely sulphidated (orange) Ag NPs; TNFα and MIP-2 concentrations not shown for exposure to pristine Ag NPs exceeded the upper limit of quantification (see calibration curves in Supplementary Fig. 29 ). Full size image Discussion We have shown for the first time a situation where the weakly attached protein layer forming the soft corona has a visible and measurable effect on the transformation of Ag NPs in a complex biological environment. We demonstrated the presence of crystalline nano-Ag 2 S at the surface of silver NPs upon incubation in cell culture medium. Reduced sulphur, an organic layer at the Ag NPs and release of ions from the metal core are necessary for sulphide formation. Protein concentration greatly impacted the amount of nano-Ag 2 S observed at the particles through a soft corona protein-assisted mechanism of Ag + removal.
In the absence of a rapidly exchanging corona, the decrease in sulphide formation upon increased bulk protein concentration was no longer observed, in agreement with the proposed mechanism. Greater free silver ion concentrations introduced in the media did not increase sulphide formation. We have studied well-defined Ag nanocubes and quasi-spherical particles formed via PVP stabilization. Although we expect the proposed mechanism to apply to Ag NPs having other sizes, shapes and coatings, the particulars of the experimental situation—such as incubation time, media, Ag 2 S formation rate and crystal size, protein concentration—are likely to influence the specific outcomes. The low water solubility of Ag 2 S decreases Ag + ions bioavailability and, although this phenomenon has been studied extensively in ecotoxicology settings 32 , 37 , 39 , 40 , 41 , 42 , little is known about the effect of NP sulphidation in cell culture media 48 . Ions are an important component in silver toxicity to cells; transformations impacting their availability should be taken into account when analysing the stability of Ag NPs in protein-containing media relevant for in vitro experiments and interpreting subsequent toxicity studies. We show that even partial sulphidation of Ag NPs prevents cell death, whereas complete sulphidation also prevents the increased pro-inflammatory cytokines production seen with pristine particles. The observed decrease in Ag 2 S formation at increased protein contents may raise a question regarding using in vitro results to predict in vivo scenarios, as the bulk biomolecule concentration in the latter settings is much higher than in the former. However, for this to become an issue, a direct link between the dynamic protein coronas and the toxic effects of Ag NPs should first be established in future research. 
Further studies into the bioavailability and effects of soft corona-bound Ag + from particle dissolution in protein-containing media are also necessary to obtain a clear and full picture of how biomolecules-modulated biotransformations may change NPs’ toxicity effects in vitro and in vivo . Methods Particle synthesis and characterization Silver NPs, both cubic and quasi-spherical, were prepared using the polyol method 63 , where particle shape is controlled by the ratio between the capping agent (PVP) and the silver precursor 64 . Briefly, a specific amount of silver trifluoroacetate dissolved in anhydrous ethylene glycol is reduced by ethylene glycol at high temperature (145–155 °C) in the presence of PVP (Mw≈55,000 Da). The ratio of PVP to CF 3 COOAg dictates the outcome regarding particle shape. For the cube synthesis, trace amounts of HCl and NaSH·xH 2 O are added to the reaction mixture, as described elsewhere 63 . The particles were purified by repeated washing with acetone, ethanol and MilliQ water 24 . The synthesis was tracked by collecting ultraviolet–visible spectra (Shimadzu UV-visible-NIR spectrophotometer, UV-3600) of the reaction mixture (a few drops in MilliQ water) at various times ( Supplementary Fig. 22 ). The resulting silver particles were quantified using flame atomic absorption spectroscopy (F-AAS, PerkinElmer Analyst 300 atomic absorption spectrometer mounted with a silver lumina hollow cathode lamp), after digestion in 65% HNO 3 . NP size was obtained using the SPIP scanning probe image software (Image Metrology) to analyse TEM images of at least 500 particles. Particle incubation in cell culture media RPMI-1640 (Invitrogen) is a medium widely used for cell cultures, including for the J774 murine macrophages employed here. We incubated Ag NPs (cubic or quasi-spherical) in RPMI-1640 with or without added supplements of heat-inactivated FBS (HyClone; 1–50% by volume), for 1 or 7 days. 
As we are studying a model system from the perspective of a mechanism of chemical transformation, our choice of serum concentrations and incubation times is only partially based on real toxicology settings. As such, 10% FBS is the typical serum supplement for in vitro toxicity studies 65 , 66 , 67 , with 24 h being the standard duration for acute toxicity experiments 26 , 45 , 65 , 66 , 67 , 68 , 69 . However, in vitro Ag NPs exposure studies have been performed at lower serum concentrations 45 , 68 , 69 , down to 1% (ref. 25 ), which is the concentration selected in our work. Furthermore, 1% FBS provides a better model system for studying and understanding mechanisms, as the low protein concentration slows the protein-exchange-dependent process at the NP surface, with fewer biomolecules available to participate. Although FBS contents higher than 10% are not common for in vitro studies, they are closer to an in vivo situation, so 50% serum was selected to better investigate the behaviour of Ag NPs in a realistic setting. However, we perform a mechanistic study on model systems; the incubation times were therefore chosen to have a fully formed hard corona on Ag NPs 24 (24 h) and to observe an extensive chemical transformation of the NPs (7 days), even though, as we show, the onset of sulphidation occurs much earlier and clear changes are visible at 24 h. To study the influence of free ions on nano-Ag 2 S formation, AgNO 3 (Sigma-Aldrich) aqueous solutions (10% Ag + by weight of the total Ag NP mass) were added to the media at the beginning of the incubation process for some of the samples. Sample analysis post-incubation Ultraviolet–visible spectra were collected against a MilliQ water background, over a wavelength domain of 300–800 nm. The peak positions of the resulting spectra were assessed using the Savitzky–Golay method to derive the spectral plots in the SpecManager software (ACD Labs).
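Peak positions were assessed with the Savitzky–Golay method in SpecManager (ACD Labs). As an illustration only, the classic 5-point quadratic Savitzky–Golay smoother can be written in a few lines of stdlib Python; this is not the ACD Labs implementation, and the trace below is invented.

```python
# Illustrative 5-point quadratic Savitzky-Golay smoother (classic convolution
# coefficients -3, 12, 17, 12, -3 over 35). Endpoints are left unsmoothed
# for brevity.

def savgol5(y):
    c = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i + j - 2] for j, cj in enumerate(c)) / 35.0
    return out

def peak_index(y):
    """Index of the maximum of the smoothed trace (a crude peak-position estimate)."""
    s = savgol5(y)
    return max(range(len(s)), key=s.__getitem__)

# Hypothetical noisy absorbance trace with its maximum near index 5
trace = [0.1, 0.2, 0.4, 0.7, 0.9, 1.0, 0.92, 0.7, 0.4, 0.2, 0.1]
print(peak_index(trace))  # -> 5
```

A useful property of these coefficients is that they reproduce any polynomial up to cubic exactly, so smoothing does not shift a well-resolved plasmon peak.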
After incubation, free proteins were eliminated by centrifugation of particles in a Heraeus Multifuge X1R table top centrifuge (Thermo Scientific) and removal of the supernatant. The silver ions in the supernatants were quantified through F-AAS. Ag NPs were washed with MilliQ water three times through re-dispersion of the pellets and TEM/STEM samples were prepared by dropping 5–10 μl of suspension on a formvar/carbon-supported copper grid (Ted Pella) and leaving it to dry overnight. Imaging and diffraction studies were performed using a Philips CM20 transmission electron microscope operated at 200 kV. EDS and X-ray elemental mapping experiments were performed on a Talos 200X STEM instrument. EDS and elemental mapping of supernatants Silver nanocubes were incubated (10 μg ml −1 ) for 7 days in RPMI-1640 cell culture medium supplemented with 1% FBS. The resulting sample was centrifuged to pellet the particles. A drop of the supernatant was deposited on a formvar/carbon-supported TEM copper grid and left to dry overnight. A 150 μl aliquot of the remaining supernatant was separated from the pelleted particles, moved to a clean Eppendorf tube and spiked with 40 μl of PVP-coated NanoXact 5 nm Ag NPs (nanoComposix, stock concentration 20 μg ml −1 ). The NanoXact Ag NPs used for spiking are comparable in size to the observed Ag 2 S nanocrystals and the amount added provides a mass of silver comparable to that in the Ag 2 S nanocrystals formed at the Ag NPs. A drop of the spiked supernatant was deposited on a TEM grid and left to dry overnight. EDS spectra and elemental maps from the resulting samples were collected using a Talos 200X STEM instrument. Cell line The J774A.1 murine macrophage cell line (referred to as J774) was obtained from Cell Line Service (#400220). The cells were cultured in RPMI-1640 cell culture medium supplemented with penicillin (100 U ml −1 ), streptomycin (100 μg ml −1 ), GlutaMAX (1 ×) and 10% heat-inactivated FBS.
The cells were kept at 37 °C in a humidified atmosphere with 5% CO 2 . The cell culture medium and all the supplements were purchased from Gibco. Particle pre-incubation and toxicity studies Pristine Ag nanocubes, as well as partially and completely sulphidated Ag nanocubes, were used for toxicity testing. Silver NPs (2 μg ml −1 ) were pre-incubated in RPMI-1640 supplemented with 10 or 1% FBS, resulting in either their partial or their complete transformation into Ag 2 S ( Supplementary Fig. 27 ). After pre-incubation, the samples were centrifuged, the supernatants were removed and the pelleted particles were re-suspended in RPMI-1640 with 10% FBS. Re-suspension volumes were chosen to concentrate the samples to the desired final concentrations, namely 2, 5, 10, 15, 25, 50 and 100 μg ml −1 silver. For each of the final concentrations, pre-incubation was done in separate tubes. For the partially and completely transformed Ag NPs, up-concentration included a correction for the amount of silver lost through ion release during pre-incubation. The resulting samples were then added to the J774 cells. Dosing was blinded: the researcher performing the toxicity studies was not informed of the specific content of each tube provided for cell treatment. Cytokine production was quantified using a multiplex assay and cell mitochondrial activity was measured using an MTT assay ( Supplementary Methods ) in six separate replicates. Sample sizes as small as three are routinely used in MTT assays; the larger sample number used here (six) only strengthens the significance of the results. Values are calculated as mean±standard deviation. Data availability The authors declare that the data supporting the findings of this study are available within the article and its supplementary information files. Additional information How to cite this article: Miclăuş, T. et al .
Dynamic protein coronas revealed as a modulator of silver nanoparticle sulphidation in vitro . Nat. Commun. 7:11770 doi: 10.1038/ncomms11770 (2016).
A senior fellow at the Faculty of Chemistry, MSU, Vladimir Bochenkov, together with his colleagues from Denmark, has established the mechanism of interaction of silver nanoparticles with the cells of the immune system. The study is published in the journal Nature Communications. "Currently, a large number of products contain silver nanoparticles—antibacterial drugs, toothpaste, polishes, paints, filters, packaging, medical and textile items. These products work because silver dissolves under oxidation, forming Ag+ ions with germicidal properties. At the same time, there are in vitro research data showing silver nanoparticles' toxicity for various organs, including the liver, brain and lungs. In this regard, it is essential to study the processes occurring with silver nanoparticles in biological environments, and the factors affecting their toxicity," says Vladimir Bochenkov. The study is devoted to the protein corona—a layer of adsorbed protein molecules that forms on the surface of silver nanoparticles on contact with a biological environment, for example, blood. This protein corona masks nanoparticles and largely determines their fate, including the speed of their elimination from the body, their ability to penetrate particular cell types, their distribution between the organs, etc. According to the latest research, the protein corona consists of two layers: a hard corona of protein molecules tightly bound to the silver nanoparticles, and a soft corona of weakly bound protein molecules in dynamic equilibrium with the solution. Until now, the soft corona has been studied very little because of experimental difficulties: when nanoparticles were separated from the protein solution, the weakly bound proteins easily desorbed, leaving only the hard corona on the nanoparticle surface.
The size of the studied silver nanoparticles was 50 to 88 nm, and the diameter of the proteins that made up the corona was three to seven nm. Scientists managed to study the silver nanoparticles with their protein corona in situ, without removing them from the biological environment. Localized surface plasmon resonance was used to probe the environment near the surface of the silver nanoparticles, making it possible to investigate the functions of the soft corona. "In the work, we showed that the corona may affect the ability of the nanoparticles to dissolve into silver cations Ag+, which determine the toxic effect. In the absence of a soft corona (the protein layer that rapidly exchanges with the surrounding medium), silver cations bind to the sulfur-containing amino acids in the serum medium, particularly cysteine and methionine, and precipitate as Ag2S nanocrystals in the hard corona," says Vladimir Bochenkov. Ag2S (silver sulfide) readily forms on silver surfaces even in air, in the presence of traces of hydrogen sulfide. Sulfur is also part of many biomolecules in the body, prompting silver to react and convert into the sulfide. Because of its low solubility, the formation of Ag2S nanocrystals reduces the bioavailability of Ag+ ions, cutting the toxicity of silver nanoparticles essentially to zero. With a sufficient amount of amino-acid sulfur sources available for reaction, all the potentially toxic silver is converted into nontoxic insoluble sulfide. This is what happens in the absence of a soft corona. In the presence of a soft corona, the Ag2S nanocrystals form in smaller quantities or not at all. Scientists attribute this to the weakly bound protein molecules transferring Ag+ ions from the nanoparticles into the solution, thereby leaving the sulfide uncrystallized. Thus, the soft corona proteins act as vehicles for the silver ions.
This effect, the scientists believe, should be taken into account when analyzing the stability of silver nanoparticles in a protein environment, and in interpreting the results of toxicity studies. Studies of the viability of immune-system cells (J774 murine macrophages) confirmed the reduced cell toxicity of silver nanoparticles upon sulfidation (in the absence of a soft corona). Vladimir Bochenkov's task was to simulate the plasmon resonance spectra of the systems involved and to create a theoretical model allowing quantitative determination of the silver sulfide content around nanoparticles in situ, by following the changes in the absorption bands of the experimental spectra. Since the frequency of the plasmon resonance is sensitive to a change in dielectric constant near the nanoparticle surface, changes in the absorption spectra contain information about the amount of silver sulfide formed. Knowledge of the mechanisms of formation and dynamics of the behavior of the protein corona, and information about its composition and structure, are extremely important for understanding the toxicity and hazards of nanoparticles for the human body. Protein corona formation could also be exploited to deliver drugs in the body, including for the treatment of cancer.
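As a rough illustration of why the plasmon resonance reports on the local dielectric constant, the quasi-static dipole condition Re(ε(ω)) = −2ε_m for a lossless Drude metal predicts a red-shift of the resonance as the surrounding permittivity rises (as when a higher-index Ag2S-like layer grows). This is a sketch only: the Drude parameters below are illustrative assumptions, not values fitted in the study, which used full spectral simulations.

```python
from math import sqrt

# Illustrative Drude parameters for a silver-like metal (assumed, not fitted).
HBAR_WP_EV = 9.0   # plasma energy, eV
EPS_INF = 5.0      # background permittivity
EV_NM = 1239.84    # hc in eV*nm

def lspr_wavelength_nm(eps_medium: float) -> float:
    """Quasi-static dipole resonance Re(eps(w)) = -2*eps_medium for a lossless
    Drude metal eps(w) = EPS_INF - (wp/w)**2, returned as a wavelength."""
    energy_ev = HBAR_WP_EV / sqrt(EPS_INF + 2.0 * eps_medium)
    return EV_NM / energy_ev

for eps_m, label in [(1.0, "vacuum"), (1.77, "water"), (4.0, "high-index shell")]:
    print(f"{label:>16}: ~{lspr_wavelength_nm(eps_m):.0f} nm")
```

The monotonic red-shift with ε_m is the qualitative signal that the theoretical model translated into an in situ estimate of sulfide content.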
10.1038/ncomms11770
Biology
Scientists reveal function of cell death enzyme caspase-7
Kengo Nozaki et al, Caspase-7 activates ASM to repair gasdermin and perforin pores, Nature (2022). DOI: 10.1038/s41586-022-04825-8 Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-04825-8
https://phys.org/news/2022-06-scientists-reveal-function-cell-death.html
Abstract Among the caspases that cause regulated cell death, a unique function for caspase-7 has remained elusive. Caspase-3 performs apoptosis, whereas caspase-7 is typically considered an inefficient back-up. Caspase-1 activates gasdermin D pores to lyse the cell; however, caspase-1 also activates caspase-7 for unknown reasons 1 . Caspases can also trigger cell-type-specific death responses; for example, caspase-1 causes the extrusion of intestinal epithelial cells (IECs) in response to infection with Salmonella enterica subsp. enterica serovar Typhimurium ( S . Typhimurium) 2 , 3 . Here we show in both organoids and mice that caspase-7-deficient IECs do not complete extrusion. Mechanistically, caspase-7 counteracts gasdermin D pores and preserves cell integrity by cleaving and activating acid sphingomyelinase (ASM), which thereby generates copious amounts of ceramide to enable enhanced membrane repair. This provides time to complete the process of IEC extrusion. In parallel, we also show that caspase-7 and ASM cleavage are required to clear Chromobacterium violaceum and Listeria monocytogenes after perforin-pore-mediated attack by natural killer cells or cytotoxic T lymphocytes, which normally causes apoptosis in infected hepatocytes. Therefore, caspase-7 is not a conventional executioner but instead is a death facilitator that delays pore-driven lysis so that more-specialized processes, such as extrusion or apoptosis, can be completed before cell death. Cells must put their affairs in order before they die. Main Caspase-3 is the primary apoptotic executioner, and is sufficient among caspases to cause apoptosis. By contrast, the roles of other executioners—such as caspase-7—remain unknown. Exemplifying this, Casp3 –/– mice are perinatally lethal on the 129/SvJ background, whereas Casp7 –/– mice are healthy 4 . However, on the C57BL/6 background, caspase-7 can rescue Casp3 –/– mice 5 .
This result underlies the current view that caspase-7 is an inefficient back-up for caspase-3 that works only in certain conditions. Caspase-7 is required for IEC extrusion Although caspase-7 is expressed in most tissues 6 , it is highly expressed in the intestine and in isolated IECs 7 (Extended Data Fig. 1a–c ). After oral infection with S . Typhimurium, we observed many cleaved caspase-7-positive IECs in the caecum, all of which had a characteristic morphology that is indicative of ongoing extrusion into the lumen 8 (Extended Data Fig. 1d–f ). Caspase-7 is classically known to be activated by apoptotic caspases, including caspase-3. However, infection did not increase the number of cleaved-caspase-3-positive cells, and Casp3 –/– mice retained increased levels of cleaved-caspase-7-positive cells (Extended Data Fig. 1g–i ). In wild-type mice, individual EpCAM + IECs extruded whereas neighbouring cells remained unperturbed in the monolayer during S . Typhimurium infection (Fig. 1a,b ). Notably, in Casp7 –/– mice, IECs extruded as clusters that remained attached to the apical epithelial surface (Fig. 1a,b ). Clusters became pronounced 24 h after infection; in an extreme example, 18 IECs were observed in a single extrusion cluster site (Fig. 1b and Extended Data Fig. 1j–l ). This clustered morphology has not to our knowledge been previously reported. Thus, caspase-7 is activated in extruding IECs in response to S . Typhimurium infection, but independently of the conventional apoptotic executioner caspase-3. Fig. 1: Caspase-7 facilitates IEC extrusion during S . Typhimurium infection and ameliorates gasdermin D pores. a , b , The indicated mice were infected with 5 × 10 6 S . Typhimurium and the caeca were collected 24 h later. Representative images ( a ) and quantification ( b ) of epithelial marker EpCAM + cell counts per extrusion site (arrows indicate extrusion sites). WT, wild type.
c , Percentage of ruptured IEC organoids after treatment with FlaTox in pooled live-imaging experiments (related to Extended Data Fig. 2a ). d , Representative images of organoids 30 min after treatment with FlaTox, stained with phalloidin and for cleaved caspase-7. e , f , Representative images ( e ; full series are in Extended Data Fig. 4b ) and quantification ( f ) in live-cell imaging of PI intensity of wild-type and Casp7 –/– organoids treated with FlaTox. g , PI intensity of wild-type, Gsdmd –/– and Gsdmd –/– Casp7 –/– organoids treated with FlaTox or control PBS. Data are representative of three experiments ( a , b , d – f ) or are pooled from 12 ( c ) or 3 ( g ) experiments. Scale bars, 50 μm ( a ); 20 μm ( d ). * P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001 (two-sided Mann–Whitney U -test in b ; two-sided unpaired t -test in c ; two-way analysis of variance (ANOVA) with Sidak’s post-hoc test in f or with Tukey’s post-hoc test in g ). Data are median ± s.e.m. ( b ) or mean ± s.e.m. ( c , f , g ). Exact P values in Source Data. Source data Full size image During S . Typhimurium infection, IEC extrusion is initiated when caspase-1 is activated by NAIP–NLRC4; this complex detects bacterial proteins, such as flagellin, in the cytosol 2 , 3 . This can be mimicked in IEC organoid cultures by stimulation with FlaTox, an engineered toxin that delivers flagellin to the cytosol, which causes extrusion in organoid monolayers in a setting in which all cells activate caspase-1. We therefore examined caspase-7 function in this model of IEC extrusion. FlaTox-treated wild-type organoids ultimately collapse, concomitantly with the rupture of the inner contents into the surrounding matrigel. We visualized the morphology of extruding IECs, and in addition quantified organoid rupture as a proxy for overall extrusion dynamics.
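The per-extrusion-site counts in Fig. 1b are compared with a two-sided Mann–Whitney U-test. A minimal stdlib sketch of the U statistic (with average ranks for ties) follows; the counts below are invented for illustration and are not the paper's data, and this does not reproduce the exact statistics software used.

```python
def mann_whitney_u(x, y):
    """Two-sample Mann-Whitney U statistic (the smaller of the two U values),
    with average ranks assigned to ties. Illustrative stdlib sketch only."""
    nx, ny = len(x), len(y)
    pooled = sorted((v, idx) for idx, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * (nx + ny)
    i = 0
    while i < nx + ny:
        j = i
        while j + 1 < nx + ny and pooled[j + 1][0] == pooled[i][0]:
            j += 1                                # extend over a tie group
        avg_rank = (i + j) / 2.0 + 1.0            # mean of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r_x = sum(ranks[:nx])                         # rank sum of the first sample
    u_x = r_x - nx * (nx + 1) / 2.0
    return min(u_x, nx * ny - u_x)

# Hypothetical EpCAM+ cells-per-extrusion-site counts (invented, not the paper's data)
wt = [1, 1, 1, 2, 1, 1]
ko = [3, 5, 4, 6, 2, 4]
print(mann_whitney_u(wt, ko))  # -> 0.5
```

A small U relative to nx·ny indicates nearly complete separation of the two samples' ranks, which is then converted to a P value via the exact or normal-approximation null distribution.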
In Casp7 –/– organoids, the IECs also initiated extrusion; however, they remained attached to neighbouring IECs, formed clusters of defectively extruding cells and ultimately had a reduced incidence of organoid rupture (Supplementary Videos 1 , 2 , Fig. 1c and Extended Data Fig. 2a,b ). Casp3 –/– organoids retained normal levels of extrusion, cleaved caspase-7 and rupture (Extended Data Fig. 2c–e ). Therefore, caspase-7 is critical for the caspase-1-driven extrusion of IECs, and this occurs independently of caspase-3. Caspase-1 activates caspase-7 Caspase-1 is an inflammatory caspase that is highly expressed in IECs 7 (Extended Data Fig. 1c ). Caspase-1 can cleave and activate caspase-7, although the physiological relevance of this remains unknown 1 . We also observed this cleavage in IEC organoids; caspase-7 was cleaved as early as 15–30 min after treatment with FlaTox, concomitantly with caspase-1 cleavage. This cleaved caspase-7 staining was lost in Casp1 –/– Casp11 –/– organoids ( Casp11 is also known as Casp4 ), as shown by the delayed cleavage kinetics in western blotting experiments, but remained present in IECs that lack gasdermin D ( Gsdmd –/– IECs) (Fig. 1d and Extended Data Fig. 2f ). By contrast, cleavage of caspase-3 was not observed in the first 30 min, and Casp3 –/– organoids retained staining for cleaved caspase-7; only after 45 min in Casp1 –/– Casp11 –/– organoids did we observe the cleavage of apoptotic caspase-8, caspase-9 and caspase-3 (Extended Data Fig. 2e,f ). This suggests that secondary pathways (for example, ASC–caspase-8 (ref. 9 )) can activate both caspase-3 and caspase-7, but with delayed kinetics. This salvage pathway is slower than caspase-1, which rapidly activates caspase-7 after NLRC4 stimulation. Caspase-7 antagonizes gasdermin D Caspase-1 also cleaves gasdermin D, which forms pores in the plasma membrane 9 .
Both gasdermin D and caspase-7 are cleaved 15 min after treatment with FlaTox, independently of one another (Extended Data Figs. 2f and 3a ). However, gasdermin pore permeability, as assessed by propidium iodide (PI) uptake, was accelerated in Casp7 –/– organoids (Supplementary Videos 3 , 4 , 7 and 8 , Fig. 1e,f and Extended Data Fig. 3b,c ). This caspase-7 effect was not seen during apoptosis activated by tumour necrosis factor (TNF) with cycloheximide (CHX) or by the caspase-1–BID pathway that occurs in Gsdmd –/– cells 9 (Extended Data Fig. 3d and Fig. 1g ). Identical conclusions were drawn using calcein-AM, which stains intact cells (Supplementary Videos 5 , 6 , Extended Data Fig. 3e,f ). Casp3 –/– organoids did not show this accelerated membrane permeability (Extended Data Fig. 3g,h ). These results indicate that caspase-7 uniquely impedes the functionality of gasdermin D pores. ASM is cleaved by caspase-7 When the plasma membrane is damaged, cells have at least four mechanisms to repair the membrane: ASM-driven endocytosis; ESCRT-induced shedding; constriction; and patching 10 . Among these, ASM is a sphingomyelin-converting enzyme that is located inside the lysosome. Once the plasma membrane is damaged, Ca 2+ triggers lysosomal exocytosis into the damaged site, where the released ASM can repair the plasma membrane 10 . In studies unrelated to membrane repair, caspase-7 was found to cleave ASM and enhance sphingomyelin-catalysing activity 11 . Despite this link to ASM, a role for caspase-7 in membrane repair has not to our knowledge been investigated previously. We hypothesized that the caspase-7-dependent antagonization of the gasdermin D pore that we observed (Fig. 1e–g ) resulted from caspase-7 cleaving and activating ASM. Full-length ASM (pro-ASM; a 72-kDa band) is cleaved by caspase-7 to generate a 57-kDa form with greater enzymatic activity 11 .
In resting organoids, the pro-ASM band is dominant, but treatment with FlaTox causes the cleaved band to appear in wild-type organoids (Extended Data Fig. 4a–c ). The immunoreactivity of cleaved ASM is stronger than that of pro-ASM—an effect that is also seen with the anti-gasdermin D antibody (Extended Data Fig. 3a ), albeit to a lesser degree. ASM cleavage required caspase-7, and in addition NLRC4 and caspase-1, but not caspase-3 (Extended Data Fig. 4b,c ). We observed that the 57-kDa cleaved form of ASM was glycosylated, consistent with this band being an active enzyme 12 (Extended Data Fig. 4d ). Thus, caspase-7 specifically cleaves ASM in our organoid model. Caspase-7-activated ASM makes ceramide Sphingomyelin is a major constituent of the membrane in animal cells, and contains a head group that is attached to two lipid groups. The sphingomyelin head group allows the plasma membrane to remain largely flat. ASM removes the head group of sphingomyelin, converting it into ceramide. Ceramide causes the membrane to naturally invaginate, which causes spontaneous clathrin-independent endocytosis that internalizes proteinaceous pores in the plasma membrane (Extended Data Fig. 5a ). This endocytosis repairs membrane damage quickly—streptolysin O pores are repaired within 30 s (ref. 13 ). We hypothesized that by activating ASM to produce ceramide, caspase-7 would enhance endocytic membrane repair of the gasdermin D pore. We found that infection with S . Typhimurium resulted in strong ceramide staining in extruding IECs, but not in neighbouring IECs (Fig. 2a and Extended Data Fig. 5b,c ). By contrast, Casp7 –/– mice had no ceramide enrichment in the abnormal IEC extrusion clusters (Fig. 2a ). Organoids showed the same caspase-7-dependent ceramide production after treatment with FlaTox (Extended Data Fig. 5d–f ). 
By contrast, organoids that were treated with a functional inhibitor of ASM, imipramine (IMP) 14, lost ceramide staining but retained caspase-7 cleavage (Extended Data Fig. 5d–f). Wild-type organoids that were treated with IMP had a significantly reduced incidence of rupture, showed more rapid PI entry and calcein loss, and had a faster extrusion initiation time, all similar to Casp7–/– organoids (Fig. 2b,c and Extended Data Fig. 5g,h). By contrast, wild-type and Casp3–/– organoids initiated extrusion at similar times (Extended Data Fig. 5i). Finally, treatment with ceramide normalized the PI staining, extrusion initiation time and rupture percentage in Casp7–/– organoids (Extended Data Fig. 5j–l).

Fig. 2: Caspase-7 activation drives ASM to repair gasdermin D pores and facilitate IEC extrusion. a, The indicated mice were infected with 5 × 10^6 S. Typhimurium and the caeca were collected 24 h later. Ceramide staining; dotted rectangles are shown at a higher magnification on the right (arrows indicate individual extruding cells). b,c, The indicated organoids treated with FlaTox were live-imaged, and the PI intensity (b) or extrusion starting time (c) was quantified. d,e, The indicated mice were infected with 5 × 10^6 S. Typhimurium and the caeca were collected 24 h later. d, Cleaved caspase-7 staining (arrows indicate extruding cell clusters). e, Ceramide and cleaved caspase-7 staining. Scale bars, 50 μm (a,d); 20 μm (e). Data are representative of three experiments (a,d,e) or are pooled from three experiments (b,c). For c, WT n = 41, WT + IMP n = 39 and Casp7–/– n = 15 organoids pooled from three experiments were analysed. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001 (two-way ANOVA with Tukey's post-hoc test in b; one-way ANOVA with Tukey's post-hoc test in c). Data are mean ± s.e.m. Exact P values in Source Data.

We next applied IMP treatments to our S. Typhimurium infection in vivo.
Similar to Casp7 –/– mice, IMP-treated wild-type mice showed abnormal extrusion clusters (Fig. 2d ), and in addition contained cleaved-caspase-7-positive IECs that did not exhibit the morphology of extrusion (Extended Data Fig. 5m ). This could be the result of impairment very early in the extrusion process. Furthermore, treatment with IMP did not prevent cleaved caspase-7 staining in the extruding clusters, but ceramide staining was lost (Fig. 2e ). We also found the abnormal extruding clusters in IMP-treated organoids after stimulation with FlaTox (Extended Data Fig. 5n ). Consistent with caspase-1 activating caspase-7, ceramide staining was lost in Casp1 –/– Casp11 –/– mice (Fig. 2e ). Altogether, these data indicate that caspase-7 activates ASM to generate ceramide, which can reseal gasdermin D pores in extruding IECs. Moreover, depletion of ASM causes defective IEC extrusion clusters, similar to the phenotype that is observed with caspase-7 deficiency. Gasdermin D links caspase-7 to ceramide The above data strongly associate caspase-7 and ASM. However, ASM resides in the lysosomal lumen, whereas caspase-7 resides in the cytosol, and how caspase-7 crosses the membrane to interact with and cleave ASM remains unclear. We hypothesized that the gasdermin D pore was the conduit. Consistent with this, in FlaTox-treated Gsdmd –/– organoids, caspase-7 was activated, but was unable to cleave ASM (Extended Data Figs. 2f and 4b ). Furthermore, ceramide production was lost in Gsdmd –/– mice during infection (Fig. 2e and Extended Data Fig. 6 ). These data support the hypothesis that caspase-7 passes through the gasdermin D pore to encounter and cleave ASM; notably, these are the same pores that require ASM-driven membrane repair. Cleaved ASM repairs membranes IMP affects many lysosomal proteins, and is thus not fully specific to ASM. 
To specifically eliminate the caspase-7 cleavage site in ASM, we generated ASM(D249A) (Smpd1 DA/DA) mice by CRISPR–Cas9 mutagenesis (Extended Data Fig. 7a). Smpd1 DA/DA mice were fertile and healthy to at least five months of age. During infection with S. Typhimurium, Smpd1 DA/DA mice showed the same extrusion defect as Casp7–/– mice (Fig. 3a,b and Extended Data Fig. 7b). As in IMP-treated mice, caspase-7 was activated in the extruding cell clusters in Smpd1 DA/DA mice (Fig. 3c), and Smpd1 DA/DA organoids that were treated with FlaTox (but not those that were treated with TNF + CHX) showed faster PI uptake, reduced rupture incidence and faster extrusion initiation (Fig. 3d, Supplementary Video 9 and Extended Data Fig. 7c–f). Of note, Smpd1 DA/DA organoids that were treated with FlaTox showed no ASM cleavage (Fig. 3e). Therefore, mice in which ASM is resistant to caspase-7 cleavage replicate the phenotypes that are seen in Casp7–/– mice and IMP-treated mice.

Fig. 3: A mutation that renders ASM resistant to cleavage impairs membrane repair and IEC extrusion. a–c, The indicated mice were infected with 5 × 10^6 S. Typhimurium and the caeca were collected 24 h later. Representative images (a) and quantification (b) of EpCAM+ cell counts per extruding site. c, Ceramide and cleaved caspase-7 staining. d, The indicated organoids treated with FlaTox or PBS control were live-imaged, and the PI intensity was quantified. e, The indicated organoids were cleared of dead cells and then stimulated for 20 min with PBS or FlaTox, and the cleavage of ASM, caspase-7 and GSDMD was assessed. Data are representative of three experiments (a–c,e) or are pooled from three experiments (d). Scale bars, 20 μm. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001 (two-sided Mann–Whitney U-test in b; two-way ANOVA with Sidak's post-hoc test in d). Data are median ± s.e.m. (b) or mean ± s.e.m. (d). Exact P values in Source Data.
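The pairwise group comparisons reported in the figure legends use a two-sided Mann–Whitney U-test. As a hedged illustration (this is not the authors' analysis code, and the counts below are made up, not data from the paper), the test can be run with SciPy:

```python
# Hedged sketch: a two-sided Mann-Whitney U-test of the kind cited in the
# figure legends (e.g. comparing EpCAM+ cell counts per extruding site
# between genotypes). All numbers below are hypothetical.
from scipy.stats import mannwhitneyu

wt_counts = [1, 1, 2, 1, 1, 2, 1]        # hypothetical wild-type counts
casp7_ko_counts = [4, 6, 5, 7, 5, 6, 8]  # hypothetical Casp7-/- counts

stat, p = mannwhitneyu(wt_counts, casp7_ko_counts, alternative="two-sided")
print(f"U = {stat}, P = {p:.4g}")
```

With completely separated groups like these, the U statistic for the first sample is 0 and the P value falls well below 0.01.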
Caspase-7 mitigates intestinal pathology

Wild-type mice show considerable intestinal pathology during infection with S. Typhimurium, whereas Casp1–/–Casp11–/– mice exhibit minimal pathology 15. By contrast, Casp7–/– mice showed worsened pathology and cytokine response, with extruded cells retained in proximity to the gut wall 48 h after infection (Extended Data Fig. 8a–d). However, bacterial burdens were minimally altered in Casp7–/– or Smpd1 DA/DA mice (Extended Data Fig. 8e–j). We observed similar results during dextran sodium sulfate (DSS)-induced colitis, in which Casp7–/– mice showed exacerbated weight loss and pathology (Extended Data Fig. 8k,l). Thus, caspase-7 is important for reducing intestinal damage, but not for reducing the S. Typhimurium burden. These results indicate that a series of events is required to facilitate IEC extrusion downstream of caspase-1: caspase-7 passes through gasdermin D pores, then cleaves ASM, potentiating its activity to generate ceramide, which mediates membrane repair to ensure that extrusion completes successfully. Defective extrusion causes pathology, but does not alter bacterial burdens. If activating ASM is the core evolved function of caspase-7, then this should hold true in other model systems in which cell death involves pores concomitantly with the activation of caspase-7. The most classical example of this is the perforin pore attack pathway that is used by natural killer (NK) cells and cytotoxic T lymphocytes (CTLs) to activate apoptosis.

NK cell perforin defence requires caspase-7

NK cells and CTLs use perforin pores to deliver granzyme B, which activates all three of the apoptotic executioner caspases: caspase-3 (refs. 16,17), caspase-7 (refs. 18,19) and caspase-6 (ref. 20). NK cells and CTLs attack host cells that contain intracellular pathogens, which eliminates the infected cells 21. Notably, perforin pores are similar in size to gasdermin pores.
However, a unique role for caspase-7 after perforin-mediated attack had remained elusive. We discovered a model pathogen, Chromobacterium violaceum, in which NK cell perforin attack clears bacteria. C. violaceum is a ubiquitous environmental bacterium that only infects immunocompromised individuals 22. The immune system uses both apoptosis and pyroptosis to combat C. violaceum; each is required in a different cell type 23. Pyroptosis is implicated in the clearance of bacteria from the spleen, in which NLRC4 and caspase-1, probably acting in macrophages, are essential, but the caspase-1-processed cytokines IL-1β and IL-18 are dispensable 23. In the liver there is an additional role for IL-18, which primes NK cells to use perforin attack on hepatocytes (in which caspase-1 is not detectable) 23. Casp7–/– mice, but not Casp6–/– mice, were more susceptible to infection with C. violaceum, phenocopying Prf1–/– mice (Fig. 4a,b and Extended Data Fig. 9a,b). Thus, we discovered another in vivo phenotype with which to study caspase-7. We expected that Casp3–/– mice, which lack the primary apoptotic executioner, would be susceptible to C. violaceum. Instead, Casp3–/– mice were aberrantly hyper-resistant (Extended Data Fig. 9a). This does not indicate that caspase-3 enhances infection; Casp3–/– mice are known to have an abnormal immune system. Our results may be another example of the aberrant tonic type I interferon (IFN) signalling that is seen in caspase-3-deficient mice, which causes aberrant resistance to viruses 24,25. Casp7–/– and Casp6–/– mice do not suffer from this caveat. Thus, although caspase-3 is fully sufficient to accomplish apoptosis in vitro, caspase-3 is unable to compensate for the loss of caspase-7 in defence against C. violaceum in vivo.

Fig. 4: NK cell perforin attack cleaves caspase-7 and ASM to clear C. violaceum. a–i, The indicated mice were infected with 10^2 (c) or 10^4 (a,b,d–i) C. violaceum.
a–e,g,h, Enumeration of liver burdens at three days post-infection (dpi). CFU, colony-forming units. c,f, Mice were treated with recombinant IL-18 or PBS control (day 0 and 1). d,e, Mice were adoptively transferred with NK cells from the indicated sources 24 h before infection. f, Livers were stained for the indicated markers by immunofluorescence at 2 dpi. Scale bar, 50 μm. A larger image area and single channels are shown in Extended Data Fig. 8b. g, Mice were treated with IMP or PBS. i, Representative images of infected livers at 3 dpi stained for the indicated markers. Scale bars, 20 μm. Data are pooled from two experiments (a–e) or from three experiments (g,h), or are representative of two experiments (f,i). Mouse numbers: a, WT n = 7, Casp6–/– n = 6, Prf1–/– n = 6; b, WT n = 10, Casp7–/– n = 7, Prf1–/– n = 8; c, PBS-treated n = 6 and IL-18-treated n = 6 Nlrc4–/–; Casp7–/– n = 9 each; d, Prf1–/– n = 9, Casp7–/– n = 7; g, WT n = 6, WT IMP n = 7, Casp7–/– control n = 8, Casp7–/– IMP n = 7; h, WT n = 16, Smpd1 DA/DA n = 9. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001; NS, not significant (two-sided Mann–Whitney U-test in a–e,h; one-way ANOVA in g). Bars indicate median values. Box plots show median and 25th–75th percentiles; whiskers are minimum and maximum values. Exact P values in Source Data.

Caspase-7 acts in hepatocytes

NLRC4-driven IL-18 is upstream of NK cell perforin during C. violaceum infection 23. IL-18 therapy rescued Nlrc4–/– mice, but not Casp7–/– mice (Fig. 4c) or Prf1–/– mice 23. Furthermore, liver burdens in Casp7–/– mice and Prf1–/– mice were comparable (Fig. 4b and Extended Data Fig. 9b), which suggests that there is a common defence pathway. Using NK cell adoptive transfer, we determined that perforin acted in NK cells, whereas caspase-7 did not (Fig. 4d,e and Extended Data Fig. 9c,d).
Thus, caspase-7 acts downstream of both IL-18 and NK cell perforin to defend the liver from infection with C. violaceum. We previously showed that C. violaceum is located in hepatocytes 23, and thus we hypothesized that hepatocytes would contain activated caspase-7. To synchronize NK cell attack and downstream caspase-7 activation, we used IL-18 therapy in Casp1–/–Casp11–/– mice. C. violaceum liver infection results in macroscopic 2- to 3-mm lesions 23 that are identifiable by DAPI staining, in which cleaved caspase-7 was visualized in hepatocytes (marked by CPS1) (Fig. 4f and Extended Data Fig. 10a–d). Cleaved caspase-3 and cleaved caspase-7 were both observed in serial sections, and cleaved caspase-3 remained present in Casp7–/– mice; in addition, these cells were positive for a marker of apoptosis (TUNEL) (Extended Data Fig. 10e–j). Cleaved caspase-7 and cleaved PARP (another apoptosis marker) were also observed in wild-type mice (Extended Data Fig. 10k,l). Therefore, apoptotic caspase-7 and caspase-3 are both activated in hepatocytes, in which caspase-7 has an essential role that cannot be compensated for by caspase-3.

Caspase-7 activates ASM after NK cell attack

One feature in common between the Salmonella–IEC model and the C. violaceum–hepatocyte model is that both require caspase-7 and form plasma membrane pores during the cell death process. We hypothesized that caspase-7-amplified ASM activity could explain its role downstream of NK cell perforin attack. Depletion of ASM caused an increase in the burden of C. violaceum in wild-type mice, but not in Casp7–/– mice (Fig. 4g). In infected livers, ceramide staining became intense around the inflammatory foci, encircling the lesion within cleaved-caspase-7-positive cells in wild-type mice, but not in Casp7–/– or IMP-treated mice (Extended Data Fig. 11). The few ceramide-positive cells after treatment with IMP may have escaped complete ASM depletion.
We also infected Smpd1 DA/DA mice with C. violaceum and found increased burdens and decreased ceramide staining, similar to Casp7–/– and IMP-treated wild-type mice (Fig. 4h,i). These data suggest that several key events are required for the clearance of C. violaceum: NK cell perforin attack, caspase-7 cleavage, ASM cleavage and ceramide production.

NK cells clear Listeria through caspase-7

Listeria monocytogenes is useful to compare to C. violaceum because both are tropic for hepatocytes (Extended Data Fig. 12a). However, unlike C. violaceum, L. monocytogenes evades inflammasomes in vivo 26,27, limiting IL-18 release 28 to a level that avoids priming the host NK cytotoxic response. In line with this, wild-type and Casp7–/– mice had equal burdens three days after infection. This susceptibility was corrected by IL-18 therapy in wild-type mice, but not in NK-cell-depleted mice, Casp7–/– mice (Extended Data Fig. 12b,c) or Prf1–/– mice 23, matching our results with C. violaceum. Therefore, IL-18 primes NK cell cytotoxicity to clear both a Gram-negative and a Gram-positive intracellular bacterium, and both require caspase-7.

CTLs clear Listeria through caspase-7

Although L. monocytogenes evades NK cytotoxicity during a natural infection, it is efficiently identified and eradicated by the CTL response. CTL perforin attack during L. monocytogenes infection is typically studied using adoptive transfer experiments, which minimizes redundancy from the responses of T helper 1 cells 29,30 (Extended Data Fig. 12d–k). As a control, naive CTL transfer resulted in the expected high burdens in both wild-type and Casp7–/– mice. Immune CTLs from vaccinated mice reduced burdens in wild-type (or Casp6–/–) mice, but were significantly defective in Casp7–/– recipients (Fig. 5a and Extended Data Fig. 12l). Perforin was required in CTLs, whereas caspase-7 was required in the recipient mouse (Fig. 5b,c).
Finally, when we compared Prf1–/– CTLs in a wild-type recipient (an effective single perforin knockout) to Prf1–/– CTLs in a Casp7–/– recipient (which should be an effective double knockout), there was no additive effect (Extended Data Fig. 12m). The residual clearance in Casp7–/– mice was mostly dependent on IFNγ (Extended Data Fig. 12n), as expected.

Fig. 5: Clearance of L. monocytogenes after CTL perforin attack requires cleavage of caspase-7 and ASM. a–c, CTL donor mice were treated with PBS (naive) or vaccinated with ΔactA L. monocytogenes (immune). Recipient mice were infected with L. monocytogenes and transferred with CTLs at 0 dpi, and the liver burdens were determined at 3 dpi. A timeline for the adoptive transfer experiments is shown in Extended Data Fig. 12d. d, Recipient mice were injected intraperitoneally with IMP or PBS (day −1 to infection, then daily). e, Recipient mice were infected with L. monocytogenes and transferred with immune CTLs at 0 dpi, and the liver burdens were determined at 3 dpi. All data are pooled from two experiments. Mouse numbers: a, naive WT n = 7, immune WT n = 9, naive Casp7–/– n = 7, immune Casp7–/– n = 6; b, WT mice with Casp7–/– CTLs n = 7 each, naive Prf1–/– CTLs n = 8, immune Prf1–/– CTLs n = 7; c, naive recipients n = 7 each, immune recipients Prf1–/– n = 8, Casp7–/– n = 7; d, WT n = 7, WT IMP n = 7, Casp7–/– n = 8, Casp7–/– IMP n = 7; e, WT n = 7, Smpd1 DA/DA n = 7. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001; NS, not significant (two-sided Mann–Whitney U-test in a,e; one-way ANOVA in b–d). Bars indicate median values. Box plots show median and 25th–75th percentiles; whiskers are minimum and maximum values. Exact P values in Source Data.

Cleaved-caspase-7-positive hepatocytes appeared specifically after immune CTL transfer, as assessed by flow cytometry and histological staining within infected foci (Extended Data Fig. 13a–g).
A large percentage of cleaved-caspase-7-positive cells had identifiable L. monocytogenes (Extended Data Fig. 13h,i). Thus, perforin activity is required in CTLs, whereas caspase-7 activity is required in hepatocytes.

Caspase-7 activates ASM after CTL attack

Again, we hypothesized that the mechanism of caspase-7 function was through activation of ASM. IMP-treated wild-type mice showed increased L. monocytogenes burdens after immune CTL transfer, but again IMP treatment did not affect burdens in Casp7–/– mice (Fig. 5d). Furthermore, Smpd1 DA/DA mice also showed increased burdens (Fig. 5e). Thus, caspase-7 cleaves ASM so as to facilitate L. monocytogenes clearance after CTL attack. Therefore, after NK cell or CTL attack, caspase-7 cleaves and activates ASM; this drives the production of ceramide, which is required for the eventual clearance of C. violaceum and L. monocytogenes.

Caspase-7 is dispensable in some cases

Amongst viral infections, mouse cytomegalovirus (MCMV) and lymphocytic choriomeningitis virus (LCMV) are common models to study perforin defence after NK cell and CTL attack, respectively 31,32. Although perforin is important for both, neither caspase-7 nor caspase-6 was (Extended Data Fig. 14). That caspase-7 was required for resistance against two bacteria but not two viruses might indicate direct bactericidal effects of caspase-7; however, we found no evidence for this (Extended Data Fig. 15). Thus, the role of caspase-7 depends on the nature of the infecting pathogen.

Caspase-7 is dispensable for pyroptosis

During C. violaceum infection, we previously proposed that there are at least two niches defended by different cell death modalities: hepatocytes cleared by apoptosis; and macrophages cleared by pyroptosis. This raises the question of whether direct cleavage of caspase-7 by caspase-1 might also be important to clear C. violaceum in the spleen, where pyroptosis is likely to defend the niche.
We found that Gsdmd–/– mice were susceptible and had spleen burdens equivalent to those of Casp1–/–Casp11–/– mice, but that Casp7–/– mice (and the other genotypes studied herein) competently cleared their spleens (Extended Data Fig. 16a–h). Caspase-1 and gasdermin D are required for pyroptosis, but also for the release of IL-18, which is upstream of the perforin–caspase-7 defence pathway; thus, caspase-1 and gasdermin D are required for liver defence (Extended Data Fig. 16c). Consistently, caspase-7 was not required in other infectious models in which pyroptosis clears bacteria in vivo (Extended Data Fig. 16i–m). Therefore, in niches in which pyroptosis dominates as the regulated cell death defence, caspase-7 is not required. Conversely, when apoptosis is dominant with evasion of caspase-1—as seen during L. monocytogenes infection 26,27—caspase-1 is not required for the perforin–caspase-7 defence pathway (Extended Data Fig. 16n).

Phylogeny of caspase-7 and pores

Finally, we examined the phylogenetic conservation of Casp7, Prf1 and Gsdmd. Casp7 and Prf1 are present even within Chondrichthyes (sharks). However, Gsdmd first arises in marsupials and is absent in more-primitive organisms (Extended Data Fig. 16o). Therefore, caspase-7 and perforin arose long before gasdermin D. We speculate that the original evolved function of caspase-7 was to counteract perforin pores, and that a secondary function to counteract gasdermin D pores appeared later in evolution. If this is the case, it could explain why we observed stronger phenotypes in bacterial clearance during perforin-mediated immune defence.

Discussion

Here we show that the unique function of caspase-7 is to activate ASM and thereby drive plasma membrane repair (see schematic in Extended Data Fig. 17). Our results indicate that ASM has two modes of action: basal activity and caspase-7-potentiated activity. Total loss of ASM causes Niemann–Pick disease, a lysosomal storage disease 10.
That Casp7–/– and Smpd1 DA/DA mice are healthy provides evidence that caspase-7 cleavage is dispensable for basal ASM activity. Basal pro-ASM activity generates ceramide that will rapidly repair a few pores in the plasma membrane without the need for caspase-7. Similarly, basal ESCRT-III-dependent shedding can repair gasdermin D or MLKL pore opening 33,34; however, caspases are not known to enhance ESCRT-III function. Notably, the combined activity of the many basal membrane repair pathways, including basal ASM, ESCRT-III, constriction and patching 10, does not compensate for the loss of caspase-7-enhanced membrane repair in our models. When caspase-7 cleaves ASM, this boosts the activity of ASM, which results in copious production of ceramide that repairs numerous membrane pores. Therefore, caspase-7 improves on the normal membrane repair capacity. Gasdermin D and perforin pores will allow a massive influx of calcium, causing lysosomal exocytosis that will deliver pro-ASM to the cell surface to repair the membrane. However, caspase-1, or NK cell or CTL attack, might generate numerous gasdermin or perforin pores that exceed the basal membrane repair capacity. We propose a model in which activated caspase-7 passes through gasdermin D or perforin pores to encounter ASM. This is an elegant solution to enable caspase signalling to reach across the membrane to activate extracellular ASM. This caspase-7–ASM pathway provides one regulatory mechanism to slow down pore-mediated lysis. It should act in parallel with other regulatory mechanisms; for example, caspase-1 self-inactivation by cleavage between its CARD and protease domains 35, and caspase-3 inhibition by XIAPs 36. Why membrane repair is required to facilitate the successful extrusion of IECs is a question that requires further attention.
Opening the gasdermin D pore probably triggers IEC extrusion, but the open pore would also cause the loss of cytosolic molecules required to complete the extrusion process. For example, ATP is essential for normal actomyosin contraction in extruding IECs 8. In the absence of repair, IECs might become depleted of ATP or other critical constituents before the extrusion process is completed. This incomplete extrusion could be detrimental to neighbouring cells, which could explain the abnormal clustering of extruding cells that we observe during S. Typhimurium infection in Casp7–/– and Smpd1 DA/DA mice. The precise identity of the cytosolic extrusion effectors that are compromised under caspase-7 and ASM deficiency remains unknown. Why membrane repair is essential to enable the clearance of intracellular bacteria after NK-cell- or CTL-mediated perforin attack is another unresolved point. There is evidence that multiple CTL attacks are required in vivo in other infectious models 37, which would generate many perforin pores. These should be problematic for completing apoptosis, because rapid swelling and membrane rupture might occur faster than caspase-3 can act 38,39. Apoptosis requires time and intact cellular energetics. We speculate that caspase-7 affords the cell the time needed to complete apoptosis after perforin-mediated attack. We propose that completion of the apoptotic process is required to clear C. violaceum and L. monocytogenes, and that perforin-driven lysis is not sufficient. However, why lysis would not be sufficient to clear the bacteria is not immediately apparent. Indeed, pyroptosis drives lysis in vivo to trap bacteria in pore-induced intracellular traps that lead to neutrophil efferocytosis, thereby killing bacteria 40; this mechanism is likely to clear C. violaceum in the spleen. Which aspect of apoptosis leads to bacterial clearance from hepatocytes is a question that requires further investigation.
Overall, our results suggest that caspase-7 is not simply a weak back-up for caspase-3 during apoptosis. Instead, caspase-7 is a facilitator of cell death pathways that is independently essential during regulated cell death. Notably, in our IEC extrusion model caspase-7 acts downstream of the pyroptotic caspase-1, but in the NK cell and CTL models, caspase-7 acts downstream of granzyme B and probably in concert with the apoptotic caspase-3. We therefore propose to change the designation of caspase-7 from an apoptotic executioner to a general cell death facilitator that is useful for both inflammatory and apoptotic pathways when a cell needs to maintain an intact membrane for a certain period of time.

Methods

Mice

Mice were housed in a specific-pathogen-free facility: wild-type C57BL/6 (The Jackson Laboratory), Casp7–/– (Jackson 006237), Casp3–/– (Jackson 006233; note that these mice are born at sub-Mendelian ratios from heterozygous × heterozygous or heterozygous × homozygous breeding, with the latter often failing to breed; weaned Casp3–/– mice appear normal and healthy by visual inspection), Gsdmd–/– (ref. 3), Nlrc4–/– (ref. 41), Casp1–/–Casp11 129mt/129mt, referred to as Casp1–/–Casp11–/– (ref. 42), Casp6–/– (Jackson 006236) 43 and Prf1–/– (Jackson 002407) 32 mice were used. Smpd1 D249A/D249A mice (this paper; referred to as Smpd1 DA/DA) were generated by the Duke Cancer Institute; they breed normally and are healthy by visual inspection to at least five months of age at the time of publication. Animal protocols were approved by the Institutional Animal Care and Use Committee (IACUC) at the University of North Carolina at Chapel Hill or by the IACUC at Duke University and met the guidelines of the US National Institutes of Health (NIH) for the humane care of animals. All strains were maintained on 12 h–12 h light–dark cycles, at 22.2 ± 1.1 °C, and at a humidity set point of 45%. All strains were maintained on the C57BL/6 background.
For all mouse infections, 8–12-week-old mice were infected with the designated CFU or plaque-forming units (PFU). C. violaceum, S. Typhimurium, Burkholderia thailandensis and MCMV were delivered in PBS by intraperitoneal (IP) injection; L. monocytogenes and the L. monocytogenes ΔactA mutant were delivered in PBS by intravenous (IV) injection; LCMV was delivered in Dulbecco's modified Eagle's medium (DMEM) by IP injection. For bacterial and viral enumeration, organs were bead-homogenized and serially diluted on brain heart infusion (BHI) agar plates (C. violaceum, L. monocytogenes and the L. monocytogenes ΔactA mutant) or Luria-Bertani (LB) agar plates (S. Typhimurium and B. thailandensis), or used for plaque or 50% tissue culture infectious dose (TCID50) assays (LCMV and MCMV). Both male and female mice were used in equivalent numbers in groups unless otherwise stated. Mice were allocated to groups for experiments in an unbiased manner. Blinding of mice was not performed except for histological scoring of pathology. The target sample size was six mice per group, based on power analysis and historical trends in data variance; however, smaller or larger group sizes were sometimes used owing to mouse availability.

Strains and growth conditions

Bacterial strains used in this work: S. enterica serovar Typhimurium on the 14028s background with or without flgB::Tn10 was used for competitive index experiments, comparing the kanamycin-resistant vector control (pWSK129) to FliC ON (pEM087) 44; B. thailandensis (strain previously passaged through a Casp1–/–Casp11–/– mouse), C. violaceum (ATCC 12472), L. monocytogenes (10403s derivative, native inlAB replaced with mouse-specific inlAmB, PMC 2869327) and the L. monocytogenes ΔactA mutant (a gift from the laboratory of D. Portnoy) were also used. We refer in the text to the L. monocytogenes strain with the mouse-specific inlAmB as wild-type L. monocytogenes. C. violaceum, L. monocytogenes and the L. monocytogenes ΔactA mutant were grown in BHI. S.
Typhimurium and B. thailandensis were grown in LB medium. All bacterial strains were grown overnight at 37 °C and back-diluted (1:40) for 2 h for all experiments. Viral stocks of LCMV were generated from infected BHK-21 monolayers (laboratory of J.K.W.). Viral stocks of MCMV (Smith strain, ATCC VR-1399) were grown in 3T12 cells (ATCC), passaged in weanling BALB/c mice, collected from the salivary glands and quantified in viral plaque assays 45 (generated in the laboratory of M.G.B.).

Tissue culture cell lines

L-WRN cells were purchased directly from ATCC, which is considered a reputable vendor; they were further authenticated by their phenotype of supporting organoid growth, a property that other cell lines lack. HeLa cells were purchased from the Duke University Cell Culture Facility, and were authenticated by short tandem repeat (STR) analysis performed by this facility. Hepa1-6 and YAC-1 cells were purchased directly from ATCC, which is considered a reputable source. Hepa1-6 cells were partially authenticated by visual morphology. YAC-1 cells were partially authenticated by the phenotype of being attacked by NK cells. Cell lines were not further authenticated. Cell lines were tested for mycoplasma contamination. These cell lines are not included in the list of commonly misidentified cell lines by ICLAC.

S. Typhimurium oral infection

For S. Typhimurium infection, streptomycin pretreatment was performed as previously described 46. Mice were deprived of food and water for 4 h and treated orally with 20 mg kg−1 streptomycin the day before infection. S. Typhimurium SL1344 was grown overnight at 37 °C and back-diluted (1:40) for 4 h in LB, then diluted in PBS before infection. Mice were deprived of food and water for 4 h, then S. Typhimurium was delivered by oral gavage on day 0 (5 × 10^6 CFU were used unless otherwise indicated). Water was provided thereafter, and food was provided ad libitum 2 h later.
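Organ burdens in these experiments are read out by serial dilution plating, as described above. The back-calculation from a countable plate to total CFU is simple arithmetic; the sketch below is illustrative only (the dilution factor, plated volume and homogenate volume are assumptions, not values stated in the paper):

```python
# Hedged sketch of CFU enumeration from serial dilution plating.
# All numeric inputs in the example are illustrative assumptions.
def cfu_per_organ(colonies, dilution_exponent, plated_ml, homogenate_ml,
                  dilution_factor=10):
    """Back-calculate total CFU in an organ homogenate.

    colonies: colony count on the countable plate
    dilution_exponent: which serial dilution was counted (0 = undiluted)
    plated_ml: volume plated (ml)
    homogenate_ml: total homogenate volume (ml)
    """
    cfu_per_ml = colonies * (dilution_factor ** dilution_exponent) / plated_ml
    return cfu_per_ml * homogenate_ml

# Example: 42 colonies at the 10^-4 dilution, 0.1 ml plated, 1 ml homogenate
print(f"{cfu_per_organ(42, 4, 0.1, 1.0):.3g}")  # 4.2e+06
```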
For ASM depletion, mice were injected IP with 10 mg kg−1 IMP (Sigma I0899) or PBS control daily, starting from the day before infection. For S. Typhimurium enumeration, organs were collected, bead-homogenized and serially diluted on LB agar plates with streptomycin. Caeca were incubated in gentamicin–PBS (50 µg ml−1) for an hour, then washed three times and collected.

Competitive index

For competitive indices with S. Typhimurium, bacteria were grown to stationary phase overnight in LB. Eight-to-ten-week-old mice were infected with 10^5 total CFU composed of vector control pWSK129 (kanamycin-resistant) mixed at a 1:1 ratio with FliC ON (ampicillin-resistant) S. Typhimurium. Bacteria were diluted in PBS and injected into mice by IP injection. Tissues were collected two days after infection and homogenized, and dilutions were plated onto LB plus antibiotics. Competitive index is expressed as log(FliC ON CFU/vector control CFU); thus a competitive index of −2.0 reflects a ratio of 1 FliC ON to 100 vector control CFU.

In vivo treatments

For NK cell depletion, mice were injected IP with 100 μg anti-NK1.1 (PK136, BioXCell, BE0036) or isotype control (C1.18.4, BioXCell, BE0085). Depletion of NK1.1-positive cells was confirmed by flow cytometry. For IL-18 therapies, mice were injected IP with 0.2 μg recombinant mouse IL-18 (rmIL-18; MBL) at the time of infection, and daily until collection, as we previously described 23. For ASM depletion, mice were injected IP with 10 mg kg−1 IMP (Sigma) or PBS control beginning at day −1 before infection, then daily. For in vivo IFNγ blockade, mice were injected IP with 500 μg per mouse of anti-mouse IFNγ (XMG1.2, BioXCell, BE0055) or isotype (rat IgG1, BioXCell, BE0290) control beginning at day −1 before infection, then every other day.

Enzyme-linked immunosorbent assay

Mouse serum was collected and stored at −80 °C until use.
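The competitive index defined above is a base-10 log ratio of the two strains' recovered CFU. As a minimal numerical check (with illustrative counts, not data from the paper), the worked −2.0 example can be reproduced as:

```python
# Hedged sketch of the competitive-index arithmetic defined above:
# CI = log10(FliC-ON CFU / vector-control CFU). Counts are illustrative.
import math

def competitive_index(flic_on_cfu, vector_control_cfu):
    return math.log10(flic_on_cfu / vector_control_cfu)

# 1 FliC-ON CFU per 100 vector-control CFU gives a CI of -2.0,
# matching the worked example in the text
print(competitive_index(1_000, 100_000))
```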
IFNγ levels in serum were determined by enzyme-linked immunosorbent assay (ELISA) (R&D Systems). For ELISA, 100 µl of capture antibody diluted in PBS was plated into a 96-well plate, covered with Parafilm and left overnight at room temperature. After washing with wash buffer four times, blocking buffer was added (300 µl) and incubated for 1 h at room temperature. After washing four times, 100 µl of each sample or standard was added per well and incubated for 90 min at room temperature. After washing four times, 100 µl of detection antibody was added. After washing four times, 100 µl of streptavidin–horseradish peroxidase (HRP) was added and incubated for 20 min at room temperature. After development with 100 µl of substrate solution for 20 min, the reaction was stopped with 50 μl of 2 N H 2 SO 4 and the signal at 450 nm was detected using an Epoch Microplate Spectrophotometer (BioTek) running Gen 5 v.3.10 software. Induction of acute colitis by DSS Mice were initially maintained on 2.5% w/v DSS (MP Biomedicals 160110) ad libitum for 7 days, followed by 10 days of regular drinking water. Body weights were measured before the induction of colitis and daily thereafter until mice were euthanized. For histology, mice were treated with 2.5% w/v DSS ad libitum for 5 days and euthanized, then tissues were placed in 10% formalin for at least 24 h before embedding, cutting and H&E staining by the UNC Cell Services and Histology Core. Immunofluorescence and analysis Caecum or organoid samples were fixed with 4% paraformaldehyde (PFA) in PBS overnight at 4 °C. Liver was perfused through the portal vein with 2% PFA in PBS and left to fix in 5 ml of 2% PFA overnight. After overnight incubation, samples were rinsed in PBS and transferred to 30% sucrose in PBS for two days, then embedded in OCT compound (Sakura Finetek 4583) and frozen on dry ice and cut into 5- to 8-μm sections. 
Tissues were air-dried, washed in PBS and permeabilized with 1% Triton X-100 in PBS for 30 min, and blocked in 1% BSA PBS for 1 h at room temperature. Primary antibodies were incubated overnight. After washing in PBS for 10 min three times, secondary antibody reactions were performed for 2 h at room temperature, followed by another three washes in PBS. Finally, slides were mounted with DAPI (Fluoroshield, Sigma F6057). Primary antibodies detecting cleaved caspase-7 (1:400, Cell Signaling 9491), cleaved caspase-3 (1:400, Cell Signaling 9661), EpCAM (1:1,000, BioLegend 118202, clone G8.8) and ceramide (both 1:200, Enzo Life Sciences ALX-804-196, clone MID 15B4, and 1:200, Glycobiotech MAB_0014, clone S58-9) in 1% BSA PBS were used. For hepatocyte or Listeria colocalization analyses, slides were then stained with anti-CPS1 as a marker of hepatocytes 47 or a polyclonal anti-Listeria antibody (Abcam 68592, 1:10) for 1 h. Phalloidin (1:1,000, Invitrogen, A12379, A34055, A22287) was used to stain F-actin. Alexa-conjugated antibodies (anti-rabbit; Cell Signaling 4412, 4414; anti-rat, Abcam ab175475; anti-mouse, Invitrogen A10037) were used as secondary antibodies (1:1,000). Anti-PARP1 antibody (D214, STJ90100, St John's Laboratory; 1:400) was also used. For TUNEL staining, tissues were instead fixed in 4% PFA and permeabilized with 0.2% Triton, with 0.2% Triton remaining in all further staining steps. Slides were stained overnight for cleaved caspase-7 as above. Tissues were then stained for TUNEL signal with the Roche in situ cell death detection kit TMR red (Sigma 12156792910). Mounting medium contained DAPI. All images were taken on a Zeiss 880 microscope with a Plan-Neofluar 40×/1.3 objective operated by ZEN Black Edition v.14.0 software, or on an Olympus BX61 microscope. The images shown in Extended Data Fig. 
9 are composed of multiple merged 20× images in the zones of cleaved caspase-3 or caspase-7 staining, which were overlaid above a 10× image that occupies the upper left quadrant of the image (where there is primarily DAPI staining). Anti-CPS1–HRP antibody was obtained from Abcam (198969, 1:300) and detects an intracellular marker (thus fixation and permeabilization are required before antibody use). HRP-conjugated antibodies can be used with tyramide signal amplification techniques to generate a fluorescent signal detectable by flow cytometry and immunofluorescence. AlexaFluor 488 signal was generated using the Thermo Fisher Scientific kit T20922, and cells or tissues were incubated with the solution for 10 min before stopping the reaction and washing. Endogenous peroxidase signal was blocked with 0.3% hydrogen peroxide for 60 min before incubation with anti-CPS1 antibody. Isolation, culture and treatment of IECs Jejunal-ileal crypts were isolated from 10-to-15-week-old wild-type mice as previously described 48. After euthanasia, the intestine was removed and cut open, then washed three times, followed by 2.5 mM EDTA chelation for 30 min at 4 °C and mechanical dissociation. The isolated crypts were pelleted and washed three times in 2% sorbitol PBS with low-speed centrifugation (400 rpm, 3 min) at 4 °C and resuspended in 1% FCS/DMEM. After being filtered through 70-μm strainers, crypts were pelleted and embedded in Matrigel (BD Biosciences 356321) and incubated for 20 min at 37 °C. After the gels had solidified, warmed 50% conditioned medium from L-WRN cells (ATCC CRL-3276), based on Advanced DMEM/F12 (Gibco 12634010), was added 49. This conditioned medium was supplemented with 20 ng ml −1 mouse EGF (Peprotech 315-09). The medium was changed every two days until the cultured organoids were used. 
For organoid imaging experiments, day-2 organoids from various genotypes were grown for a further 24–48 h in Advanced DMEM/F12 containing 1× GlutaMAX, 1× penicillin–streptomycin, 2.5 mM N-acetylcysteine (Sigma A9165), 500 ng ml −1 mouse Rspo1 (R&D Systems 3474-RS-050), 20 ng ml −1 mouse EGF and 100 ng ml −1 mouse Noggin (R&D Systems 6997-NG-025) to allow differentiation. For organoid stimulation, 3 µg ml −1 FlaTox 50, 20 ng ml −1 TNF (Peprotech 315-01A), 5 µg ml −1 CHX (Sigma C4859) and 50 µM IMP (Sigma I7379) were used. For organoid imaging experiments, PI (100 µg ml −1 , Invitrogen P3566) was loaded 1 h before imaging. Calcein-AM (2 µM, Invitrogen C3100) was loaded 30 min before experiments as per the manufacturer's protocol. For ceramide introduction, a 1 mM C-16 ceramide (Cayman Chemical 10681) stock solution was prepared in N,N-dimethylformamide (Sigma D4551). C-16 ceramide (final concentration: 500 nM) or control vehicle was added to the cell culture medium 2 h 30 min before imaging. Time-lapse imaging of organoids Time-lapse imaging of the organoids was performed with MetaMorph v.7.10 software on a VivaView Incubator Fluorescence Microscope (Olympus) with an X-Cite eXacte as the illumination source. Differential interference contrast (DIC) images as well as fluorescent images (PI and calcein) were acquired through a UPLSAPO 20× objective (0.75 NA) onto an Orca R2 cooled CCD camera (Hamamatsu). After 30 min to 1 h of adaptation, single-plane imaging was performed for 1.5–6 h at 120–320-s intervals just after treatment with FlaTox or TNF + CHX. ImageJ v.1.4.3.67 software (NIH) was used to measure the mean intensity of each organoid in a hyperstack image for PI and calcein intensity analysis. Liver cell enrichment Livers were extracted and perfused with collagenase type I (1 mg ml −1 in RPMI) by portal vein injection until visible blanching of all lobes was observed. 
Liver lobes were then finely cut into small pieces using a razor blade and incubated for 10 min at 37 °C with CO 2 . They were then mashed through a cell strainer (Falcon, 70 µm) into a 50-ml conical tube and washed with around 40 ml of plain RPMI. Cells were spun in a tabletop centrifuge (Eppendorf, model 5810R) at 50 g for 5 min at 4 °C. The supernatant was discarded, the pellet resuspended in 15 ml of RPMI and spun twice more at 50 g for 5 min at 4 °C. The cell pellets, now consisting of more than 96% hepatocytes (as previously reported 51 and validated by flow cytometry), were resuspended once in PBS and spun again as before. Cells were resuspended in RBC lysis buffer, incubated for 5 min at room temperature (around 22 °C), spun at 1,500 rpm in a small tabletop centrifuge (Eppendorf model 5810R) and the pellet resuspended in RPMI or 1× PBS depending on usage. For splenocyte isolation, spleens were mashed through a cell strainer (Falcon, 70 µm) into a 50-ml conical tube and washed with 15 ml of plain RPMI. Cells were spun at 1,000 g for 5 min and washed once in RPMI and once in PBS. They were then RBC-lysis-treated, spun and the pellet resuspended in RPMI or 1× PBS. Adoptive transfer of NK cells Splenocytes were collected from naive mice as described above. Cells were seeded into 10-cm non–tissue-culture-treated dishes with RPMI + 10% fetal bovine serum (FBS) + non-essential amino acids (NEAA) + penicillin–streptomycin + 15 ng ml −1 IL-2. Four days later, the supernatant was removed and spun at 1,000 rpm for 5 min to remove dead cells. The supernatant was returned to the dish along with 2 ml of new medium with an additional 10 ng ml −1 of IL-2. Six days after initial plating, the cells were counted and used for transfer. Adoptive transfer with enumeration of bacterial count For experiments using bacterial counts from L. monocytogenes , donor mice were vaccinated with 1 × 10 6 CFU of the Δ actA L. 
monocytogenes mutant strain or mock-injected with PBS as described previously 52. NK cells were depleted (see 'In vivo treatments' above) in all donor mice on day 5 after challenge to remove potential confounding NK contributions. Splenocytes were collected from naive or immunized mice as described above. Recipient mice were then adoptively transferred by IV injection with 5 × 10 7 bulk splenocytes. One hour after adoptive transfer, recipient mice were IV injected with 5 × 10 4 CFU of L. monocytogenes . Livers and spleens were collected three days later, homogenized and dilutions plated on BHI. Plates were incubated at 37 °C and bacterial counts were enumerated 16–24 h later. Adoptive transfer with flow and immunofluorescence analysis For experiments examining the expression of cleaved caspase-7 (Extended Data Fig. 12 ) in L. monocytogenes infections, donor mice were vaccinated, NK cells were depleted and splenocytes were collected as in the adoptive transfer experiments described above ('In vivo treatments' and 'Adoptive transfer of NK cells'). Recipient mice were IV injected with 5 × 10 4 CFU of L. monocytogenes two days before adoptive transfer. Recipient mice were then transferred with 8 × 10 7 bulk splenocytes by IV injection. Mice were euthanized 24 h after adoptive transfer. One lobe of the liver was prepared for immunofluorescence, and hepatocytes were isolated from the remaining liver lobes for flow cytometry (see 'Liver cell enrichment' above and 'Flow cytometry' below). Flow cytometry Surface and intracellular staining was performed directly ex vivo. Cells were Fc-blocked with anti-CD16/CD32 (1:100) and surface stained with anti-CD45.2–PerCP-Cy5.5 (104, Biolegend 109828; 1:400), anti-NK1.1–FITC (PK136, Biolegend 108706; 1:400) and anti-CD8a (Biolegend 109807; 1:400). Cells were fixed in 2% PFA either overnight at 4 °C or for 20 min on ice before permeabilizing with 10× Fix/Perm Buffer (Biolegend 421002) on ice. 
Cells were incubated with cleaved caspase-7 antibody (Cell Signaling 9491, 1:500, 45 min) followed by APC-conjugated secondary anti-rabbit antibody (Cell Signaling, 4414s, 1:1,000, 45 min). After five washes, cells were stained with anti-CPS1–HRP antibody (Abcam 198969, 1:300, detailed description above, 45 min), and samples were analysed on a FACSCalibur machine courtesy of the laboratory of J.K.W. Flow cytometry data were collected using BD CellQuest Pro v.5.2.1 software and analysed using FlowJo v.10.3 software. Quantification of viral titre LCMV viral titre in the liver was quantified by plaque assay on Vero cell monolayers 53. MCMV viral titre in the liver was quantified by TCID 50 assay on 10.1 mouse embryonic fibroblasts (MEFs). MEFs were seeded at 5,000 cells per well into 96-well plates and allowed to adhere for 24 h. Serial dilutions of liver homogenate were then added and cultured for six days before determining cytopathic effects. Titre was determined using the Reed–Muench method. In vitro co-culture assays NK cells were expanded ex vivo as follows: splenocytes were collected from naive mice as described above. Cells were seeded into 10-cm non–tissue-culture-treated dishes with RPMI + 10% FBS + NEAA + penicillin–streptomycin + 15 ng ml −1 IL-2. Four days later, the supernatant was removed and spun at 1,000 rpm for 5 min to remove dead cells. The supernatant was returned to the dish along with 2 ml of new medium with an additional 10 ng ml −1 of IL-2. Six days after initial plating, the cells were counted and used for the co-culture assays. Hepa1-6 cells (mouse hepatocyte cell line; ATCC CRL-1830) were maintained in DMEM with 10% FBS. Cells were seeded at 5 × 10 4 cells per well of a treated 96-well plate and allowed to adhere overnight. The following day (approximately 24 h later), they were infected at a multiplicity of infection (MOI) of 0.5 with L. monocytogenes for 1 h before washing and replacing with medium containing gentamicin (50 μg ml −1 ). 
The next morning (approximately 16 h later), the cells were washed to remove gentamicin from all wells, and co-cultured with NK cells at an effector:target (E:T) ratio of 5:1 with or without gentamicin added to the medium. Bacterial counts and caspase-3 or -7 activation were analysed 5 h after co-culture. YAC-1 cells (non-adherent mouse lymphoblast cell line; ATCC TIB-160) are inherently targeted by NK cells and often used for NK killing assays. YAC-1 cells were maintained in RPMI with 10% FBS. They were also seeded at 5 × 10 4 cells per well of a treated 96-well plate. They were infected for 1 h at an MOI of 10 with either C. violaceum or L. monocytogenes before washing and replacing with medium containing gentamicin (50 μg ml −1 ) for 1 h. YAC-1 cells were then washed, co-cultured with NK cells and analysed for bacterial counts and caspase-3 or -7 activation as with the Hepa1-6 cells. In vitro granzyme B For each experiment, Hepa1-6 cells were lifted and 1 × 10 6 cells per replicate for each treatment condition were lysed in 50 μl of 1% Triton X-100. Purified recombinant mouse granzyme B (PeproTech 140-03) was then added at the indicated amounts (0.2, 0.4 or 0.8 μg) and incubated with the lysates for 1 h at room temperature. A small volume (10 μl) was removed at the end of the hour to validate the cleavage of caspase-3 and -7 by western blot. Then 50 μl containing 1 × 10 6 CFU of L. monocytogenes was added and incubated at room temperature. Small volumes of lysate were dilution-plated over 16 h to quantify possible effects on bacterial viability in the presence of granzyme B, active caspase-3 and active caspase-7. Western blots The amount of total protein from organoid lysates was normalized using a BCA kit (Pierce); proteins were resolved on NuPage precast 4%–12% Bis-Tris gels (Invitrogen) and were transferred to polyvinylidene difluoride membranes. After blocking in 2% BSA/TBST for 40 min at room temperature, primary antibody was added and incubated overnight at 4 °C. 
Caspase-3 cleavage was analysed using a cleavage-specific antibody (1:500, Cell Signaling 9661). Caspase-7 cleavage was analysed using a cleavage-specific antibody (1:500, Cell Signaling 9491). Caspase-1 cleavage (1:200, Santa Cruz sc-514 clone M-20), caspase-8 cleavage (1:500, Cell Signaling 8592) and caspase-9 cleavage (1:400, Cell Signaling 9504) were detected using the indicated antibodies. Gasdermin D was analysed using an antibody detecting both the full-length and the cleaved form (1:500, Abcam ab209845). GAPDH (1:1,000, Cell Signaling 97166) was detected as an internal control. After washing in TBST three times, secondary antibody was added and incubated for 2 h at room temperature. Secondary HRP-conjugated anti-rabbit antibody was purchased from Cell Signaling (1:2,000, Cell Signaling 7074). After three washes in TBST, signals were developed with ECL substrates (Thermo Fisher Scientific) and analysed. For the detection and validation of ASM, ASM-knockout HeLa cell lines were generated with CRISPR–Cas9 technology at the Duke Functional Genomics Core. Two-guide RNA targeting was used (target sites: 5′-GAACCCAATGTGGCTCGCGT-3′ and 5′-ACAATGGATTGGCACACGGC-3′). For the detection of ASM in IECs, organoids were isolated from ECM and washed three times in PBS with low-speed centrifugation (850 rpm × 1 min). After dead cell removal, the organoids from each genotype were divided equally into two parts, treated with or without FlaTox, and both parts incubated for 20 min at 37 °C. After incubation, organoids were collected and lysed, then ASM was detected (1:1,000, Invitrogen PA5-72432). ASM was deglycosylated with PNGase F (New England BioLabs P0704) per the manufacturer's protocol. In brief, lysates were denatured at 100 °C for 10 min in glycoprotein denaturing buffer, chilled and centrifuged quickly, then incubated with PNGase F in GlycoBuffer at 37 °C for 1 h. After the reaction, ASM was detected by western blot. Histological scoring For S. 
Typhimurium infection, histology was assessed using a modified scheme adapted from a previous study 46. Slides were blind scored (0 to 3) for each of four criteria: epithelial hyperplasia and damage; immune infiltrate into the mucosa and lamina propria; goblet cell loss; and submucosal oedema. The average score of the four criteria was then calculated. For DSS experiments, slides were blind scored based on a previous report 54. Three major criteria (0 to 3)—crypt hyperplasia, inflammatory cell infiltration and muscle thickening—and two minor criteria (0 to 1)—goblet cell depletion and crypt abscess—were used. These criteria were added together. Immunofluorescence quantification in the intestine For the quantification of cleaved-caspase-7-positive cells, cleaved-caspase-3-positive cells, and EpCAM-positive cells, 60 to 90 random fields per mouse sample were examined with a 40× lens, and the number of DAPI + caspase-7 + cells, DAPI + caspase-3 + cells or DAPI + EpCAM + cells was counted. Immunofluorescence quantification in the liver For both C. violaceum and L. monocytogenes infections, inflammatory foci or lesions were easily identifiable by the visualization of abundant immune cell nuclei in the DAPI channel, and random lesions were selected for multi-channel pictures. These were split into single-channel images and regions of interest (ROI) were drawn around the foci using the DAPI channel as a guide (in ImageJ); thus, we were blinded to cleaved caspase-7 intensity. The ROI was copied to the single-channel image with cleaved caspase-7 and the integrated density was measured. Background integrated density (defined as the signal in an area outside the lesions) was subtracted for the reported values. To analyse C. violaceum-infected liver images, multiple 10× pictures were stitched together owing to the large size of the lesions, and two lesions per mouse were quantified for cleaved caspase-7 signal (Extended Data Fig. 9a,b, quantification in Extended Data Fig. 
9c). The number of CPS1 and cleaved caspase-7 double-positive cells was determined in Extended Data Fig. 9d. This was done by drawing an ROI encompassing 45 cells on the CPS1 single-channel image and copying it to the cleaved caspase-7 single-channel image. The percentage of double-positive cells was calculated for one region per IL-18-treated mouse, with six regions scored and plotted as the mean with standard deviation. Cleaved-caspase-7-positive cells were scored as CPS1-positive or -negative. Cleaved caspase-7 signal was quantified from five lesions per mouse for L. monocytogenes-infected livers at 20× magnification (Extended Data Fig. 12d, quantification in Extended Data Fig. 12g). The percentage of cleaved-caspase-7-positive signal colocalizing with L. monocytogenes staining was determined in Extended Data Fig. 12i from three lesions per mouse. Organoid quantification For the organoid rupture counting experiment, 10–20 organoids after FlaTox treatment were tracked for 2–3 h in each live-imaging experiment to determine their rupture behaviour. An organoid was scored as ruptured when its architecture collapsed and its inner content, composed mostly of extruded IECs, expanded and burst into the exterior of the organoid. The ruptured organoids in each experiment were counted to yield a single rupture ratio, shown as one dot in the graphs; ratios from independent experiments were then pooled. To determine the extrusion starting time, the time at which a total of four cells in one organoid had started extruding was recorded as that organoid's extrusion initiation time; this threshold excludes non-specific (homeostatic) epithelial extrusion. For examining the PI or calcein intensity of the organoid, single-plane imaging was performed for 1.5–6 h at 180–480-s intervals just after FlaTox or TNF + CHX treatment. 
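The intensity measurements above were made in ImageJ; for illustration only, the background-subtracted integrated-density readout described earlier can be sketched with numpy (toy arrays and boolean ROI masks, not the authors' pipeline; scaling the background to the ROI area is an assumption of this sketch):

```python
import numpy as np

# Hedged numpy analogue of the ImageJ quantification described above:
# integrated density of a cleaved-caspase-7 channel inside a DAPI-drawn ROI,
# minus a background signal measured in an area outside the lesions.

def integrated_density(channel, roi_mask):
    """Sum of pixel intensities inside a boolean ROI mask."""
    return float(channel[roi_mask].sum())

def background_subtracted(channel, roi_mask, bg_mask):
    # Scale background to the ROI pixel count so equal areas are compared.
    bg_per_pixel = integrated_density(channel, bg_mask) / bg_mask.sum()
    return integrated_density(channel, roi_mask) - bg_per_pixel * roi_mask.sum()

# Toy 4x4 image: lesion pixels = 10, background pixels = 2.
img = np.full((4, 4), 2.0)
img[:2, :2] = 10.0
roi = np.zeros((4, 4), bool); roi[:2, :2] = True
bg = np.zeros((4, 4), bool); bg[2:, 2:] = True
print(background_subtracted(img, roi, bg))  # 32.0 (4 px * (10 - 2))
```

The per-timepoint organoid mean intensities would be computed analogously, averaging rather than summing within each organoid mask at each frame.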
ImageJ software (NIH) was used to measure the mean intensity of each organoid in a hyperstack image, and the average of the mean intensities at each time point was calculated. Expression data analysis For BioGPS expression data, the indicated genes were used as queries 6. Mouse data from the GeneAtlas MOE430 (gcrma) dataset were selected 7, and the raw data were downloaded from the download tab. Selected organ data were graphed. The mean and s.e.m. are shown from two data points. For purified IEC expression data, the dataset accession GDS3921 was searched in the GEO Dataset Browser at NCBI. This dataset consists of previously published expression data from purified IECs 7. Indicated genes were used as search terms for this dataset, and on the results page the graph image was selected to access the raw data page. Raw data from the value (count) chart were collated and plotted from the control mice that were not treated with antibiotics. Antibiotic-treated mouse data are not shown, but results were equivalent. The mean and s.e.m. are shown from six data points. Statistics Error bars represent the standard deviation of technical replicates, and bars indicate the median. Two-tailed unpaired t -test, two-tailed Mann–Whitney U -test, one-way ANOVA with Tukey's multiple comparison test and log-rank Mantel–Cox test were used for statistical analysis. P values ≤ 0.05 were considered significant. Statistical analysis was performed using GraphPad Prism v.5 and v.8 software and Microsoft Excel 2013. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All relevant data are included in the Article or its Supplementary Information files. More details are available from the corresponding author upon request. Source data are provided with this paper.
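The statistical comparisons listed above were run in GraphPad Prism and Excel; an equivalent sketch with scipy.stats on toy data (illustrative only, not the study data):

```python
# Hedged sketch of the two-group tests named in the Statistics section,
# using scipy.stats instead of GraphPad Prism. Toy measurements only.
from scipy import stats

group_a = [2.1, 2.5, 1.9, 2.3, 2.2]  # e.g. control replicates (toy values)
group_b = [3.4, 3.1, 3.6, 3.2, 3.5]  # e.g. treated replicates (toy values)

t_stat, p_t = stats.ttest_ind(group_a, group_b)            # two-tailed unpaired t-test
u_stat, p_u = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")  # two-tailed Mann-Whitney U
print(p_t < 0.05, p_u < 0.05)  # True True: both significant at alpha = 0.05
```

For three or more groups, `stats.f_oneway` plus a post-hoc multiple-comparison correction would correspond to the one-way ANOVA with Tukey's test mentioned above.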
Researchers have unmasked a component of the cell death process that could play a vital role in a better infection-fighting strategy. Scientists at the Duke University School of Medicine and the UNC School of Medicine partnered with researchers at the University of Virginia to identify the function behind caspase-7, an enzyme that is part of a cell's self-destruct program. While researchers have known the enzyme is involved in the process, its exact function has been unclear. The findings are published June 15 in the journal Nature. The researchers found that caspase-7 kicks off a Rube Goldberg series of events that allow the cell to die in an orderly fashion, said the study's senior author, Edward Miao, MD, Ph.D., a professor in the departments of Immunology and Molecular Genetics and Microbiology at Duke and previously an associate professor in the UNC Department of Microbiology & Immunology. An orderly cell death is essential for the immune response. Without the enzyme, however, the dying cell might violently explode and cause collateral damage. Miao said this research lays groundwork to explore exciting possibilities for therapeutic applications, especially if caspase-7 can be boosted or blocked. "There is still so much we need to understand about the basic components of our cells, what they do and why," Miao said. "If we can uncover this blueprint, it may become a map for understanding how disease moves in the body, allowing scientists to devise a more precise plan of action. With the discovery of caspase-7's role in the cell, we're one step closer to a complete schematic." In the study, the researchers found that caspase-7 serves as a timing device in cell death. It activates a protein, called acid sphingomyelinase or ASM, which kicks off a cell membrane repair mechanism, which in turn gives the cell enough time to get its affairs in order before dying. 
To identify the function of the caspase-7 enzyme, the team studied different infection models in genetically altered mice and cultured intestinal tissue. They looked at caspase-7's role in two types of orderly cell death—extrusion and apoptosis. Caspase-7's role was different in the context of different pathogens—Salmonella, Listeria, and a rare pathogen called Chromobacterium. With Salmonella, cells undergo an orderly form of cell death called extrusion. In this context, researchers found the absence of caspase-7 caused the unnecessary death of healthy cells, which were the collateral damage of nearby infected cells failing to detach themselves, or extrude, from their neighbors. With Chromobacterium and Listeria, dying cells normally undergo a process called apoptosis, or programmed cell death. The lack of caspase-7 enabled the bacteria to survive an immune system attack, presumably because their infected cell host didn't accomplish some apoptotic task before exploding. If caspase-7 can be manipulated, Miao said, a potential application is targeting the enzyme as a detonator of cell death to circumvent antibiotic resistance. Antibiotics are a standard and broad approach for fighting infection, but pathogens have devised strategies to adapt and resist. "Antibiotics don't take into account that each pathogen has its own strategy—some of them are living outside cells, some are living inside cells. If a pathogen is inside the cell," he said, "you could boost caspase-7 to allow the infected cell to die in the correct way. If you know the strategy that the pathogen is using, you can help the immune system by tweaking it in the correct direction." "On the other hand, if you have a pathogen traveling outside of the cell, and cells are exploding inappropriately, maybe we could boost caspase-7 to keep them alive, thereby preventing excessive damage," Miao said, noting that this approach might be effective against sepsis. 
Miao said this enzyme could also potentially play a major role in triggering the immune system to fight cancer. "Cancer cells are probably running the full Rube Goldberg machine and dying in an orderly manner," Miao said. "When the immune system comes and looks around, it sees that everything seems to be in order and then leaves. "If you could shut down the Rube Goldberg contraption, instead of putting itself away nice and neat, the tumor cell would be just dead on the floor," Miao said. "This might cause the immune system to become alarmed and activated. Theoretically, that could cause the immune system to attack a tumor it would otherwise ignore."
10.1038/s41586-022-04825-8
Biology
New Cas9 model maps DNA cutting behavior for the first time
Behrouz Eslami-Mossallam et al, A kinetic model predicts SpCas9 activity, improves off-target classification, and reveals the physical basis of targeting fidelity, Nature Communications (2022). DOI: 10.1038/s41467-022-28994-2 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-022-28994-2
https://phys.org/news/2022-03-cas9-dna-behavior.html
Abstract The S. pyogenes (Sp) Cas9 endonuclease is an important gene-editing tool. Sp Cas9 is directed to target sites based on complementarity to a complexed single-guide RNA (sgRNA). However, Sp Cas9-sgRNA also binds and cleaves genomic off-targets with only partial complementarity. To date, we lack the ability to predict cleavage and binding activity quantitatively, and rely on binary classification schemes to identify strong off-targets. We report a quantitative kinetic model that captures the Sp Cas9-mediated strand-replacement reaction in free-energy terms. The model predicts binding and cleavage activity as a function of time, target, and experimental conditions. Trained and validated on high-throughput bulk-biochemical data, our model predicts the intermediate R-loop state recently observed in single-molecule experiments, as well as the associated conversion rates. Finally, we show that our quantitative activity predictor can be reduced to a binary off-target classifier that outperforms the established state-of-the-art. Our approach is extensible, and can characterize any CRISPR-Cas nuclease – benchmarking natural and future high-fidelity variants against Sp Cas9; elucidating determinants of CRISPR fidelity; and revealing pathways to increased specificity and efficiency in engineered systems. Introduction CRISPR-Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats—CRISPR-associated protein 9) has become a ubiquitous tool in the biological sciences 1 , 2 , with applications ranging from live-cell imaging 3 and gene knockdown/overexpression 4 , 5 to genetic engineering 6 , 7 and gene therapy 8 , 9 . Streptococcus pyogenes ( Sp ) Cas9 can be programmed with a ~100 nucleotide (nt) single-guide RNA (sgRNA) to target DNAs based on the level of complementarity to a 20 nt segment of the sgRNA 10 . 
Wildtype Sp Cas9 (henceforth Cas9) induces site-specific double-stranded breaks and the catalytically dead Cas9 (dCas9) mutant allows for binding without cleavage 3 , 5 . Apart from complementary on-targets, Cas9-sgRNA also binds and cleaves non-complementary off-targets 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 . Off-target cleavage risks deleterious genomic alterations, which has so far impeded the widespread implementation of the CRISPR toolkit in human therapeutics 19 . Strong off-target sites are identified in silico by a growing set of tools. These tools use bioinformatics 20 , 21 , machine learning 22 , 23 , or heuristic 12 , 14 , 24 , 25 approaches to rank genomic sites based on distinctive off-target activity scores. Though such models can identify strong off-targets, they are not quantitative and cannot assess activity on the many lesser off-targets; nor can they predict how activity changes with exposure time and enzyme concentration—even though these parameters are frequently exploited to limit off-target activity in cells 26 . To implement quantitative activity prediction, Cas9 targeting must be modelled in physical terms. Existing physical models 24 , 27 , 28 assume binding equilibration before cleavage, and it remains unclear what predictive power such approaches can ultimately deliver in this non-equilibrium system 29 , 30 . To account for the non-equilibrium nature of the targeting reaction, we construct a mechanistic model that captures binding and cleavage reactions in kinetic terms. To gain insights into general mechanisms, we train and validate our model on high-throughput datasets that capture both binding and cleavage in bulk experiments 15 , 31 . Though we restrict our training to off-targets with two or fewer mismatches, we accurately predict the activities on all more highly mismatched off-targets in the same datasets, as well as those reported in two independent high-throughput datasets 11 . 
To reveal the physical basis of Cas9 fidelity on genomic scales, we extract the free-energy landscapes that control PAM binding, strand-replacement, and cleavage on any target. Our characterization of Cas9 supports the notion that observed differences in binding and cleavage activities 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 stem from a relatively long-lived DNA-bound RNA-DNA hybrid (R-loop) intermediate. This R-loop intermediate was recently observed directly in single-molecule experiments 42 , and our model predicts both its location and its conversion rates. Though the strength of our model lies in its ability to calculate how (d)Cas9 activity evolves in time under various conditions, we also sought to compare our approach to existing binary off-target classifiers that identify strong off-targets. To this end, we reduce our quantitative activity predictor to a binary off-target classifier that outperforms the leading tools used today 12 , 24 , 28 , 43 . Results The kinetic model In Fig. 1a we show the reaction pathway that underpins the Cas9 targeting reaction on every target 44 . The reaction starts with the Cas9-sgRNA ribonucleoprotein complex exiting the solution state to specifically bind to a 3-nt protospacer adjacent motif (PAM) DNA sequence—canonically 5'-NGG-3'—via protein-DNA interactions 44 , 45 . Binding to the PAM sequence (state 0) opens the DNA double helix, and allows the first base of the target sequence to hybridize with the sgRNA 44 , 45 , forming the first R-loop state (state 1). The DNA double helix further denatures as the RNA-DNA hybrid is extended in the guide-target strand-replacement reaction 46 , 47 , 48 , 49 (states 2-20). The hybrid grows and shrinks in single-nucleotide steps, until it is either reversed and Cas9 dissociates, or it reaches completion at 20 base pairs (bp) in state 20. If the full hybrid is formed, Cas9 can use its HNH and RuvC nuclease domains to cleave both DNA strands 50 . Fig. 
1: The reaction scheme and the implications of the model assumptions. a The general microscopic reaction scheme for PAM (blue rectangle) binding from solution, followed by strand replacement and eventual cleavage (Cas9 only). The bound states are labeled 0–20, starting with the PAM-bound state and ending with the state having a fully open R-loop (20-bp hybrid). b An example on-target free-energy landscape \(F_n\) (pink), and the resulting free-energy landscape when using our mechanistic-model assumptions on an off-target where mismatches enter the hybrid at lengths 3 and 15 bp (blue). Each mismatch (dashed red line) has an energetic cost \(\epsilon_n\) (red arrow) added onto the free energy of all later R-loop states. The solution state is chosen as a reference for the free energy, and set to \(0\,k_{\mathrm{B}}T\) (black point). If we know the conversion rates in Fig. 1a for a particular guide and target, the reaction scheme can be solved to calculate the binding and cleavage probabilities at any time (Methods). Fully parameterizing the model over all guide and target sequences requires the estimation of ~10^26 rates. To render parameter estimation tractable, we make four mechanistic-model assumptions: (1) Mismatch positions are more important than mismatch types (e.g. G-G vs. G-A). This can be directly inferred from data 11,15, and we treat all 12 mismatch types equally. (2) Mismatch energies are determined by local interactions. The energetic cost of multiple mismatches is taken to be equal to the sum of the energetic costs of the individual mismatches. (3) dCas9 differs from Cas9 only in that dsDNA bond-cleavage catalysis is completely suppressed (\(k_{\mathrm{cat}}=0\)); all other rates are taken to be identical 51,52. (4) All selectivity is governed by the hybrid-bond-reversal rates. Hybrid-bond-formation rates are treated as equal, independent of complementarity and location.
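Assumptions 1, 2 and 4 make every off-target landscape a simple function of the on-target landscape: a mismatch at hybrid position n adds its penalty to all states of hybrid length ≥ n, and penalties for multiple mismatches add. A minimal sketch of this construction, using illustrative (not fitted) numbers:

```python
import numpy as np

def offtarget_landscape(F_on, penalties, mismatch_positions):
    """Off-target free-energy landscape under model assumptions 1-2.

    F_on: on-target free energies for states 0..20 (PAM-bound state plus
          the 20 R-loop states), in units of kBT.
    penalties: mismatch penalties delta_eps_1..delta_eps_20 per position.
    mismatch_positions: 1-based hybrid positions carrying a mismatch.

    A mismatch at position n raises every state with hybrid length >= n
    by delta_eps_n; penalties for multiple mismatches simply add.
    """
    F = np.array(F_on, dtype=float)
    for n in mismatch_positions:
        F[n:] += penalties[n - 1]  # index n holds the state with an n-bp hybrid
    return F

# Illustrative numbers only (the fitted values are those of Fig. 4a, b):
F_on = np.linspace(2.0, -8.0, 21)  # hypothetical on-target landscape
deps = np.full(20, 5.0)            # penalties of ~5 kBT, as in Fig. 4b
F_off = offtarget_landscape(F_on, deps, [3, 15])  # the off-target of Fig. 1b
```

With the two mismatches of Fig. 1b, states 3–14 are raised by one penalty and states 15–20 by two, reproducing the staircase offset between the pink and blue landscapes.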
These assumptions reduce the total number of microscopic parameters to 44 (see Methods): the (concentration-dependent) rate of PAM binding from solution (\(k_{\mathrm{on}}\)) and the associated free-energy gain (\(F_0\)); a single internal forward bond-formation rate (\(k_{\mathrm{f}}\)); 20 free energies dictating R-loop progression at the on-target (\(F_1,\ldots,F_{20}\)); 20 free-energy penalties for mismatches at different R-loop positions (\(\delta\epsilon_1,\ldots,\delta\epsilon_{20}\)); and the rate at which the final cleavage reaction is catalyzed for Cas9 (\(k_{\mathrm{cat}}\)). Once the model parameters are estimated, all possible off-target free energies can be directly calculated using assumptions 1–4 above. In Fig. 1b we illustrate the calculation taking us from the on-target (pink) to the off-target (blue) free-energy landscape, with mismatches entering the hybrid at the 3rd and 15th bp. How to translate between free energies and rates is detailed in Methods. Base-pairing interactions, protein-DNA interactions 52, and induced conformational changes 50,51,53,54 all contribute to the stability of the Cas9-sgRNA-DNA complex. To account for the varying nature of these interactions, we allow for varying gains and losses in the on-target free-energy landscape as the hybrid is extended. These variable gains and losses allow for the formation of metastable states on the on-target, and constitute an essential extension of our previous fixed-gain model for RNA-guided nuclease kinetics 30, as well as of models describing DNA displacement reactions occurring in solution 55,56,57,58.

Training on binding and cleavage for moderately mismatched targets

We seek to reveal general properties of Sp Cas9 DNA targeting on genomic scales. To this end, we train and validate our model on data from two highly reproducible bulk-biochemical experiments performed on a large library of moderately to highly mismatched off-targets.
The first set 15 (NucleaSeq) has 97% correlation between replicated experiments, and estimates the effective cleavage rates (\(k_{\mathrm{clv}}^{\mathrm{eff}}\)) for a library of off-targets exposed to Cas9-sgRNA for 16 hours. The second set 15,31 (CHAMP) has 94% correlation between replicated experiments, and reports the effective association constant (\(K_{\mathrm{A}}^{\mathrm{eff}}\)) over the same library and guide, but this time exposed to dCas9-sgRNA for 10 min. In Methods we detail how the experiments are modeled. We estimate the model parameters by minimizing the total experimental-error-weighted residual between prediction and experiment for off-targets (see Methods) with no more than two mismatches in the NucleaSeq (Fig. 2a–c) and CHAMP (Fig. 2d–f) experiments. The rates and association constants from different types of mismatches are averaged (see Methods and Supplementary Data 1), and the optimal solution is sought with a Simulated Annealing algorithm 59 (see Methods). Fig. 2: Training on cleavage and binding for moderately mismatched targets. a Training data (triangles) for effective cleavage rates (NucleaSeq) on single-mismatch targets, and the model estimates (line). b Training data (upper-left triangle) for effective cleavage rates on double-mismatch targets, and the model estimates (lower-right triangle). c Correlation plot for all effective cleavage rate data used for training (single- and double-mismatch targets). d Training data (triangles) for effective association constants (CHAMP) on single-mismatch targets, and the model estimates (line). e Training data (upper-left triangle) for effective association constants on double-mismatch targets, and the model estimates (lower-right triangle). f Correlation plot for all effective association constant data used for training (single- and double-mismatch targets). All data is averaged over mismatch type (see Supplementary Data 1).
The quoted correlation coefficients are Pearson correlation coefficients, and correlation plots are displayed with log scales to show the quantitative agreement also for weak targets. The dashed line in the correlation plots corresponds to perfect quantitative prediction. The two training sets differ significantly (Fig. 2, and Supplementary Fig. 1a). Our model still reproduces effective cleavage rates (Fig. 2a–c) and effective association constants (Fig. 2d–f) with Pearson correlations of 93% and 98%, respectively, and quantitatively captures the difference between binding and cleavage activity. The time and concentration dependence of (d)Cas9 activity can be explored through a dashboard we provide (see Code Availability).

Validation on highly mismatched targets and independent data sets

Apart from the data we use for training (two or fewer mismatches), the NucleaSeq 15 and CHAMP 15,31 sequence libraries also include block-mismatched targets with more than two mismatches. In Fig. 3a, b we show that we quantitatively predict effective association constants on these highly mismatched targets at a correlation of 98%. Our method also successfully separates out the single dominating off-target present among highly mismatched targets in the NucleaSeq experiments (Supplementary Fig. 1b), resulting in a perfect correlation. Fig. 3: Validation on highly mismatched targets and independent HiTS-FLIP data. a Validation data (upper-left triangle) for effective association constants (CHAMP) on block-mismatched targets, and model estimates (lower-right triangle). The two terminal mismatch positions in the block are marked on the axes. b Correlation plot between measured effective association constants and model predictions on block-mismatched targets. c Validation data (triangles) for association rates (HiTS-FLIP data set 11) on single-mismatch targets, and model estimates (line).
d Validation data (upper-left triangle) for association rates on double-mismatch targets, and model estimates (lower-right triangle). e Correlation plot for all positive association rates, including moderately (1–2 mismatches, dark purple) and highly (3–20 mismatches, light purple) mismatched targets. f Validation data (triangles) for dissociation rates (HiTS-FLIP data set 11) on single-mismatch targets, and model estimates (line). The missing mismatch-averaged dissociation rates in the seed are negative. g Validation data (upper-left triangle) for dissociation rates on double-mismatch targets, and model estimates (lower-right triangle). h Correlation plot for all positive dissociation rates, including moderately (1–2 mismatches, dark green) and highly (3–20 mismatches, light green) mismatched targets. Mismatch-averaged rates dominated by negative scores are excluded from the analysis, and all data is averaged over mismatch type (see Methods and Supplementary Data 1). The quoted correlation coefficients are Pearson correlation coefficients, and correlation plots are displayed with log scales to show the quantitative agreement also for weak targets. The dashed lines in the correlation plots correspond to perfect quantitative prediction. To further validate our model, we test against two data sets from HiTS-FLIP experiments reported in the literature 11. The first independent validation set records the association rate relative to the on-target, estimated over 1500 seconds of exposure to dCas9-sgRNA at 1 nM concentration (Fig. 3c–e). The second independent validation set records the dissociation rate relative to the on-target, estimated over 1500 seconds following 12 hours of exposure to a saturating dCas9-sgRNA concentration (Fig. 3f–h). Our model quantitatively captures the relative association rates for all reported targets with 82% correlation (Fig. 3e). For the relative dissociation rates, the correlation is more modest at 46% (Fig.
3h), and the quantitative agreement is lost in some regions (Fig. 3f–h). We still seem to capture the general trends on moderately mismatched targets (Fig. 3f, g), though our model will never give association rates above, or dissociation rates below, those of the on-target, as is reported for some highly mismatched targets (Fig. 3e, h).

Physical characterization of SpCas9 and the intermediate R-loop state

As our model parameters carry physical meaning, estimating them from data amounts to characterizing the system in physical terms. For Cas9, it has been experimentally shown that R-loop progression is controlled by an intermediate metastable state on the on-target 42. We expect this intermediate state to show up as a local minimum in our estimated on-target free-energy landscape. The free energy of any metastable state will have a strong influence on the observed dynamics, and we expect such energies to be well constrained by the data. We expect barriers between metastable states to be harder to resolve, as the details of barrier regions matter less for the observable dynamics. We here report 33 near-equivalent optimization runs that all resulted in a residual within 15% of the best solution found (see Supplementary Video 1). In Fig. 4a we plot the resulting on-target free-energy landscapes, with the optimal solution highlighted in pink. As expected, we see metastable states in the on-target free-energy landscape. With Cas9 in solution or PAM-bound, we have a well-defined free-energy minimum where the R-loop is closed (C). The on-target free energy (Fig. 4a) increases substantially when forming the first hybrid bp in state 1, and remains relatively high and poorly constrained up to and including state 8. The energies of states 9–12 are well constrained, and among them we find a second local minimum. We identify these states as belonging to an intermediate (I) R-loop state.
For hybrids of length 13 to 19 bp we again see an ill-constrained barrier, ending when we enter a well-constrained local minimum of a fully formed hybrid at state 20. This last minimum defines the open (O) R-loop. Fig. 4: Physical parameters estimated from NucleaSeq and CHAMP datasets. a The on-target free-energy landscape \(F_n\) for (d)Cas9-sgRNA at the reference concentration 1 nM. The solution state (black dot) is taken as a reference for the free energy, and set to \(0\,k_{\mathrm{B}}T\). State 0 is the PAM-bound state, and the remaining states are the R-loop states with hybrid lengths 1–20 bp. Three well-defined local minima separated by barriers are visible, indicating that there are three metastable states in the system. b Energetic penalties \(\delta\epsilon_n\) incurred by mismatches as a function of position \(n\) in the hybrid. c The estimates for the on-rate at 1 nM Cas9-sgRNA concentration (\(k_{\mathrm{on}}\)), the internal forward rate (\(k_{\mathrm{f}}\)), and the bond-cleavage catalysis rate (\(k_{\mathrm{cat}}\)). In all figures, the 33 near-equivalent solutions (see text) are plotted in grey, with the optimal solution highlighted in pink (Supplementary Data 1). Mismatch penalties are all around \(5\,k_{\mathrm{B}}T\) (Fig. 4b), but show reproducible variation along the hybrid. Comparing Fig. 2a, d with Fig. 4b, it is clear that variations in mismatch penalties over the first 8 states correlate strongly with the measured effective cleavage rate/dissociation constant on targets with a single seed mismatch at the corresponding hybrid position. It is not clear if these variations are due to varying interactions with the protein, or reflect the fact that the possible mismatch types vary with position. In Fig. 4c we show the remaining rates needed to predict Cas9 cleavage activity at any target, time, and Cas9-sgRNA concentration (see Methods).
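The three metastable states read off above are simply the local minima of the estimated landscape, so they can be located mechanically. A minimal sketch on a hypothetical landscape shaped like Fig. 4a (the numbers are illustrative, not the fitted values):

```python
import numpy as np

def metastable_states(F):
    """Indices of local minima in a 1-D free-energy landscape; boundary
    states count if they lie below their single neighbour. These are the
    candidate metastable states."""
    F = np.asarray(F, dtype=float)
    minima = []
    for i in range(len(F)):
        below_left = i == 0 or F[i] < F[i - 1]
        below_right = i == len(F) - 1 or F[i] < F[i + 1]
        if below_left and below_right:
            minima.append(i)
    return minima

# Hypothetical landscape (kBT) for states 0..20, shaped like Fig. 4a:
# a closed minimum at the PAM-bound state 0, an intermediate one near
# state 10, and an open one at the full 20-bp hybrid.
F = [-2.0, 3.0, 3.5, 4.0, 4.2, 4.1, 3.9, 3.5, 2.0, 1.0,
     0.5, 1.0, 2.0, 3.0, 3.5, 3.8, 3.6, 3.0, 1.0, -3.0, -5.0]
```

On this toy landscape the routine returns states 0, 10 and 20, i.e. the closed (C), intermediate (I) and open (O) R-loop states.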
R-loop dynamics captures single-molecule experiments

The recent direct observation of R-loop dynamics between metastable states 42 allows us to further test our model against quantitative single-molecule data. To this end, we define a coarse-grained model (Fig. 5a) and calculate the effective rates between metastable states from our microscopic free-energy landscapes (see Methods). In Supplementary Fig. 2 we show that predictions based on our coarse-grained model replicate those of the microscopic model. Fig. 5: Metastable states control the targeting dynamics. a A coarse-grained version of the reaction scheme shown in Fig. 1a. Apart from the unbound and post-cleavage states, the targeting-reaction pathway is reduced to just three states: PAM bound and R-loop closed (0-bp hybrid), intermediate R-loop (7–13-bp hybrid), and open R-loop (20-bp hybrid). b Microscopic free-energy landscape for the on-target exposed to 1 nM (d)Cas9-sgRNA (Fig. 4a), with coarse-grained states and rates indicated in black. c The calculated (see Methods) coarse-grained forward and backward rates on the on-target. Purple triangles are rates from Ivanov et al. 42, when available at zero torque. d Microscopic free-energy landscape for an off-target with a mismatch at position 3 (blue), together with the on-target free-energy landscape (pink). Red arrow indicates the free-energy penalty \(\delta\epsilon_3\) at the mismatch, and black arrow indicates the resulting shift in barrier height. e The calculated coarse-grained forward and backward rates on an off-target with a mismatch at position 3. Orange arrow highlights the rate that changed considerably compared to the on-target. Purple triangles are rates from Ivanov et al. 42, when available at zero torque. f Microscopic free-energy landscape for an off-target with a mismatch at position 15 (blue), together with the on-target free-energy landscape (pink).
Red arrow indicates the free-energy penalty \(\delta\epsilon_{15}\) at the mismatch, and black arrow indicates the resulting shift in barrier height. g The calculated coarse-grained forward and backward rates on an off-target with a mismatch at position 15. Orange arrow highlights the rate that changed considerably compared to the on-target. In Fig. 5c, e, and g, the central line represents the median, the boxes the interquartile range, and the whiskers the full range among our 33 near-equivalent solutions. Using effective rates between metastable states, we can rationalize the broad strokes of Cas9 fidelity by considering a few important examples 42. For on-targets (Fig. 5b), the transition between the PAM-bound state and the intermediate R-loop state is reversible (\(k_{\mathrm{PI}}\approx k_{\mathrm{IP}}\)) (Fig. 5c). Complexes that enter the intermediate state typically also enter the fully opened state (\(k_{\mathrm{IP}}\ll k_{\mathrm{IO}}\)). The transition from the intermediate to the open R-loop configuration is irreversible (\(k_{\mathrm{IO}}\gg k_{\mathrm{OI}}\)), and entering the open configuration guarantees cleavage (\(k_{\mathrm{OI}}\ll k_{\mathrm{cat}}\)). Taken together, the on-target reaction is essentially unidirectional toward cleavage once the intermediate state is entered. The transition into the intermediate R-loop state is rate-limiting (\(k_{\mathrm{PI}}\ll k_{\mathrm{IO}}\ll k_{\mathrm{cat}}\)) for cleavage. Mismatches between the target DNA and the sgRNA have differential effects on R-loop propagation depending on position. A PAM-proximal mismatch (positions 1–8) (Fig. 5d) strongly suppresses the rate of transition from the closed to the intermediate R-loop state (Fig. 5e). In contrast, a PAM-distal mismatch (positions 12–17) (Fig.
5f) limits the effective rate of cleavage by reducing the intermediate-to-open transition rate (Fig. 5g) and allowing for re-closure of the R-loop before the open state is entered (\(k_{\mathrm{IO}}\approx k_{\mathrm{IP}}\)). These observations are in agreement with experimental observations 42, and in Fig. 5c, e we use purple triangles to indicate measured rates 42 when available at zero torque. We quantitatively predict the conversion rates out of the intermediate R-loop state. The model also captures the position of the on-target intermediate state as being around hybrid lengths 9–12. Our model does not capture the rate of the open-to-intermediate transition, and future work will have to determine if this is due to a difference in experimental conditions or because our choice of training data is ill-suited to determine the free energies past the intermediate state. Our model predicts rates on all off-targets, and so extends and refines the long-established rule of thumb that off-target rejection in the PAM-proximal seed requires only one mismatch, while off-target rejection outside the seed region requires multiple mismatches 10. In particular, our model quantifies the intermediate activity resulting from a PAM-distal mismatch, and so enables prediction of activity titration.

R-loop dynamics resembles conformational dynamics

Next, we wondered what structural properties of Cas9 give rise to the free-energy landscape of Fig. 4a. A comparison between DNA-bound and unbound Cas9-sgRNA structures has revealed that Cas9 repositions its HNH and RuvC nuclease domains to catalyze cleavage 45,60,61. Ensemble FRET experiments detected two dominant Cas9 conformers with distinct HNH states 50, and single-molecule FRET studies have identified a third intermediate conformer 51,53,54. The relative position and occupancy of the HNH states is affected by R-loop mismatches 51,53,54, and Ivanov et al.
42 suggest that the intermediate R-loop state is linked to the intermediate structural state seen in FRET experiments 51. To test this hypothesis, we mimicked the experiments of Dagdas et al. 51 and considered the time evolution of the occupancy of our metastable R-loop states for two target sequences (Fig. 6). The HNH domain completes its conformational change within seconds after Cas9-sgRNA binds to on-target DNA 51, and our model demonstrates a similar behavior for R-loop progression (Fig. 6a). The intermediate structural state is visited only transiently 51, as is the intermediate R-loop state in our model (Fig. 6a). Compared to the on-target, PAM-distal mismatches maintain the entry rate into the intermediate structural state, while increasing the time spent in this state 51; again in close agreement with our findings for the intermediate and open metastable R-loop states in the presence of a PAM-distal mismatch (Fig. 6b). Taken together, our model supports the notion that the intermediate R-loop state is linked to the intermediate structural state seen in FRET experiments. Fig. 6: Dynamics among metastable states resemble structural dynamics. a Time-resolved relative occupancy for the on-target among the closed R-loop state (solution and PAM bound), the intermediate R-loop state, and the open R-loop and cleaved state (cf. Fig. 2d of Dagdas et al. 51); b Relative occupancy at different time points for an off-target with the last 3 PAM-distal base pairs mismatched (cf. Fig. 2f of Dagdas et al. 51).

Kinetic modelling improves genome-wide off-target prediction

Current methods 12,14,20,21,22,23,24,25,28,43 for identifying strong off-targets rank genomic sequences according to various measures of activity. They do not quantitatively predict biochemically measurable parameters, nor do they normally capture changes in conditions or activity over time.
Our approach overcomes these limitations, and we do not suggest abandoning these benefits in order to construct a binary off-target classifier. Still, to strengthen the case for including the full non-equilibrium nature of the problem in any Cas9 modelling, we reduce our quantitative kinetic model to a binary classifier (referred to as the kinetic classifier) and test how well it performs against three established state-of-the-art off-target predictors: a recent benchmarking of models 28 shows the CRISPRoff classifier to outperform the competition, so we first test against this tool; second, we test against the more recent uCRISPR 24 tool, which is based on hybrid energetics and has not been tested against CRISPRoff; lastly, we test against the Cutting Frequency Determination (CFD) score 12, since it is a widely used tool for off-target classification. To compare our model against the three selected off-target classifiers, we choose to rank all genomic sites based on cleavage activity in the low enzyme-concentration limit (see Methods). We make our comparison over all canonical PAM sites in the human genome. True-positive off-targets are collected from sequencing-based cleavage experiments that used industry-standard sgRNAs and reported multiple off-target cleavage sites 35,36,37,38,40,41,62 (Supplementary Table 1). We tested how well our kinetic model’s ranking of activity compares to that of the CFD score 12, CRISPRoff 28, and uCRISPR 24. For each sgRNA, we separately tested the models by using the union (sites found in any experiment) and intersection (sites found in every experiment) sets of the reported off-target sites as true positives. We perform precision-recall (PR) analysis (Supplementary Fig. 3) rather than using receiver-operating characteristics (Supplementary Fig. 4), since the datasets are highly unbalanced, with many more true negatives than true positives.
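A PR curve can be computed from any ranked list of activity scores by sweeping a threshold down the ranking. A self-contained sketch with toy scores and labels (not the genome-wide data):

```python
def pr_curve(scores, labels):
    """Precision-recall points obtained by sweeping a threshold down a
    ranked list of activity scores; labels mark experimentally validated
    off-targets (the true positives)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp, points = 0, []
    for rank, i in enumerate(order, start=1):
        tp += labels[i]
        points.append((tp / rank, tp / total_pos))  # (precision, recall)
    return points

def max_f1(points):
    """Best harmonic mean of precision and recall along the curve."""
    return max(2 * p * r / (p + r) for p, r in points if p + r > 0)

# Toy ranking: five candidate sites, two of them experimentally validated.
points = pr_curve([0.9, 0.8, 0.3, 0.2, 0.1], [True, False, True, False, False])
```

On this toy ranking the first threshold gives precision 1.0 at recall 0.5, and the best F1 along the curve is 0.8; the genome-wide analysis of Fig. 7 applies the same sweep to millions of candidate PAM sites.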
Figure 7a shows the PR curve when models are tested against the union of all reported off-targets while targeting the HBB gene. As the threshold for what is judged a strong off-target is swept, PR curves display the fraction of predicted off-targets that are found experimentally (precision) against the fraction of experimentally found off-targets that are also predicted (recall). Our kinetic classifier typically produces higher precision at all recalls, outperforming the other classifying schemes for the union set on the HBB gene. More importantly, the kinetic classifier also outperforms the leading off-target predictors for highly mismatched genomic off-targets of other sgRNAs: it performs best on the majority of targets in every pairwise matchup on both union (Fig. 7b, c) and intersection (Fig. 7d, e) sets, irrespective of whether max. F1 or area-under-the-curve (AUC) scores are used. Fig. 7: Genome-wide off-target classification. a PR curves on the HBB gene using the CFD score (light purple), uCRISPR score (purple), CRISPRoff (dark purple), and our kinetic classifier (green). The precision and recall are calculated over all targets in the genome with a canonical PAM site, taking all experimentally validated off-targets as true positives. b max. F1 scores for target sites EMX1, FANCF, HBB, RNF2 and VEGFA site 1, using all experimentally identified off-targets as true positives (union set) (Supplementary Fig. 3). c AUC scores for the same target sites and true positives as in Fig. 7b. d max. F1 scores using off-targets identified in all experiments as true positives (intersection set) (Supplementary Fig. 3). e AUC scores for the same target sites and true positives as in Fig. 7d. Matching the models pairwise, we can determine which model performs best overall. Using max. F1 scores to count wins on union sets: kinetic:uCRISPR = 4:1; kinetic:CFD = 5:0; kinetic:CRISPRoff = 4:1.
Using AUC scores to count wins on union sets: kinetic:uCRISPR = 5:0; kinetic:CFD = 5:0; kinetic:CRISPRoff = 3:2. Using max. F1 scores to count wins on intersection sets: kinetic:uCRISPR = 2:1; kinetic:CFD = 2:1; kinetic:CRISPRoff = 2:1. Using AUC scores to count wins on intersection sets: kinetic:uCRISPR = 2:1; kinetic:CFD = 3:0; kinetic:CRISPRoff = 2:1. The kinetic classifier wins every pairwise matchup, irrespective of whether we use max. F1 or AUC scores, on both union and intersection sets.

Discussion

Training our model (Fig. 1) of Sp Cas9 target activity on moderately mismatched targets, we extract the physical parameters (Fig. 4) that control activity on any target (Figs. 2 and 3). Going beyond present-day binary off-target classification schemes, we quantitatively predict cleavage and binding activity as a function of both time and Sp Cas9-sgRNA concentration. We show that Sp Cas9’s targeting reaction contains an intermediate R-loop state, with both a position and conversion rates that agree with single-molecule experiments 42 (Fig. 5). Mismatches affect the dynamics of the R-loop states (Fig. 6) in a manner similar to how they affect the configurational states of Sp Cas9’s nuclease domains 42,51,53. Based on this, we lend support to the notion that R-loop formation is tightly coupled to protein conformation—pointing toward the relevant structure-function relation for the most important RNA-guided nuclease in use today. Though our model captures the abundant low-activity off-targets that are discarded by binary classifiers, we sought to demonstrate the general utility of kinetic modelling by reducing our quantitative activity predictor to a binary classification tool. The resulting kinetic classifier outperforms established state-of-the-art classification tools on canonical PAM sites in the human genome (Fig. 7). In a recent study, Jost et al. 5 demonstrated that a series of mismatched guides can be used to titrate gene expression using CRISPRa/CRISPRi.
Wildtype Sp Cas9 can also be (effectively) inactivated with PAM-distal mismatches in the guide 63. Our model can guide such titration of Sp Cas9-sgRNA inactivation by careful placement of mismatches. Our approach can also be used to calculate the total off-target activity over a genome, and so inform the design of sgRNAs for novel gene targets. For simplicity and robustness, we built our model to exclude mismatch-type parameters. This allows for extensive training using datasets based on a single guide sequence and off-target DNAs containing up to two mismatches. The limited set of adjustable model parameters (44 in total) and the efficient data usage (422 data points used for training) do not seem to limit the model’s applicability (Figs. 2, 3, 7). The success of our low-complexity model strongly suggests that the path to increased predictive power and therapeutic relevance runs through bottom-up modelling of RNA-guided nucleases in kinetic terms. Taken together, we have shown that our mechanistic and kinetic model gives biophysical insight and quantitative predictive power far beyond the training sets. This predictive power is only expected to increase when including sequence features and allowing for alternative PAM sequences in future modelling efforts. Sp Cas9 is only one of many RNA-guided nucleases with biotechnological applications, and other CRISPR-associated nucleases (such as Cas12a, Cas13 and Cas14) offer a diversified genome-engineering toolkit 15,64,65,66,67,68,69. These nucleases can all be characterized with our approach, and it will be especially interesting to compare the free-energy landscape of our Sp Cas9 benchmark to that of engineered 41,54,70 and natural (e.g. N. meningitidis Cas9 71) high-fidelity Cas9 variants.

Methods

Modelling of the (d)Cas9 targeting reaction

We consider a single DNA target sequence with a PAM, in contact with (d)Cas9-sgRNA in solution at fixed concentration (Fig. 1a).
(d)Cas9-sgRNA binding to the PAM site is assumed to be first order,

$$k_{\mathrm{on}}=k_{\mathrm{on}}^{\mathrm{ref}}\,[\text{Cas9-sgRNA}],$$

where [Cas9-sgRNA] is the concentration of active complexes relative to some reference concentration (we use 1 nM). Binding is followed by a Cas9-mediated strand-exchange reaction between the sgRNA and the DNA. Once a 20-bp hybrid is formed, Cas9 can cleave the DNA, while dCas9 cannot. We model target recognition as a stochastic hopping process along a sequence of states: target unbound (\(n=-1\)), PAM bound (\(n=0\)), and strand exchange (\(n=1,2,\ldots,20\)). We use the column vector \(\mathbf{P}(t)=(P_{-1}(t),\ldots,P_{20}(t))^{T}\) to represent the probabilities of being in the various states at time \(t\). The evolution of the probabilities is captured by the Master Equation

$$\partial_t\mathbf{P}(t)=\mathbf{K}\cdot\mathbf{P}(t),$$

where \(\mathbf{K}\) is a tri-diagonal rate matrix. Letting \(k_{n}^{\mathrm{f}}\) be the forward (\(n\to n+1\)) transition rate and \(k_{n}^{\mathrm{b}}\) the backward (\(n\to n-1\)) transition rate (Fig. 1a), and defining \(k_{-1}^{\mathrm{b}}=0\), we can give the elements of \(\mathbf{K}\) as

$$\mathbf{K}_{nm}=\left\{\begin{array}{ll}k_{n-1}^{\mathrm{f}}, & m=n-1\\ -(k_{n}^{\mathrm{f}}+k_{n}^{\mathrm{b}}), & m=n\\ k_{n+1}^{\mathrm{b}}, & m=n+1\\ 0, & |n-m|\ge 2.\end{array}\right.$$

The Master Equation has the formal solution

$$\mathbf{P}(t)=\exp(\mathbf{K}t)\cdot\mathbf{P}(0),$$

which can be computed numerically, given any set of rates \(\mathbf{K}\) and initial probabilities \(\mathbf{P}(0)\). The above expression, with initial probabilities and rates adjusted to experimental conditions (see below), allows us to capture the full time-dependent evolution of the targeting reaction in quantitative terms.
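As a concrete illustration, the formal solution can be evaluated for any rate matrix. A minimal sketch with a hypothetical three-state chain (toy rates, not the fitted parameters), solving \(\mathbf{P}(t)=\exp(\mathbf{K}t)\mathbf{P}(0)\) by eigendecomposition:

```python
import numpy as np

def rate_matrix(kf, kb):
    """Tridiagonal generator K for the hopping process of Fig. 1a.
    kf[n] is the forward rate out of state n, kb[n] the backward rate out
    of state n+1. Columns sum to zero, so probability is conserved."""
    N = len(kf) + 1
    K = np.zeros((N, N))
    for n in range(N - 1):
        K[n + 1, n] += kf[n]      # gain of state n+1 from state n
        K[n, n] -= kf[n]          # forward loss of state n
        K[n, n + 1] += kb[n]      # gain of state n from state n+1
        K[n + 1, n + 1] -= kb[n]  # backward loss of state n+1
    return K

def propagate(K, P0, t):
    """P(t) = exp(Kt) P(0), evaluated via eigendecomposition (the
    generator of a birth-death chain is diagonalizable)."""
    w, V = np.linalg.eig(K)
    return (V @ (np.exp(w * t) * np.linalg.solve(V, P0))).real

# Hypothetical three-state toy: unbound -> PAM bound -> 1-bp R-loop.
kf = [0.1, 1.0]  # assumed forward rates (s^-1)
kb = [0.5, 2.0]  # assumed backward rates (s^-1)
Pt = propagate(rate_matrix(kf, kb), np.array([1.0, 0.0, 0.0]), t=10.0)
```

Because the columns of \(\mathbf{K}\) sum to zero, the occupancies in `Pt` sum to one at all times; adding a cleaved state with rate \(k_{\mathrm{cat}}\) out of the last bound state follows the same pattern.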
Parameter reduction

Based on mechanistic-model assumption 1, we average the data over mismatch types (see below), and only keep track of whether there is a match or a mismatch at each position. Model assumption 3 means that the model for dCas9 is the same as for Cas9, but with \(k_{20}^{\mathrm{f}}=0\). Model assumption 4 implies that \(k_{0}^{\mathrm{f}}=k_{1}^{\mathrm{f}}=\ldots=k_{19}^{\mathrm{f}}\equiv k_{\mathrm{f}}\). To see the implications of model assumption 2, we move to a description in terms of free energies. Denote the free energy of any state \(n\) by \(F_{n}\), and imagine that states \(n\) and \(n-1\) are allowed to mutually equilibrate. Equilibration means that the relative occupancy is described by Boltzmann weights and that there are no net probability currents between the states:

$$\frac{P_{n-1}^{\mathrm{EQ}}}{P_{n}^{\mathrm{EQ}}}=\frac{\exp\left(-\frac{F_{n-1}}{k_{\mathrm{B}}T}\right)}{\exp\left(-\frac{F_{n}}{k_{\mathrm{B}}T}\right)},\quad P_{n-1}^{\mathrm{EQ}}\,k_{n-1}^{\mathrm{f}}=P_{n}^{\mathrm{EQ}}\,k_{n}^{\mathrm{b}}.$$

The above relationships tie rates to free-energy differences through

$$\Delta F_{n}=F_{n}-F_{n-1}=k_{\mathrm{B}}T\,\ln\left(\frac{k_{n}^{\mathrm{b}}}{k_{n-1}^{\mathrm{f}}}\right).$$

Using \(n=-1\) as the free-energy reference (\(F_{-1}=0\,k_{\mathrm{B}}T\)), the assumption that binding is first order implies

$$F_{0}=F_{0}^{\mathrm{ref}}-k_{\mathrm{B}}T\,\ln([\text{Cas9-sgRNA}]).$$

Here \(F_{0}^{\mathrm{ref}}\) is the free energy of the PAM-bound state at the reference concentration (1 nM).
Mechanistic-model assumption 2 now implies that \(\Delta {F}_{1\le n\le 20}\) only depends on whether there is a mismatch at position \(n\) or not, and we can write $$\Delta {F}_{n}=\left\{\begin{array}{ll}{{\epsilon }}_{n}, & {{{{{\rm{match}}}}}}\\ {{\epsilon }}_{n}+\delta {{\epsilon }}_{n} & {{{{{\rm{mismatch}}}}}}\end{array}\right.,\,n=1,\ldots ,20.$$ Here \({{\epsilon }}_{n}\) is the free-energy increase when extending the hybrid from length \(n-1\) to length \(n\) if the \(n\) :th hybrid bp is correctly matched, and \(\delta {{\epsilon }}_{n}\) is the additional energy needed when the bp is incorrectly matched. We can write the backward transition rates as $${k}_{n}^{{{{{{\rm{b}}}}}}}=\left\{\begin{array}{ll}{k}_{{{{{{\rm{on}}}}}}}^{{{{{{\rm{ref}}}}}}}\exp (\frac{{F}_{0}^{{{{{{\rm{ref}}}}}}}}{{k}_{{{{{{\rm{B}}}}}}}T}), & n=0,\\ {k}_{{{{{{\rm{f}}}}}}}\exp (\frac{\Delta {F}_{n}}{{k}_{{{{{{\rm{B}}}}}}}T}), & n=1,\ldots ,20.\end{array}\right.$$ The model is now parameterized in terms of 41 free energies ( \({F}_{0}^{{{{{{\rm{ref}}}}}}}\) , \({{\epsilon }}_{1},\ldots ,{{\epsilon }}_{20}\) , \(\delta {{\epsilon }}_{1},\ldots ,\delta {{\epsilon }}_{20}\) ) and three forward rates ( \({k}_{{{{{{\rm{on}}}}}}}^{{{{{{\rm{ref}}}}}}}\) , \({k}_{{{{{{\rm{f}}}}}}}\) , and \({k}_{{{{{{\rm{cat}}}}}}}\) ). Predicting NucleaSeq cleavage rates To produce predictions for training and validation, we model the experimental setups. To model NucleaSeq data 15 , we use the solution to the Master Equation to calculate the expected cleaved fraction at any complementarity pattern. NucleaSeq is performed by exposing targets to saturating concentrations of Cas9-sgRNA, which we model by setting \({F}_{0}=-1000{k}_{{{{{{\rm{B}}}}}}}T\) and taking \({P}_{-1}(0)=1\) , \({P}_{0\le n\le 20}(0)=0\) as initial condition.
As done in the original experiment, we record the fraction of DNA that remains uncleaved ( \({\sum }_{n=-1}^{20}{P}_{n}(t)\) ) at the time points t = 0 s, 12 s, 60 s, 180 s, 600 s, 1800 s, 6000 s, 18000 s, and 60000 s, and fit out a single effective cleavage rate \({k}_{{{{{{\rm{clv}}}}}}}^{{{{{{\rm{eff}}}}}}}\) . There is no a priori reason for the uncleaved fraction to follow an exponential decay, but as long as we follow the experimental data-analysis protocol we can use the effective cleavage rates to train and validate our model. Predicting CHAMP association constants We model the CHAMP experiments 15 , 31 by calculating the bound fraction ( \({\sum }_{n=0}^{20}{P}_{n}(t)\) ) of dCas9-sgRNA after 10 min at concentrations 0.1 nM, 0.3 nM, 1 nM, 3 nM, 10 nM, 30 nM, 100 nM and 300 nM, starting with the probabilities \({P}_{-1}(0)=1\) , \({P}_{0\le n\le 20}(0)=0\) . We use the equilibrium binding fraction $${P}_{{{{{{\rm{bnd}}}}}}}^{{{{{{\rm{EQ}}}}}}}=\frac{[{{{{{\rm{Cas}}}}}}9-{{{{{\rm{sgRNA}}}}}}]}{[{{{{{\rm{Cas}}}}}}9-{{{{{\rm{sgRNA}}}}}}]+1/{K}_{{{{{{\rm{A}}}}}}}^{{{{{{\rm{eff}}}}}}}}$$ to fit out an effective association constant \({K}_{A}^{{{{{{\rm{eff}}}}}}}\) . Again, there is no a priori reason to believe that this non-equilibrium system will equilibrate within 10 min, but as long as we follow the experimental data-analysis protocol we can use \({K}_{{{{{{\rm{A}}}}}}}^{{{{{{\rm{eff}}}}}}}\) for training and validation. Predicting HiTS-FLIP association rates To predict measured association rates in the HiTS-FLIP experiment 11 , we assume the recorded fluorescence signal to be proportional to our calculated bound fraction of dCas9-sgRNA, when starting with the probabilities \({P}_{-1}(0)=1\) , \({P}_{0\le {{{{{\rm{n}}}}}}\le 20}(0)=0\) . Following the experiments, we use linear regression to extract an effective association rate by fitting a straight line to the bound fraction at time points 500 s, 1000 s and 1500 s.
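The extraction of an effective cleavage rate from the model's uncleaved fractions can be sketched as follows. This is a minimal stand-in for the experimental data-analysis protocol (a least-squares fit of an exponential decay on the sampled time points), not the original fitting code, and the exact fitting procedure used in the experiment may differ:

```python
import numpy as np

# NucleaSeq sampling times from the text; t = 0 s contributes log(1) = 0
# and is therefore omitted from the fit below.
TIMES = np.array([12., 60., 180., 600., 1800., 6000., 18000., 60000.])

def effective_cleavage_rate(uncleaved):
    """Fit uncleaved(t) ~ exp(-k_eff * t) by least squares on the log of the
    surviving fraction, with the line constrained through log(1) = 0 at t = 0."""
    y = np.log(np.clip(uncleaved, 1e-300, None))
    return -np.sum(TIMES * y) / np.sum(TIMES ** 2)
```

For a perfectly exponential decay this recovers the decay rate exactly; for the model's generally non-exponential survival curves it returns the same kind of effective rate that the experimental protocol reports, which is all the training requires.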
Predicting HiTS-FLIP dissociation rates To predict measured dissociation rates in the HiTS-FLIP experiment 11 , we again compare the fluorescence signal to our calculated bound fraction of dCas9, starting with the probabilities \({P}_{-1}(0)=1\) , \({P}_{0\le n\le 20}(0)=0\) . We let the protein associate at saturating concentrations for 12 h, and record the resulting occupational probabilities. We then use these probabilities as new initial probabilities, while also letting \({k}_{{{{{{\rm{on}}}}}}}=0\) ( \([{{{{{\rm{Cas}}}}}}9-{{{{{\rm{sgRNA}}}}}}]=0\) ) in \({{{{{\bf{K}}}}}}\) , before further evolving the system. This allows us to model complex dissociation in the presence of a high concentration of competitor on-targets in solution. Following the experiments, we fit an exponential decay to our predictions at time points 500 s, 1000 s, and 1500 s. Averaging over mismatch types Our model does not account for mismatch types, and for training we need to average over all experimentally measured mismatch sequences \(s\) consistent with a mismatch pattern \(p\) . We expect rates to be proportional to exponentiated transition-state free energies, and association constants to be controlled by exponentiated binding free energies. We therefore choose to perform our mismatch-type averages over the logarithm of rates and association constants, bringing these averages close to averages of energies. For measured quantities \(m={k}_{{{{{{\rm{clv}}}}}}}^{{{{{{\rm{eff}}}}}}}\) or \({K}_{{{{{{\rm{A}}}}}}}^{{{{{{\rm{ref}}}}}}}\) , we choose a weighted mismatch-type average $${\langle {\log }_{10}{m}^{\ast }\rangle }_{p}=\mathop{\sum\limits}_{s\in \left(\begin{array}{c}{{{{{\rm{sequences}}}}}}\,{{{{{\rm{with}}}}}}\\ {{{{{\rm{mm}}}}}}\,{{{{{\rm{pattern}}}}}}\,p\end{array}\right)}{W}_{s}{\log }_{10}{m}_{s}^{\ast }.$$ Here \({m}_{s}^{\ast }\) is the measured value for target sequence \(s\) .
We take the weights to be given by $${W}_{s}=\frac{1/\delta {({\log }_{10}{m}_{s}^{\ast })}^{2}}{{\sum }_{\sigma \in \left(\begin{array}{c}{{{{{\rm{sequences}}}}}}\,{{{{{\rm{with}}}}}}\\ {{{{{\rm{mm}}}}}}\,{{{{{\rm{pattern}}}}}}\,p\end{array}\right)}1/\delta {({\log }_{10}{m}_{\sigma }^{\ast })}^{2}}.$$ Here \(\delta ({\log }_{10}{m}_{s}^{\ast })\) is the experimental error for the logarithm of the measurement at a particular sequence \(s\) . This choice of weights minimizes the error-normalized square deviation on the sequence-resolved data, if we have complete freedom to set the average for each mismatch pattern. Our model is more constrained than this, but with this weighting our model could, at least in principle, give the best possible approximation of the sequence-resolved data. The squared error in the mismatch-type average can be calculated as $$\delta {\langle {\log }_{10}{m}^{\ast }\rangle }_{p}^{2}=\mathop{\sum\limits}_{s\in \left(\begin{array}{c}{{{{{\rm{sequences}}}}}}\,{{{{{\rm{with}}}}}}\\ {{{{{\rm{mm}}}}}}\,{{{{{\rm{pattern}}}}}}\,p\end{array}\right)}{W}_{s}^{2}\,\delta {({\log }_{10}{m}_{s}^{\ast })}^{2}.$$ Cost function We seek to simultaneously optimize our predictions of both effective cleavage rates from NucleaSeq ( \({k}_{{{{{{\rm{clv}}}}}}}^{{{{{{\rm{eff}}}}}}}\) ) and effective association constants from CHAMP ( \({K}_{{{{{{\rm{A}}}}}}}^{{{{{{\rm{ref}}}}}}}\) ). We combine the cost from each experiment $${\chi }^{2}={\chi }_{{k}_{{{{{{\rm{clv}}}}}}}^{{{{{{\rm{eff}}}}}}}}^{2}+{\chi }_{{K}_{{{{{{\rm{A}}}}}}}^{{{{{{\rm{ref}}}}}}}}^{2}$$ by summing log deviations $${\chi }_{m}^{2}=\mathop{\sum\limits}_{p\in \left(\begin{array}{c}{{{{{\rm{all}}}}}}\,{{{{{\rm{mm}}}}}}\,{{{{{\rm{patterns}}}}}}\\ {{{{{\rm{used}}}}}}\,{{{{{\rm{for}}}}}}\,{{{{{\rm{training}}}}}}\end{array}\right)}{w}_{p}^{m}{({\log }_{10}({m}_{p})-{\langle {\log }_{10}{m}^{\ast }\rangle }_{p})}^{2}.$$ In the above, \({m}_{p}\) represents the model prediction for the average measured quantity at mismatch pattern \(p\) .
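As a concrete illustration, the inverse-variance weighting and a single pattern's cost contribution can be written out as follows. This is a minimal sketch, not the purpose-made optimization code; the propagated error is taken as the standard inverse-variance-weighting result:

```python
import numpy as np

def mm_type_average(log10_m, dlog10_m):
    """Inverse-variance weighted average of log10-measurements over all
    sequences sharing one mismatch pattern, plus its propagated squared error.
    log10_m, dlog10_m: per-sequence values and their experimental errors."""
    log10_m = np.asarray(log10_m)
    err = np.asarray(dlog10_m)
    w = 1.0 / err ** 2
    W = w / w.sum()                              # normalized weights W_s
    avg = np.sum(W * log10_m)
    var = np.sum(W ** 2 * err ** 2)              # = 1 / sum(1/err^2) here
    return avg, var

def chi2_contribution(log10_pred, avg, var, group_size):
    """One pattern's term in the cost function: the error-weighted squared
    log deviation, divided by the group size (1, 20 or 190) so that the
    on-target, single-mismatch, and double-mismatch groups weigh equally."""
    return (log10_pred - avg) ** 2 / (var * group_size)
```

With equal experimental errors the weighted average reduces to the plain mean, and the group-size division reproduces the 1, 1/20, 1/190 weighting described below.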
The weights \({w}_{p}^{m}\) are chosen so that the error-weighted contributions from the on-target, the \(20\) singly mismatched off-targets, and the \(20\cdot 19/2=190\) doubly mismatched off-targets are weighted equally as groups $${w}_{p}^{m}=\frac{1}{\delta {\langle {\log }_{10}{m}^{\ast }\rangle }_{p}^{2}}\cdot \left\{\begin{array}{cc}1, & p={{{{{\rm{on}}}}}}\,{{{{{\rm{target}}}}}}\\ 1/20, & p\in {{{{{\rm{single}}}}}}\,{{{{{\rm{mm}}}}}}\\ 1/190, & p\in {{{{{\rm{double}}}}}}\,{{{{{\rm{mm}}}}}}.\end{array}\right.$$ Simulated annealing The Simulated Annealing algorithm 59 is commonly used for high-dimensional optimization problems. We optimize with respect to the model parameters \({F}_{0}^{{{{{{\rm{ref}}}}}}}\) , \({{\epsilon }}_{1},\ldots ,{{\epsilon }}_{20}\) , \(\delta {{\epsilon }}_{1},\ldots ,\delta {{\epsilon }}_{20}\) , \({\log }_{10}({k}_{{{{{{\rm{on}}}}}}}^{{{{{{\rm{ref}}}}}}}/{{{{{\rm{s}}}}}})\) , \({\log }_{10}({k}_{{{{{{\rm{f}}}}}}}/{{{{{\rm{s}}}}}})\) , and \({\log }_{10}({k}_{{{{{{\rm{cat}}}}}}}/{{{{{\rm{s}}}}}})\) . Trial moves are generated by adding uniform noise of magnitude \(\alpha\) to the present value of each model parameter. The process is initiated with a noise strength \(\alpha =0.1.\) In the initiation cycle, the temperature is adjusted until we have an acceptance fraction of 40–60% over 1000 trial moves, based on the Metropolis condition. After this initial cycle, the temperatures follow an exponential cooling scheme with a 1% cooling rate ( \({T}_{k+1}=0.99{T}_{k}\) ). At every temperature, we adjust the noise strength \(\alpha\) until an acceptance fraction of 40–60% is reached over 1000 trial moves. Once the desired acceptance fraction is reached, an additional 1000 trial moves are performed to allow the system to relax before the next cooling step.
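The cooling loop described above can be condensed into a short skeleton. This is a hedged illustration rather than the actual optimization code: the adaptive tuning of \(\alpha\) and of the initial temperature to a 40–60% acceptance fraction is replaced by fixed values for brevity, and `cost` stands in for the \(\chi^2\) defined above:

```python
import numpy as np

def anneal(cost, x0, T0=1.0, alpha=0.1, moves=1000, seed=0):
    """Bare-bones simulated annealing: uniform trial noise of magnitude alpha,
    Metropolis acceptance, exponential cooling T -> 0.99 T, terminating once
    the temperature reaches one percent of its initial value."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    c, T = cost(x), T0
    while T > 0.01 * T0:
        for _ in range(moves):
            trial = x + rng.uniform(-alpha, alpha, size=x.size)
            c_trial = cost(trial)
            # Metropolis condition: always accept downhill, sometimes uphill
            if c_trial <= c or rng.random() < np.exp((c - c_trial) / T):
                x, c = trial, c_trial
        T *= 0.99  # 1% cooling rate
    return x, c
```

On a simple quadratic cost this settles close to the minimum; the full procedure additionally applies the relative-change stop condition given in the next paragraph.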
Once the temperature has dropped to one percent of its initial value, we apply the stop condition $$|{\bar{\chi }}_{k}^{2}-{\bar{\chi }}_{k-1}^{2}|\le {10}^{-5}{\bar{\chi }}_{k-1}^{2}.$$ In the above, \({\bar{\chi }}_{k}^{2}\) denotes our cost function averaged over the last 1000 trial moves performed at temperature \({T}_{k}\) . The results of this optimization are shown in Fig. 4 . Calculating coarse-grained transition rates First, we find the intermediate state on every possible target. As the central local minimum in free energy (Fig. 4a ) can be slightly displaced by mismatches on off-targets, we seek the free-energy minimum \({n}_{{{{{{\rm{I}}}}}}}\) between R-loop states 7 and 13 for every target. To calculate the effective rates of the coarse-grained model in Fig. 5a , we consider the first passage between metastable states. Take for example the passage from the PAM-bound state ( \(n=0\) ) to the intermediate state ( \(n={n}_{{{{{{\rm{I}}}}}}}\) ) on a specific target. To calculate the associated first-passage time, we truncate the full system to only include states \(n=0,\ldots ,{n}_{{{{{{\rm{I}}}}}}}-1\) . We use the rate matrix \({{{{{{\bf{K}}}}}}}_{{{{{{\rm{PI}}}}}}}\) with elements $${({{{{{{\bf{K}}}}}}}_{{{{{{\rm{PI}}}}}}})}_{nm}={{{{{{\bf{K}}}}}}}_{nm},\,0\le n,m\le {n}_{{{{{{\rm{I}}}}}}}-1$$ and \({k}_{0}^{{{{{{\rm{b}}}}}}}=0\) .
With the initial state \({{{{{{\bf{P}}}}}}}_{{{{{{\rm{PI}}}}}}}(0)={(1,0,\ldots ,0)}^{T}\) we solve the Master Equation, and calculate the first-passage time distribution as $${\Psi }_{{{{{{\rm{PI}}}}}}}(t)=-(1,\ldots ,1)\cdot {\partial }_{t}{{{{{{\bf{P}}}}}}}_{{{{{{\rm{PI}}}}}}}(t).$$ The effective transition rate \({k}_{{{{{{\rm{PI}}}}}}}\) is the inverse of the average first-passage time \({\tau }_{{{{{{\rm{PI}}}}}}}\) , which can be calculated as $${\tau }_{{{{{{\rm{PI}}}}}}}={\int }_{0}^{\infty }{{{{{\rm{d}}}}}}t\,t{\Psi }_{{{{{{\rm{PI}}}}}}}(t)=-(1,\ldots ,1)\cdot {{{{{{\bf{K}}}}}}}_{{{{{{\rm{PI}}}}}}}^{-1}\cdot {{{{{{\bf{P}}}}}}}_{{{{{{\rm{PI}}}}}}}(0).$$ The same procedure is used to calculate all other rates of directly transitioning between metastable states, repeated for every target sequence. Constructing a binary off-target predictor We rank all canonical PAM sites in the human genome according to their relative cleavage rate in the low concentration limit. In this limit, the cleavage rate is given by the PAM binding rate times the probability to cleave once the PAM site is bound. As the PAM binding rate is not expected to depend on the sgRNA sequence \(s\) , we can rank our off-targets based on the cleavage probability once bound 30 , $${P}_{{{{{{\rm{PAM}}}}}}\to {{{{{\rm{clv}}}}}}}(s)=\frac{{k}_{{{{{{\rm{cat}}}}}}}\,{e}^{\frac{{F}_{-1}(p(s))}{{k}_{{{{{{\rm{B}}}}}}}T}}}{{k}_{{{{{{\rm{cat}}}}}}}{\sum }_{n=0}^{19}{e}^{\frac{{F}_{n}(p(s))}{{k}_{{{{{{\rm{B}}}}}}}T}}+{k}_{{{{{{\rm{f}}}}}}}{e}^{\frac{{F}_{20}(p(s))}{{k}_{{{{{{\rm{B}}}}}}}T}}}.$$ Here \(p(s)\) is the mismatch pattern of sequence \(s\) . Statistics & Reproducibility Only experimental data giving positive, physical values for mismatch-averaged rates and association constants were included in the correlation analysis. See Supplementary Data 1 . Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article.
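As a concrete check of the first-passage expressions above, the mean first-passage time reduces to a single linear solve. Note the sign convention: with absorbing exits the truncated rate matrix has strictly negative eigenvalues, so \(-{\bf 1}^{T}{\bf K}^{-1}{\bf P}(0)\) is positive. The two-state chain below is a hypothetical example, not a fitted target:

```python
import numpy as np

def mean_first_passage_time(K_trunc, P0):
    """tau = -1^T K^{-1} P(0) for a truncated rate matrix whose probability
    leaks into absorbing states; equivalent to integrating the survival
    probability sum_n P_n(t) over time."""
    ones = np.ones(K_trunc.shape[0])
    return -ones @ np.linalg.solve(K_trunc, np.asarray(P0, dtype=float))

# Hypothetical two-state chain 0 -> 1 -> (absorbed), both steps at rate k:
k = 2.0
K = np.array([[-k, 0.0],
              [k, -k]])
tau = mean_first_passage_time(K, [1.0, 0.0])  # expect 2/k = 1.0
```

The inverse of this time gives the coarse-grained transition rate, here \(k_{\rm PI}=1/\tau\).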
Data availability The data supporting the findings of this study are available from the corresponding authors upon reasonable request. Mismatch-averaged experimental data used for training and validation (Figs. 2 and 3 ), estimated microscopic parameters (Fig. 4 ), and genome-wide off-target classification evaluation (Fig. 7b–e ), are all provided as Supplementary Data 1 . Code availability The code enabling quantitative off-target activity prediction for any guide-target pair is available on our GitLab page ( ). There you will also find a small dashboard application, allowing time-resolved activity predictions given a particular sequence and enzyme concentration. A clone of the repository at publication is also permanently available at . The purpose-made optimization code will be made available upon request.
Researchers from TU Delft have developed a physics-based model that establishes a quantitative framework for how gene editing with CRISPR-Cas9 works, and allows them to predict where, with what probability, and why targeting errors (off-targets) occur. This research, which has been published in Nature Communications, gives us a first detailed physical understanding of the mechanism behind the most important gene-editing platform of today. The discovery of the CRISPR-Cas9 protein has greatly simplified gene editing and raised hopes of finding a cure for many hereditary diseases. However, routine and safe use of this technique in human therapeutics requires extreme precision and predictability of any off-target effects. A research team led by Martin Depken at TU Delft's department of Bionanoscience has now demonstrated a new physics-based model that greatly improves on existing models: not only does the model predict where the DNA is likely to be cut, but also with what probability this will happen. Physics-based approach to understand Cas9 gene-editing A great limitation of current bioinformatics models for gene editing lies in the fact that they are binary in nature: they classify targets on the genome as either likely or unlikely to be cut. These models focus only on very high-probability targeting errors (off-targets), and will miss the many lower-probability off-targets that together could amount to the majority of editing errors in the genome. Now, the new physical model which the researchers created takes into account both high-probability and low-probability off-targets; it can be used to physically characterize any Cas9 variant and predict the probability that any site will be cleaved. Martin Depken explains his lab's new physics-based approach: "In gene editing, you want to maximize the probability of cutting at the intended site, while minimizing the amount of cutting in the rest of the genome.
It is therefore crucial to understand cutting in terms of probabilities. Drawing from experiments in single-molecule physics and structural data, we created a model that can do this. We changed the description of gene editing from a binary choice to a complete probabilistic picture." Improving accuracy in gene editing By giving physical insights into why off-targets occur, this research also marks an important next step towards a more rational way of engineering new gene-editing platforms, and towards characterizing, evaluating, and comparing existing ones. With their probabilistic description of gene editing, the researchers also hope to help improve the risk assessment in genome editing by accounting for all possible off-targets. "Together with our experimental collaborators at the University of Texas at Austin, we've now benchmarked Cas9 with our model," says Depken. "Our next goal is to do the same with other high-precision gene-editing platforms to understand how and why they differ. With this we hope to reveal the path to even higher precision in gene editing."
10.1038/s41467-022-28994-2
Physics
New method for the measurement of nano-structured light fields
Eileen Otte et al, Polarization nano-tomography of tightly focused light landscapes by self-assembled monolayers, Nature Communications (2019). DOI: 10.1038/s41467-019-12127-3 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-12127-3
https://phys.org/news/2019-09-method-nano-structured-fields.html
Abstract Recently, four-dimensional (4D) functional nano-materials have attracted considerable attention due to their impact in cutting-edge fields such as nano-(opto)electronics, -biotechnology or -biomedicine. Prominent optical functionalizations, representing the fourth dimension, require precisely tailored light fields for their optimal implementation. These fields need to be likewise 4D, i.e., nano-structured in three-dimensional (3D) space, while polarization embeds additional longitudinal components. Though a couple of approaches to realize 4D fields have been suggested, their breakthrough is impeded by a lack of appropriate analysis techniques. Combining molecular self-assembly, i.e., nano-chemistry, and nano-optics, we propose a polarization nano-tomography of respective fields using the functional material itself as a sensor. Our method allows a single-shot identification of non-paraxial light fields at nano-scale resolution without any data post-processing. We prove its functionality numerically and experimentally, elucidating its amplitude, phase and 3D polarization sensitivity. We analyze non-paraxial field properties, demonstrating our method's capability and potential for next-generation 4D materials. Introduction Within the last decades, functionalized nano-systems have excelled, contributing to future four-dimensional (4D) nano-materials and their applications in nano-(opto)electronics, -biotechnology, or -biomedicine 1 , 2 , 3 , 4 , 5 , 6 . For instance, peptides have been employed as functional components in nano-systems for disease treatments, and stimuli-responsive nano-carriers have been applied for drug delivery 4 , 7 . Nowadays, one can summarize the class of 4D materials generally as three-dimensionally (3D) nano-structured materials embedding an addressable functionality as a fourth dimension. Most man-made nano-technology is based on top-down approaches governed by the concept of continuous miniaturization to the nano-scale.
In contrast, nature implements a bottom-up approach as the prime strategy to construct dynamic, adaptive and learning systems at the nano-scale. This strategy includes self-assembly as an attractive route to the customization of 3D nano-structures that at the same time exhibit an electronic, magnetic, or optical functionality as a fourth dimension 5 , 6 . Leading functionalities are addressed optically as, e.g., light-based material changes in azobenzenes on surfaces, which requires a light field of appropriate characteristics for its optimal implementation. Therefore, light needs to be precisely tailored in all three spatial dimensions as well as in all its degrees of freedom, namely amplitude, phase, and 3D polarization. Thus, for optimally addressing a 4D nano-material, the respective light fields need to be likewise nano-structured in 4D, only achievable in the non-paraxial regime. When paraxial light is converted to non-paraxial light by, e.g., tight focusing (numerical aperture (NA) ≥ 0.7), initial radial electric field components are tilted and transformed into non-negligible longitudinal field contributions 8 , 9 as a fourth dimension. Hence, focal 4D light fields are shaped 10 , 11 , 12 , 13 , 14 , 15 , 16 , which include complex topological structures such as optical Möbius strips or ribbons 17 , 18 , 19 , 20 . Note that 3D polarization states and topologies such as Möbius strips may also be realized by, e.g., off-axis interference of structured beams 21 . However, the desired nano-scale structure is achieved only in a tightly focused field. The 3D polarization nature as well as the associated sub-diffractive, thus nano-scale, complexity of non-paraxial 4D fields 22 , 23 represent the required tool for the effective implementation of novel 4D functional nano-materials. However, the realization and application of these light fields is currently obstructed by the lack of appropriate analysis techniques.
The complex 4D nature and nano-scale structure that make non-paraxial fields so useful represent at the same time a major difficulty, impeding the application of metrology tools that work precisely in the paraxial regime. Hence, there is an urgent demand for fast, thus single-shot, nano-tomographic techniques allowing for the immediate identification of focal fields including their amplitude, phase and 3D polarization. “Single-shot” refers to the fact that only a single measurement step, e.g., one camera image, is required for the field identification. Until now, a few limited approaches have been proposed 24 , 25 , 26 , but they fail to satisfy this demand since they are based on slow scanning techniques with multiple shots, in combination with the requirement of precise knowledge of the scanning probe characteristics and extensive reconstruction algorithms. Here, we show a single-shot nano-tomographic approach that does not require any post-measurement data processing for the identification and investigation of 4D light fields. For this purpose, we combine nano-chemistry and nano-optics to analyze light fields by the functional 4D nano-material itself as sensor. We apply nature-inspired bottom-up assembly of fluorescent sulforhodamine B molecules for the creation of a functional molecular nano-system, or, more precisely, self-organized functionalized nano-surfaces 27 , 28 . We develop these self-assembled monolayers (SAMs) 29 , 30 , 31 by exploiting the process of π – π -stacking, resulting in a fluorescent nano-tomographic detector. Crucially, the respective fluorescence, i.e., the material's fourth dimension, is sensitive to the amplitude, phase and 3D polarization of the exciting non-paraxial light field, which we prove numerically as well as experimentally.
Hence, the created tool enables the qualitative single-shot analysis, by a single camera image, of complete transverse planes of a focal 4D field with spatial resolution at the nanometer scale in all three spatial dimensions. Our approach finally enables the demanded experimental study of 4D structured fields, and thereby unlocks their potential arising in combination with 4D functional nano-systems. Results Tailored non-paraxial 4D light fields The appropriate demonstration of the nano-tomographic approach requires representative 4D light fields, fully structured 32 , 33 , 34 in all their degrees of freedom. Hence, these fields need to embed amplitude, phase as well as polarization structuring of well-defined shape which can be tailored on demand. Therefore, we chose to customize non-paraxial light fields by holographic phase and polarization modulation in the paraxial regime. More precisely, we realize higher-order vector fields of pure linear polarization with additional phase vortices and tightly focus these by a high-NA microscope objective (MO). Thereby, complexly shaped 4D fields of three-dimensional polarization \({{\cal{E}}}({x,y,z}) = [{\cal{E}}_{x}({x,y,z}),{\cal{E}}_{y}({x,y,z}),{\cal{E}}_{z}({x,y,z})]^{T}\) are formed with non-negligible longitudinal polarization component \({\cal{E}}_z\) . Note that also other kinds of fully-structured light fields could be applied for the demonstration of our nano-tomographic method's operating principle. In the following, we motivate our choice of light fields. Higher-order vector fields are of point-symmetric shape, with their symmetry point representing an on-axis V-point singularity of undefined polarization 35 , 36 , 37 , 38 .
This singularity or the respective field is characterized by the index σ 12 , defined according to the corresponding phase Φ 12 of the complex Stokes field Σ 12 = S 1 + i S 2 = A 12 exp(iΦ 12 ) (normalized Stokes vector S = [ S 0 , S 1 , S 2 , S 3 ] T ) with \(\sigma _{12} = {\oint}_c {\mathrm{d}} {\Phi} _{12}/2\pi\,(|\sigma_{12}|/2\in \mathbb{N})\) 38 , 39 , 40 . Surrounding linear states of polarization form | σ 12 − 2| flower petals ( σ 12 > 0) or spider web sectors ( σ 12 < 0), as exemplified in Fig. 1c , d. As a crucial consequence, the ratio of azimuthal and radial components of flower- and web-shaped beams, being responsible for the occurrence of focal longitudinal electric field contributions, depends on the order σ 12 of the singular light field and can thus be tailored precisely in amount and shape on demand 10 . As an illustrative example that visualizes this property well, we chose σ 12 = ±8. Here, in the focal plane ( z = 0), transverse and longitudinal focal electric field components feature conspicuous intensity configurations with | σ 12 | = 8 and | σ 12 − 2| petals, respectively (Fig. 2 ). Further, the total focal intensity reveals a dark star ( σ 12 > 0) or bright flower ( σ 12 < 0) shape. Hence, the focal shapes for incident flower and web structures clearly differ, showing sophisticated, recognizable structures in the non-paraxial regime, as demanded for the demonstration of our approach. Fig. 1 Concept of detecting non-paraxial light fields.
a Microscope system for tightly focusing tailored paraxial light fields and detecting fluorescence; b setup for the generation of tailored vectorial light fields with additional phase vortices (paraxial regime; SLM: spatial light modulator, L: lens, M: mirror, A: aperture, QWP 1,2 : quarter wave plate, CCD camera); c vectorial flower and d spider web structure ( σ 12 = ±8), whose polarization distribution is indicated by black lines (red: flow lines), with the corresponding phase Φ 12 of the complex Stokes field Σ 12 Fig. 2 Numerics on tailored non-paraxial light fields \({{{\cal{E}}}}{({x},{y},{0})}\) realized by tightly focusing tailored vectorial fields. Vectorial flower ( a ) or web ( b ) structures ( σ 12 = ±8) with additional phase vortices of charge \(\ell\) are considered as input fields. The total focal intensity \(|{{{\cal{E}}}}|^2\) as well as intensity contributions \(|{\cal{E}}_{x,y,z}|^2\) of transverse ( \({\cal{E}}_{x,y}\) ) and longitudinal ( \({\cal{E}}_z\) ) polarization components in the focal plane ( x , y , 0) are presented (NA = 0.8). The relation of the maximum (max) or mean value of \(|{\cal{E}}_{x,y,z}|^2\) to the maximum or mean of \(|{{{\cal{E}}}}|^2\) , respectively, is given below the images In order to highlight the SAM-based method's sensitivity to amplitude, phase as well as polarization, we additionally include on-demand phase variation. For this purpose, we imprint additional global phase vortices of topological charge \(\ell = \pm \{ 0,1,2,3,4,5\}\) onto the initial paraxial vector field. In Fig. 2 , we present the respective numerical simulations (details on the numerical approach can be found in the Methods) for the focused flower (a) and spider web (b) structure with additional vortices at z = 0. The total intensity \(|{{{\cal{E}}}}|^2\) as well as the contributions per polarization component \(|{\cal{E}}_{x,y,z}|^2\) in the focal plane are shown.
All distributions are normalized to their own maxima, while the ratio of the maximum or mean value of \(|{\cal{E}}_{x,y,z}|^2\) to the maximum or mean value of \(|{{{\cal{E}}}}|^2\) , respectively, is given below the images (max or mean). By the addition of phase vortices, we create a distinct difference in the relative phase of each focal electric field component (see next section) and clearly vary the shape as well as the ratio of contributing components (for details see Methods). These differences are appropriate tools for testing the functionality of our nano-tomographic approach, as they will cause characteristic fluorescence images for these fields (see next section). Different methods can be implemented for the experimental creation of fully-structured non-paraxial light fields 23 . For our system we chose a holographic approach because of the dynamic adaptability of the created modes. We apply a holography-based dynamic modulation system 33 , 41 in combination with a tightly focusing microscope configuration (for details see Methods), as visualized in Fig. 1a, b . By double passing a phase-only spatial light modulator (SLM) in split-screen mode (Fig. 1b ), we tailor the phase (first pass) as well as polarization (second pass) of light in the paraxial regime (wavelength λ = 532 nm). Note that the first half of the SLM is imaged onto the second by a 4 f -configuration. In the next step, the created beam is tightly focused by a high-NA MO (×100, NA = 0.8), so that a non-paraxial light field is formed. For this purpose, the second half of the SLM is imaged by a 2 f 1 –2 f 2 system onto the back aperture of the MO. By adapting the spatial phase or polarization information encoded on the SLM, the realization as well as the dynamic customization of the intended fully-structured 4D light fields is enabled.
SAMs as nano-detectors Up to now, non-paraxial 4D light fields as introduced above could not be analyzed as required for unlocking their potential in combination with 4D nano-materials. Solving this issue, here, we present the operating principle of our fast, single-shot method for the qualitative analysis of non-paraxial fields. For this polarization nano-tomography we apply a 4D material itself as nano-sensor, namely SAMs of fluorescent sulforhodamine B silane (maximal absorption: λ abs = 572 nm; maximal emission: λ fl = 594 nm) produced on a silica glass cover slip (for details see Methods and Supplementary Methods ). Here, the property of fluorescence represents the fourth dimension embedded in the 3D nano-structured monolayer. Typically, a single molecule in a transverse ( x , y )-plane reveals a fluorescence rate R according to 24 $$R(x,{\kern 1pt} y) = a|{\mathbf{d}}\cdot {\mathrm{ }}{{{\cal{E}}}}(x,{\kern 1pt} y)|^2,$$ (1) with the constant a depending on the absorption cross-section and quantum yield of the molecule. Further, d = [ d x , d y , d z ] T represents the unit vector along the absorption dipole moment of the molecule. Considering a monolayer of random dipole orientation per fluorescent molecule, the vector d is replaced by transverse position dependent vector d ( x , y ). Note that d z > 0 due to the molecules’ defined bonding site on the cover slip or, more precisely, silane (see Methods). This random dipole orientation on a micrometer level in combination with positive d z -value, achieved by self-assembly (see Methods), is the key feature of the SAM which enables its application as nano-sensor for non-paraxial fields, as it will turn out in the following. One might expect to observe fluorescence resembling a disturbed image of the intensity \(|{{{\cal{E}}}}|^2\) of the non-paraxial electric field in the chosen transverse plane, since the random dipole vector orientation may annihilate the polarization dependence of the fluorescence image. 
However, this is not the case, as, besides amplitude and polarization, one needs to consider the relative phases φ x , y , z between contributing electric field components \({\cal{E}}_{x,y,z} = {\cal{E}}_{x,y,z}^0\exp ({\mathrm{i}}\varphi _{x,y,z})\) . To emphasize this, we calculate the fluorescence rate assuming all dipoles to be oriented according to d ( x , y ) = c [1, 1, 1] T ( ∀ ( x , y ), normalization constant c ), resulting in $$R(x,{\kern 1pt} y) \propto |{\cal{E}}_x^\prime + {\cal{E}}_y^\prime + {\cal{E}}_z^\prime |^2.$$ (2) Here, \({\cal{E}}_{x,y,z}^\prime\) inherit the amplitude and phase of \({\cal{E}}_{x,y,z}\) , respectively, whereby \({\cal{E}}_{x,y,z}^\prime\) can be considered as scalar light fields of the same polarization emitted by the fluorescent dipoles. Hence, the resulting fluorescence will resemble the interference of the three scalar fields \({\cal{E}}_x^\prime\) , \({\cal{E}}_y^\prime\) , and \({\cal{E}}_z^\prime\) . As a representative example, we apply a tightly focused vector beam with σ 12 = 8 and \(\ell = 2\) (Fig. 2a ) as exciting light field at the focal plane ( z = 0) with respective \({\cal{E}}_{x,y,z}^\prime\) showing the intensity \(|{\cal{E}}^\prime_{x,y,z} |^2\) and phase \(\varphi^\prime_{x,y,z}\) distributions as visualized in Fig. 3a (simulation). As stated above, by the additional phase vortex in the paraxial field we tailored a difference in phase distributions for transverse and longitudinal focal components, as observable in \(\varphi _{x,y,z}^\prime\) . The fields \({\cal{E}}_{x,y}^\prime\) both include a central phase vortex of charge \(\ell = 2\) (or two very close singularities of charge \(\ell = 1\) ). Thus, their superposition \({\cal{E}}_t^\prime = {\cal{E}}_x^\prime + {\cal{E}}_y^\prime\) also reveals a similar phase structure \(\varphi _t^\prime\) and a symmetric distribution in intensity, as illustrated in Fig. 3b .
Crucial is the contribution of \({\cal{E}}_z\) , since its phase differs from that of the transverse components, embedding only a single central phase vortex: as a consequence, in the overall superposition \({\cal{E}}^\prime = {\cal{E}}_t^\prime + {\cal{E}}_z^\prime\) , we observe a transverse variation between constructive and destructive interference of \({\cal{E}}_t^\prime\) and \({\cal{E}}_z^\prime\) . This results in the appearance of an asymmetric distribution for \(|{\cal{E}}^\prime |^2\) and, thus, for the fluorescence rate R ( x , y ), as presented in Fig. 3b . This asymmetric shape is distinctive of the tailored focal fields, only appearing due to the interaction of respective transverse and non-negligible longitudinal components, their amplitude and their relative phases. This interaction is similarly observed for random d ( x , y ) whereby d z > 0 is an essential condition provided by our SAMs. Vice versa, the observation of the characteristic asymmetric shape reveals the significant contribution of longitudinal \({\cal{E}}_z\) components, representing the non-paraxial characteristic that is the most difficult to visualize. Here, by measuring the distinctive fluorescence distribution irradiated by the monolayer, this usually invisible property can easily be detected in a single shot without any data post-processing. Fig. 3 SAM fluorescence excited by non-paraxial light. Analysis for d ( x , y ) = c [1, 1, 1] T , ∀ ( x , y ) and fluorescence caused by non-paraxial light fields with significant \({\cal{E}}_z\) contributions shown by the example of a focused flower structure ( σ 12 = 8) with additional phase vortex \(\ell = 2\) . a Normalized intensity \(|{\cal{E}}_{x,y,z}^\prime |^2\:(\in [0,\,1])\) and relative phase \(\varphi _{x,y,z}^\prime\:(\in[0,\,2\pi])\) contributions emitted by the monolayer, excited by \({\cal{E}}_{x,y,z}\) . 
b Normalized intensities \(|{\cal{E}}_t^\prime |^2\) and \(|{{{\cal{E}}}}^\prime |^2\:(\in [0,\,1])\) with relative phases \(\varphi _t^\prime\) and \(\varphi^\prime\:(\in[0,\,2\pi])\) corresponding to \({\cal{E}}_t^\prime = {\cal{E}}_x^\prime + {\cal{E}}_y^\prime\) and \({{{\cal{E}}}}^\prime = {\cal{E}}_x^\prime + {\cal{E}}_y^\prime + {\cal{E}}_z^\prime\) , respectively Full size image Differentiating non-paraxial from paraxial fields Within the nano-tomographic approach, the observed asymmetry in fluorescence including its intensity (amplitude) and phase distribution mirrors the non-paraxial properties of the investigated focal 4D light field. In contrast to non-paraxial fields, paraxial ones embed only negligible longitudinal electric field contributions. As the asymmetric shape is caused by the contribution of longitudinal polarization components, this characteristic shape is not observed in the paraxial regime. Hence, the symmetry of the fluorescence represents a fundamental means to differentiate between paraxial and non-paraxial light fields. This can be easily demonstrated by the numerical analysis of fluorescence in the case of applying paraxial fields for the monolayer excitation. For this purpose, we chose the far field distribution of the introduced tailored vectorial beams ( σ 12 = ±8, \(\ell = \{ 0,1,2,3,4,5\}\) ) to be radiated onto a SAM of random orientation d ( x , y ), d z > 0 . The far field distribution is determined according to \({{{\cal{E}}}}_{{\mathrm{far}}} = {\cal{F}}({\mathbf{E}}_{{\mathrm{in}}}(x,{\kern 1pt} y,{\kern 1pt} 0))\) ( \({\cal{F}}\) : Fourier transform, E in : paraxial input field; see Methods, Eq. ( 3 )). Based on Eq. ( 1 ) with \({{{\cal{E}}}} = {{{\cal{E}}}}_{{\mathrm{far}}}\) , considering \({\cal{E}}_z \approx 0\) , we calculate the respective normalized fluorescence rate distributions, as shown in Fig. 4 . Here, Fig. 4a or b belongs to the exciting light field with σ 12 = 8 (flower) or σ 12 = −8 (spider web). 
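The paraxial evaluation described above can be sketched numerically as follows; grid size, aperture, and random-number seed are illustrative choices, with the input field taken in the form of Eq. (3) of the Methods:

```python
import numpy as np

# Sketch of the paraxial far-field evaluation: vectorial input
# (sigma_12 = 8, ell = 2; cf. Methods, Eq. (3)) propagated to the
# far field via FFT, then the SAM fluorescence with E_z ~ 0 for
# randomly oriented dipoles with d_z > 0. Sizes are illustrative.
n, sigma12, ell = 256, 8, 2
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.arctan2(Y, X)
ap = (np.hypot(X, Y) <= 1.0).astype(float)            # circular aperture
Ex = ap * np.cos(sigma12 / 2 * phi) * np.exp(1j * ell * phi)
Ey = ap * np.sin(sigma12 / 2 * phi) * np.exp(1j * ell * phi)

far = lambda E: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E)))
Ex_f, Ey_f = far(Ex), far(Ey)                         # far field, E_z ~ 0

rng = np.random.default_rng(0)
d = rng.normal(size=(n, n, 3))
d[..., 2] = np.abs(d[..., 2])                         # key feature: d_z > 0
d /= np.linalg.norm(d, axis=-1, keepdims=True)

# Eq. (1) per pixel; the d_z channel sees no field here, so the
# result remains point symmetric up to the random dipole noise.
R = np.abs(d[..., 0] * Ex_f + d[..., 1] * Ey_f) ** 2
```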
Note that, for comparison reasons, we adapted the spatial resolution of the shown fluorescence distributions according to the experimental system applied later on (see Methods). Obviously, there are only slight differences visible between results in Fig. 4a, b , even though the applied light fields look significantly different in polarization (Fig. 1c, d ). Consequently, one cannot distinctly differentiate between a flower and a spider web structure by monolayer fluorescence detection. Moreover, all structures are symmetrically distributed (point symmetric), as there is no influence of longitudinal polarization components. In contrast, we expect unique asymmetric fluorescence distributions dependent on σ 12 and \(\ell\) to be observed in the case of non-paraxial light fields caused by characteristic \({\cal{E}}_z\) contributions. Fig. 4 Numerical evaluation of SAM fluorescence for paraxial light fields. SAM (random d ( x , y ), d z > 0) is exemplarily excited by the far field distribution of ( a ) a vectorial flower ( σ 12 = +8) or ( b ) web structure ( σ 12 = −8) with additional phase vortices of charge \(\ell\) . Total normalized intensity is shown Full size image Experimental implementation Finally, we experimentally prove the functionality of the nano-tomographic SAM-based approach. For this purpose, we investigate the fluorescence distributions excited by tailored non-paraxial 4D light fields to draw conclusions about the fields’ non-paraxial properties. As representative examples, we consider the focal fields ( z = 0) shown in Fig. 2a, b in experiment and support our results theoretically. Experimentally, fully-structured 4D fields are realized as explained above (Fig. 1a, b ), whereby a power of 20 mW is measured at the back-aperture of the focusing MO. We place the monolayer probe (random dipole orientation, d z > 0) in the focal plane of the non-paraxial light field. 
A high-NA MO (×100, NA = 0.9) in combination with a lens (focal distance: 500 mm) is applied to image the fluorescence in the plane of the monolayer (=focal plane) onto a sensitive CCD camera, whereby excited fluorescence is observed in transmission. The exciting light field ( λ = 532 nm) is filtered in front of the CCD, so that pure fluorescence ( λ fl ~594 nm) is detected (details can be found in the Methods). In Fig. 5 , we present the experimentally measured (a and c) as well as theoretically calculated (b and d) fluorescence intensity (normalized) when the tightly focused vectorial flower and web structure with additional phase vortices \(\ell\) are applied as exciting beam. Both experiment and simulation (spatial resolution adapted to experimental system, see Methods) demonstrate a change in the measured transverse field diameter with increasing \(\ell\) , which mirrors the numerically calculated focal fields in Fig. 2 . Further, simulations show characteristic asymmetric transverse distributions, especially visible for \(\ell \ge 2\) , which are clearly affirmed by experimental results (deviations are explained in the Discussion). Crucially, these detected fluorescence distributions prove the existence and significant contribution of longitudinal polarization components of the non-paraxial field. We confirm this observation by detailed analysis of fluorescence intensity in dependence on angular position (for details see Methods), presented in Figs. 6 and 7 . Here, we averaged theoretically calculated (blue edged image) and measured (red edged image) intensity values in angular segments of a ring shaped subspace, marked white. Respective graphs show these mean values as a function of angular position α with respective errors (see Methods), additionally proving the agreement of experiment and theory as well as the observed asymmetry. 
Beyond that, in contrast to paraxial measurements, the differentiation between flower and web configurations is facilitated, as is particularly visible in the graphical analysis (see, e.g., results for \(\ell = 2\) , Figs. 6 and 7c ). Hence, we distinctly evince the dependence of our nano-tomographic approach on the non-paraxial polarization, amplitude and phase configuration. Consequently, our fast single-shot method based on a SAM, not requiring any data post-processing, facilitates the direct qualitative visualization of non-paraxial characteristics, enabling the identification and investigation of focal field properties. Fig. 5 Detecting non-paraxial fields by fluorescent SAM. The normalized fluorescence intensity of tightly focused vectorial flower or web ( σ 12 = ±8) with additional phase vortices of charge \(\ell\) is analyzed experimentally ( a and c , respectively) as well as numerically ( b and d , respectively), proving the detection of non-paraxial properties. Shown scale corresponds to focal size Full size image Fig. 6 Graphical analysis of fluorescence for focused flower configurations. a – f σ 12 = 8, \(\ell = \{ 0,1, \ldots ,5\}\) . Studied intensity images are shown to the left of each graph (experiment: red edged, simulation: blue edged). In graphs, the mean intensity I fl in an angular segment of a ring shaped subspace (white marks in intensity images) is shown in dependence of angular position α (red: experiment, blue: simulation). Intensity errors are given by standard deviation, experimental angular errors are calculated on the basis of the point spread function of the imaging system Full size image Fig. 7 Graphical analysis of fluorescence for focused web configurations. a – f σ 12 = −8, \(\ell = \{ 0,1, \ldots ,5\}\) . Studied intensity images are shown to the left of each graph (experiment: red edged, simulation: blue edged). 
In graphs, the mean intensity I fl in an angular segment of a ring shaped subspace (white marks in intensity images) is shown in dependence of angular position α (red: experiment, blue: simulation). Intensity errors are given by standard deviation, experimental angular errors are calculated on the basis of the point spread function of the imaging system Full size image Discussion Nowadays, a broad range of different applications is awaiting the inclusion of 4D structured light fields. In particular, the optimal implementation of 4D nano-materials with optically addressable functionality demands an adequate 4D counterpart, namely, precisely tailored 4D light fields. Due to the complexity and nano-scale of 4D non-paraxial fields including its 3D polarization nature, there is a lack of appropriate analysis methods required for the in-depth evaluation and application of these focal fields. Fast, single-shot nano-tomographic techniques are demanded enabling the immediate analysis of focal field properties, whereby this demand is not satisfied by currently proposed methods. These are typically based on slow scanning procedures in combination with the essential precise knowledge of scanning probe characteristics and complex post-measurement algorithms. Here, we presented and theoretically as well as experimentally verified a single-shot nano-tomographic approach without any data post-processing, fully reacting to the current demand and finally enabling the effective implementation of next generation 4D nano-materials. Considering an inverse approach, we apply the concept of molecular self-assembly to form functionalized 4D nano-surfaces of randomly oriented, polarization sensitive rhodamine B (sulforhodamine B silane), including the key feature of d z > 0, as spatially resolved nano-detectors for non-paraxial 4D light fields. 
When irradiated by 3D polarization structures, we observe spatial fluorescence distributions strongly related to the exciting non-paraxial electric field including its amplitude, phase as well as polarization. Due to the contribution of longitudinal field components \({\cal{E}}_z\) , non-paraxial fields result in characteristic asymmetric fluorescence structures, as we demonstrated in theory and verified experimentally. Thus, the demanded direct and fast identification and investigation of non-paraxial fields is presented. Deviations of experimental from theoretical results originate from, for example, non-exact perpendicular orientation of the probe in relation to the beam’s optical axis, or, in particular, the limited resolution of the detecting imaging system in combination with low fluorescence power (compared to background noise) and the self-excitation of fluorescent molecules. These reasons are also reflected by error bars in graphs of Figs. 6 and 7 . While intensity errors represent the standard deviation of calculated mean values (simulation and experiment), experimental angular positions reveal relatively large error bars determined on the basis of the imaging system’s point spread function (PSF) for a self-luminous point (=SAM molecule) and self-excitation of neighboring molecules (for details see Methods). The respective transverse position of each molecule has an error of approximately ±176 nm (~±0.33 λ ), impeding the resolution of, e.g., the individual intensity spots visible in simulations for \(\ell = 0\) or \(\ell = 1\) (Figs. 6a, b and 7a, b ). Note that the system’s resolution can be increased by decreasing the fluorescence wavelength by choosing another molecule and/or adapting the imaging MO and lens (see Methods), whereby the limited resolution of the sensitive fluorescence CCD camera needs to be considered as well. The basic spatial resolution provided by the monolayer is at the nanometer scale, dependent on the molecule size. 
The same applies for the resolution in z -direction: The nanometer thickness of monolayers only allows a specific plane of the non-paraxial light field to excite molecules. Subsequently, the detection of excited fluorescence follows the typical rules for the longitudinal resolution limit of the microscopic system. Further, fluorescence and, thus, experimental results can be enhanced and self-excitation avoided by applying another fluorescent molecule, which absorbs maximally at the system’s wavelength of 532 nm and shows low self-excitation, such as quinacridone 42 . Beyond direct and fast nano-tomographic identification of non-paraxial fields, our approach includes the potential for the full reconstruction of the focal electric field. For this advancement, molecular layers need to be of defined orientation, namely, purely oriented in x -, y -, and (on average) diagonal direction, i.e., d = [1, 0, 0] T , [0, 1, 0] T and \([1,1,1]^T{\mathrm{/}}\sqrt 3\) , which could be realized by programmable self-assembly of quinacridone molecules. These sophisticated monolayers will facilitate the spatially resolved single-shot analysis of each electric field component individually by implementing each quinacridone SAM separately: as the field irradiated by molecules is \(\propto {\mathbf{d}} \cdot {{{\cal{E}}}}\) , transversely oriented SAMs could enable the detection of relative phase and intensity of \({\cal{E}}_{x,y}\) . Phases are quantifiable by off-axis interference of fluorescence and a reference beam; of course, in this case the low coherence of fluorescent light, due to the life span of excited molecular states, needs to be considered. Subsequently, the diagonally oriented SAM could be used to identify \({\cal{E}}_z\) . Note that diagonal orientation is chosen instead of the more intuitive z -orientation as purely z -polarized fluorescence would experience significant transformation in the imaging system impeding the detection of \({\cal{E}}_z\) . 
Beyond that, full 4D analysis can be implemented by scanning the three-dimensional volume \({{{\cal{E}}}}(x,y,z)\) in z -direction by the monolayers. Hence, different z -slices of the non-paraxial field will be investigated, whereby 4D tomography at nano-scale resolution is facilitated since monolayers of nanometer thickness are applied. It is worth mentioning that, naturally, our approach can not only be used for the investigation of non-paraxial light fields but finally also paves the way for the pending advancement of a broad range of applications awaiting the inclusion of 4D structured fields. For instance, in an inversion of our tomographic approach, if 4D light fields are analyzed and, thus, verified or even fully reconstructed in their non-paraxial characteristics, we can apply focal fields for the evaluation of unknown properties of monolayers or of 4D functional nano-systems as assemblies of phase and polarization sensitive nano-particles. Hence, 4D light fields analyzed by our approach represent a useful nano-technological tool for the effective study and implementation of 4D nano-materials. Methods Holography-based customization of light SLMs are well-established, dynamic tools for the on-demand formation of structured light. Therefore, for the realization of tailored vectorial flower or spider web structures including additional phase vortices we combine two holographic modulation techniques by applying a reflective phase-only SLM (Holoeye Pluto, parallel aligned liquid crystal HD display) in split-screen-mode 33 , 41 , as shown in Fig. 1b . The initial light field (collimated and expanded; wavelength λ = 532 nm; continuous wave frequency-doubled Nd:YAG laser, company: Coherent) is of horizontal polarization, as the SLM can only modulate horizontally polarized light in phase. In a first step, we tailor the phase of the light field by encoding the desired phase distribution on one half of the SLM, which is passed first. 
We combine the desired phase structure with an additional blazed grating, so that the modulated field is created in the first diffraction order of the hologram 43 , which can be filtered in the far field. By this, the quality of modulation is enhanced and polarization purified. To add polarization modulation, the phase structured light field is guided to the SLM a second time, while the first half of the SLM is imaged onto the second half by a 4 f -system consisting of two lenses (L). Note that in Fourier space, i.e., in the focal plane of the first lens, we filter the first diffraction order of the first SLM half by an aperture (A). The polarization modulation is based on a combination of the polarization selective SLM and two quarter wave plates (QWP 1,2 ; multiorder wave plates) 33 , 36 . First, the orientation of QWP 1 defines the ratio of horizontal and vertical polarization components reaching the SLM. Next, the SLM introduces a chosen, spatially varying phase shift between horizontal and vertical components, as horizontal light can be modulated, whereas vertical components are reflected unaffectedly. Last, QWP 2 recombines horizontal and vertical components, dependent on its orientation. In the transverse plane of the modulated light field all states of polarization located on a ring on the Poincaré unit sphere, spanned by normalized Stokes parameters, can be realized. The radius and position of the ring is affected by the QWPs’ orientation. Consider that the modulation of polarization by this method results in an additional phase variation 36 , 37 , which is corrected by according phase modulation on the first half of the SLM. For the creation of vector beams, consisting of only linear states of polarization, both wave plates are set to −45° with respect to horizontal input polarization. 
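A minimal Jones-calculus sketch (our own matrix and sign conventions, valid up to global phases) illustrates why this quarter-wave-plate setting yields purely linear polarization states:

```python
import numpy as np

# Jones-calculus sketch of the QWP1 -> SLM -> QWP2 sequence described
# above, with both quarter-wave plates at -45 deg and horizontal input.
# Matrices are our own illustrative conventions (up to global phases).
def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def qwp(t):
    """Quarter-wave plate with fast axis at angle t (up to a global phase)."""
    return rot(t) @ np.diag([1, 1j]) @ rot(-t)

def slm(delta):
    """Phase-only SLM: shifts only the horizontal component by delta."""
    return np.diag([np.exp(1j * delta), 1])

def output_state(delta, t=-np.pi / 4):
    E_in = np.array([1, 0])                  # horizontal input polarization
    return qwp(t) @ slm(delta) @ qwp(t) @ E_in
```

For any SLM phase δ, the output Stokes parameter S3 vanishes: with both plates at −45° only linear polarization states are generated, with an orientation that rotates with δ.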
In this case, the applied hologram is equal to the generated field’s (Stokes field) phase Φ 12 36 , 39 , 40 , defined by the Stokes field Σ 12 = S 1 + i S 2 = A 12 exp(iΦ 12 ). Following this, we can directly choose the order of the created vector field or, more precisely, the included V-point singularity index σ 12 by the choice of the hologram. Fully-structuring focal 4D fields After generating the tailored paraxial beam, the phase and polarization structured field can be transformed into the non-paraxial regime. For this purpose, we image the second half of the SLM (Fig. 1 a) onto the back-aperture of the applied MO (Fig. 1 b). Here, a 2 f 1 –2 f 2 system is applied as imaging system adapting the transverse size of the paraxial field to the size of the back-aperture (slightly overfilled back-aperture). In the Fourier plane between lenses, we filter the zeroth diffraction order of the SLM, which includes the modulated paraxial field. Note that guiding mirrors (silver mirror; dichroic mirror, Thorlabs DMLP605R) are chosen to have almost no effect on the polarization. However, deviations of the intended field, investigated at the position of the back-aperture of the focusing MO, are rectified by correction holograms on the SLM 41 . By tightly focusing this corrected, paraxial field by the MO (Nikon, TU Plan ELWD, ×100, NA = 0.8 in air, working distance 4.5 mm), a non-paraxial light field 4D structured in amplitude, phase and polarization, i.e., a fully-structured 32 , 34 focal field is formed. The respective degrees of freedom can be customized by the choice of holograms on the SLM. Within our results we present focal fields created by combining higher-order vector beam, whose polarization structure is of flower- or spider-web-shape, with additional phase vortices (Fig. 2 , NA = 0.8, z = 0). 
If a flower (web) of index σ 12 without additional phase modulation ( \(\ell = 0\) ) is tightly focused, a focal intensity landscape resembling a | σ 12 − 2|-fold dark star (bright flower) is realized 10 . This means that by the index σ 12 we are able to tailor the total intensity. In addition, the ratio of intensity contributions assigned to transverse and longitudinal polarization states can be varied as indicated by the “max” or “mean” values (ratio of maximum or mean value of \(|{\cal{E}}_{x,y,z}|^2\) to the maximum or mean value of \(|{{{\cal{E}}}}|^2\) , respectively; see ref. 10 ). Another tool for further tailoring the focal field is the addition of phase vortices of chosen charge \(\ell\) to the input light field. As presented in Fig. 2 , the total focal intensity \(|{{{\cal{E}}}}(x,y,0)|^2\) as well as contributions \(|{\cal{E}}_{x,y,z}(x,y,0)|^2\) vary depending on \(|\ell |\) . For example, the transverse size first decreases ( \(|\ell | \le 4\) ) then increases again ( \(|\ell | > 4\) ) with growing \(|\ell |\) . Moreover, the max and mean values change. Interestingly, for the focused flower (a) as well as web (b) structure a Gaussian-like distribution for \(|{{{\cal{E}}}}|^2\) is achieved if a vortex of charge \(|\ell | = |\sigma _{12}|/2 = 4\) is applied, which we verified by simulations for different σ 12 . In this case, the contribution of \({\cal{E}}_z\) is relatively small (see max). Furthermore, distributions for a focused flower (a) with additional vortex \(\ell = \pm 3\) (±5) resemble the ones for a focused web (b) with \(\ell = \pm 5\) (±3). Here, for \(|\ell | = |\sigma _{12}/2 - 1|\) a very strong \(|{\cal{E}}_z|^2\) contribution of Gaussian shape is created, whereas for \(|\ell | = |\sigma _{12}/2 + 1|\) longitudinal contributions are significantly smaller and of donut shape. 
Note that, besides tailoring intensity landscapes in the focal plane ( z = 0), the additional modulation of phase results in the customization of the total focal volume \({{{\cal{E}}}}(x,y,z)\) as indicated in ref. 44 . Analyzing non-paraxial fields Tailored non-paraxial 4D fields are analyzed by a fluorescent SAM. For this purpose, the probe, consisting of the monolayer produced on a glass cover slip (refractive index n = 1.33, thickness 170 μm), is placed in the focal plane of the non-paraxial field. Note that the fluorescent molecules are directly irradiated by the light field, thus, the probe is placed with the monolayer at the bottom on its holder (monolayer-glass order in beam propagation direction, see Fig. 1a ). By this, aberration effects occurring when the beam is transmitted through the cover slip are avoided, as the MO focuses aberration-free in air. Excited fluorescence is observed in transmission, passing through the cover slip. For collecting scattered fluorescence and imaging the fluorescence in the plane of the monolayer (=focal plane) onto a CCD camera (Photometrics CoolSNAP MYO), we apply a high-NA MO (Nikon C-AB Abbe Condenser, ×100, NA = 0.9) in combination with a lens (focal distance: 500 mm). The exciting light field ( λ = 532 nm) is filtered in front of the CCD (longpass filter, Thorlabs FEL0550), so that pure fluorescence ( λ fl ~ 594 nm) is observed. Note that noise within recorded fluorescence images is reduced by taking the mean value of ten images and applying background subtraction. Self-assembled monolayers For the fluorescent SAM a sulforhodamine B silane (C 36 H 51 N 3 O 9 S 2 Si) was used because of its strong fluorescence and wide application on surfaces as, for example, in bio arrays 45 . The xanthene substructure in rhodamine is well known and used in a great variety of fluorophores 46 , 47 . 
Sulforhodamine B (in solution) has its absorption maximum at a wavelength of 558 nm and its emission maximum at 576 nm, and is therefore a red emitting fluorophore. Further, rhodamine has a rigid structure with a broad π -system with a length of about 1.3 nm. Note that depending on the tilting angle of the molecule on the surface its transverse dimension may vary between 1.3 nm and approximately 2 nm (Fig. 8 ). The preparation and synthesis (see Supplementary Methods ) of the SAMs of the sulforhodamine B silane on glass ( λ abs = 572 nm; λ fl = 594 nm) is depicted in Fig. 8 (idealized structures). The successful formation of the densely packed molecules into a monolayer has been proven by surface analysis (see Supplementary Methods ), which shows the expected and desired assembly of triethoxysilanes reacting with glass (SiO 2 ) surfaces 27 , 28 , 29 , 30 , 31 . Fig. 8 Preparation and synthesis of SAMs of sulforhodamine B silane on glass. The preparation of a sulforhodamine B chloride and, subsequently, b sulforhodamine B silane, as well as c the final formation of the SAM are illustrated (idealized visualization) Full size image The structure of the SAM is dependent on the arising covalent bonds between the silane and the surface and the interactions between the sulforhodamine B moieties in the backbones of the silane. Thus, the sulforhodamine B molecules self-assemble in the sense that they interact and assemble in a spatially ordered way due to π – π stacking. The resulting SAM consists of nano-scale domains in which molecules exhibit similar orientations. However, the whole SAM on the glass surface reveals a random orientation of dipole moments d on a micrometer level, with positive d z due to the defined bonding site of rhodamine B on silane. In addition, aromatic dyes typically tend to stack due to their π -system, which may vary the SAM’s characteristics. 
Note that, due to overlapping of absorption and emission spectra, neighboring rhodamine units can excite each other (self-excitation). We expect a maximum interaction distance of three to four molecules within the monolayer. SAMs were characterized in detail by different analytic methods. Analytical data can be found in the Supplementary Methods . Numerical approaches To simulate the expected focal field distributions as well as fluorescence images, we applied numerical methods. As described in refs. 10 , 48 , we numerically calculate the non-paraxial electric field \({{{\cal{E}}}}(x,y,z) = [{\cal{E}}_x(x,y,z),{\cal{E}}_y(x,y,z),{\cal{E}}_z(x,y,z)]^T\) (here for ( z = 0)) by solving Richards and Wolf’s integrals 49 via fast Fourier transform (FFT) operations (NA = 0.8, refractive index n = 1). For this purpose, the input (transverse) vectorial light field of order σ 12 with additional phase vortex of charge \(\ell\) is described as $${\mathbf{E}}_{{\mathrm{in}}} = \left[ {\cos \left( {\frac{{\sigma _{12}}}{2}\cdot \phi } \right),{\kern 1pt} \sin \left( {\frac{{\sigma _{12}}}{2}\cdot \phi } \right)} \right]^T\cdot \exp \left( {{\mathrm{i}}\ell \phi } \right),$$ (3) whereby ϕ represents the polar angle. For the calculation of the resulting fluorescence, we utilize Eq. ( 1 ). Considering the size of the fluorescent molecules as well as of the formed domains (see above), we assume N × N = 1000 × 1000 randomly oriented molecules in the respective numerical field of view (FoV) of the calculated non-paraxial light field (( x , y )-plane, b × b with b = 256 px). Thus, we create a three-dimensional array d ( u , v ) = [ d x ( u , v ), d y ( u , v ), d z ( u , v )] T with u , v ∈ [0, N ] reflecting the number of molecules. Then, we rescale the respective array to the numerical size of the FoV ( d ( u , v ) with u , v ∈ [0, N ] → d ( x , y ) with x , y ∈ [0, b ]), so that one FoV pixel embeds N / b molecules. Next, we calculate the transversely varying fluorescence rate by Eq. ( 1 ). 
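The dipole bookkeeping described above can be sketched as follows; sizes are scaled down for illustration (the paper uses N = 1000 and b = 256), and a uniform placeholder stands in for the calculated Richards-Wolf focal field:

```python
import numpy as np

# Sketch of the dipole bookkeeping: N x N randomly oriented molecules
# (d_z > 0) are mapped onto a b x b field of view, so each FoV pixel
# collects the fluorescence of (N/b)^2 molecules. Illustrative sizes;
# a uniform placeholder replaces the Richards-Wolf focal field.
N, b = 100, 25
k = N // b                                        # molecules per pixel edge
rng = np.random.default_rng(1)
d = rng.normal(size=(N, N, 3))
d[..., 2] = np.abs(d[..., 2])                     # defined bonding: d_z > 0
d /= np.linalg.norm(d, axis=-1, keepdims=True)

E = np.ones((b, b, 3), dtype=complex)             # placeholder focal field
E_mol = np.repeat(np.repeat(E, k, axis=0), k, axis=1)    # field per molecule
rate = np.abs(np.einsum('uvc,uvc->uv', d, E_mol)) ** 2   # Eq. (1) per molecule
R = rate.reshape(b, k, b, k).sum(axis=(1, 3))     # fluorescence per FoV pixel
```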
In a last step, when we compare simulations and experimental results, we consider the spatial resolution achieved by the system imaging the fluorescent monolayer onto the CCD camera (pixel size: 4.54 μm). Thus, we sum up 5 × 5 adjacent pixels of the numerical FoV so that the spatial resolution is decreased according to the experimental system. Note that the PSF of the imaging system (see below) is not considered here. Graphical fluorescence analysis In order to analyze our experimental data in comparison to numerical data in detail, we study the fluorescence intensity dependent on the azimuthal angle. Results are presented in Figs. 6 and 7 . For this analysis, we first choose a ring-shaped subspace within fluorescence images, as marked in white in experimental (red edged) and numerical images (blue edged; original resolution). The ring is positioned in such a way that it embeds the most conspicuous characteristics of the intensity images while having a width of 2–3 pixels in experiment (10–12 pixels in numerics). In the next step, the ring is divided into angular segments of size δα , each including at least three intensity pixels in the experimental case (experiment or simulation: δα = 10° to 45° or δα = 5° to 10°). We calculate the mean values of the normalized intensity per angular segment within the ring and plot these in dependence on the respective angular position α (central value of angular segment; see Figs. 6 or 7a , bottom left). Experimental and numerical data points are visualized by red and blue circles within graphs, respectively. For experiment and simulation, the errors in intensity represent the standard deviation of the respective mean value. The angular position error Δ α for the experimental results considers the PSF of the imaging system and the self-excitation of neighboring molecules. 
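The ring-segment averaging can be sketched as follows (all sizes, radii and bin counts are illustrative choices):

```python
import numpy as np

def angular_profile(I, r0, r1, n_bins=36):
    """Mean and standard deviation of the intensity image I per angular
    bin inside the ring r0 <= r <= r1 (square image assumed; sizes
    illustrative)."""
    n = I.shape[0]
    x = np.arange(n) - (n - 1) / 2
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    alpha = np.mod(np.arctan2(Y, X), 2 * np.pi)
    ring = (r >= r0) & (r <= r1)                  # ring-shaped subspace
    edges = np.linspace(0, 2 * np.pi, n_bins + 1)
    idx = np.digitize(alpha[ring], edges) - 1     # angular segment index
    vals = I[ring]
    mean = np.array([vals[idx == i].mean() for i in range(n_bins)])
    std = np.array([vals[idx == i].std() for i in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])      # central segment angles
    return centers, mean, std
```

For an experimental image, the returned standard deviations correspond to the intensity error bars of Figs. 6 and 7; the angular error bars are estimated separately from the imaging PSF, as detailed next.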
More precisely, we assume each molecule of the SAM as a self-luminous point in the microscopic imaging system, so that the transverse position error (Cartesian coordinates) of this point can be described by the half of the full width half maximum (FWHM) of the PSF given by 50 $${\mathrm{FWHM}} = \frac{{0.51\lambda _{{\mathrm{fl}}}}}{{{\mathrm{NA}}_{{\mathrm{im}}}}} \approx 337{\kern 1pt} {\mathrm{nm}},$$ (4) (wavelength of fluorescence λ fl = 594 nm; NA of imaging MO NA im = 0.9). Further, we consider the possible self-excitation with a maximum interaction distance of four molecules, i.e., an additional error in transverse position of ±4 ⋅ s , s = 2 nm ( s : long axis of fluorescent molecule). Hence, in total, we calculate a transverse position error of $$\begin{array}{c}\Delta x = \Delta y = \pm \left( {\frac{{{\mathrm{FWHM}}}}{2} + 8{\kern 1pt} {\mathrm{nm}}} \right)\\ = \pm 176{\kern 1pt} {\mathrm{nm}} \approx \pm 0.33\lambda .\end{array}$$ (5) From this, we can derive the angular error via propagation of uncertainty, resulting in $$\Delta \alpha = \Delta x\cdot (\sin \alpha - \cos \alpha )/r{,}$$ (6) with r as the central radius of the ring subspace. Obviously, by increasing NA im or decreasing λ fl one can reduce the FWHM and, thereby, Δ x or Δ α , which improves lateral imaging resolution. Data availability Experimental source data are available from the corresponding author upon reasonable request.
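As a quick numerical check of the error estimates in Eqs. (4)-(6) above (values taken from the text: λ fl = 594 nm, NA im = 0.9, s = 2 nm, λ = 532 nm):

```python
import math

# Numerical check of the stated error estimates, Eqs. (4)-(6).
lam_fl, na_im, s_mol, lam = 594.0, 0.9, 2.0, 532.0   # all in nm
fwhm = 0.51 * lam_fl / na_im                         # Eq. (4): ~337 nm
dx = fwhm / 2 + 4 * s_mol                            # Eq. (5): ~176 nm

def d_alpha(alpha, r, dx=dx):
    """Angular position error at angle alpha and ring radius r (nm), Eq. (6)."""
    return dx * (math.sin(alpha) - math.cos(alpha)) / r

# dx / lam ~ 0.33, i.e. the quoted +/-0.33 lambda transverse position error
```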
Structured laser light has already opened up a variety of applications: it allows for precise material machining, for trapping, manipulating or moving small particles or cell compartments in a defined way, and for increasing the bandwidth of next-generation intelligent computing. If these light structures are tightly focused by a lens, like a magnifying glass used to start a fire, highly intense three-dimensional light landscapes are shaped, facilitating a significantly enhanced resolution in the named applications. These kinds of light landscapes have paved the way for pioneering applications such as the Nobel-prize-awarded STED microscopy. However, these nano-fields themselves could not be measured, since tight focusing forms field components that are invisible to typical measurement techniques. Up to now, this lack of appropriate metrological methods has impeded the breakthrough of nano-structured light landscapes as a tool for material machining, optical tweezers, or high-resolution imaging. A team around physicist Prof. Dr. Cornelia Denz of the Institute of Applied Physics and chemist Prof. Dr. Bart Jan Ravoo of the Center for Soft Nanoscience at the University of Münster (Germany) successfully developed a nano-tomographic technique which is able to detect the typically invisible properties of nano-structured fields in the focus of a lens, without requiring any complex analysis algorithms or data post-processing. For this purpose, the team combined its knowledge in the fields of nano-optics and organic chemistry to realize an approach based on a monolayer of organic molecules. This monolayer is placed in the focused light field and responds to the illumination by fluorescence, embedding all information about the invisible properties. Detecting this response enables the distinct identification of the nano-field from a single, fast and straightforward camera image. 
"This approach finally opens up the previously unexploited potential of these nano-structured light landscapes for many more applications," says Cornelia Denz, who is heading the study. The study has been published in the journal "Nature Communications".
10.1038/s41467-019-12127-3
Biology
Pesticides influence ground-nesting bee development and longevity
Nicholas L. Anderson et al, Chronic contact with realistic soil concentrations of imidacloprid affects the mass, immature development speed, and adult longevity of solitary bees, Scientific Reports (2019). DOI: 10.1038/s41598-019-40031-9 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-40031-9
https://phys.org/news/2019-03-pesticides-ground-nesting-bee-longevity.html
Abstract The non-target effects of pesticides are an area of growing concern, particularly for ecologically and economically important organisms such as bees. Much of the previous research on the effects of neonicotinoids, a class of insecticide that has gained attention for non-target effects, on bees focused on the consumption of contaminated food resources by a limited number of eusocial species. However, neonicotinoids are known to accumulate and persist in soils at concentrations 2 to 60 times greater than in food resources, and may represent an important route of exposure for diverse and ecologically important ground-nesting bees. This study aimed to assess the effect of chronic contact exposure to realistic soil concentrations of imidacloprid, the most widely used neonicotinoid pesticide, on bee longevity, development speed, and body mass. Cohorts of Osmia lignaria and Megachile rotundata were used as proxies for ground-nesting species. We observed species- and sex-specific changes to adult longevity, development speed, and mass in response to increasing concentrations of imidacloprid. These results suggest that chronic exposure to nesting substrates contaminated with neonicotinoids may represent an important route of exposure that could have considerable physiological and ecological consequences for bees and plant-pollinator interactions. Introduction For much of the past two decades, research on the lethal (e.g., increased mortality over 24–48 hours) and sublethal (e.g., reduced performance) non-target effects of neonicotinoids on pollinators has primarily focused on the consumption of contaminated pollen and nectar in honeybees and, more recently, bumblebees 1 . Although there appears to be no consistent effect on adult mortality rates in honeybees at dosages commonly recovered from pollen and nectar, a wide range of significant sublethal effects of acute and chronic exposure are well documented 2 . 
Observed sublethal effects include delayed larval development 3 , impaired mushroom body growth and neurological function 4 , 5 , 6 , and disruptions to reproduction including reduced production of reproductive female offspring 7 , 8 , 9 , 10 . While the consumption of neonicotinoids by honeybees and bumblebees may have important economic and ecological implications, there is also a need to assess additional routes of exposure and bee species to gain a better understanding of the non-target effects of neonicotinoids on bee communities as a whole. With most bees nesting underground 11 , prolonged contact with neonicotinoid contaminated soils may represent a significant route of exposure for many species. However, field-scale assessments of the effects of neonicotinoids on native bees have largely ignored the potential effects of contaminated nesting resources even when a number of affected species are ground-nesting and not thought to collect food resources from treated plants, e.g. 12 . While lethal concentrations of neonicotinoids are higher for contact than oral exposure 13 , soil concentrations of neonicotinoids often reach higher and more persistent levels than those in pollen and nectar. Soil concentrations of imidacloprid, a commonly used neonicotinoid, are often between 12–18 ppb, with values of up to 650 ppb reported, compared to 1–11 ppb in pollen and nectar 14 , 15 , 16 , 17 , 18 . Soils become contaminated with high concentrations of neonicotinoids as a result of much of the active ingredient, commonly applied as a seed treatment, spreading into the surrounding soil rather than being absorbed by targeted plants 16 , 19 , returning to the soil as treated plant material decomposes 20 , and having a relatively long half-life in soils 14 , 21 , 22 , 23 . 
Additionally, the long immature development period, relative to adult lifespan, exhibited by ground-nesting bees may amplify the effects of contaminated soils as the toxicity of neonicotinoids increases with exposure time 24 , 25 . The lack of an assessment of the effects of chronic contact exposure to realistic soil levels of neonicotinoids represents a major gap in our current knowledge, especially given the number of species at risk. Using imidacloprid, the most well-studied member of the neonicotinoid insecticide family 24 , 26 , we evaluated the sublethal effects of chronic contact exposure to realistic soil concentrations during immature development on Osmia lignaria Say, 1837 and Megachile rotundata (Fabricius, 1787). While not ground-nesting species themselves, O . lignaria and M . rotundata belong to genera containing ground-nesting species and were used previously to approximate the effects of soil conditions on soil-dwelling species 27 . The benefits to employing these species as proxies for ground-nesting bees are that they are easy to collect and rear and represent a worst case scenario of soil contact without a nest cell lining - structures that are highly variable within and between taxa 28 , 29 , 30 , 31 , 32 , 33 . We hypothesised that chronic contact exposure to imidacloprid would disrupt normal bee physiological functioning, possibly by altering the expression of genes associated with metabolism or detoxification 34 , 35 , 36 or reducing motor function 5 . These changes were expected to manifest as a decrease in body mass, development speed, or immature or adult longevity which could affect populations by reducing the total number of nest cells provisioned or altering emergence timing which disrupts mating and flower visitation. Due to differences in body mass (Table 1a ), life histories (Table 1b ), and the number of chromosomes (i.e., haplodiploid sex determination), we predicted that observed effects would be stronger for M . 
rotundata and male bees when compared to O . lignaria and female bees, respectively. Table 1 Differences in the ecology of and the methodologies used for Osmia lignaria and Megachile rotundata . Full size table Results The effects of chronic contact exposure to realistic soil concentrations of imidacloprid during immature development varied between O . lignaria and M . rotundata and often between males and females of the same species. In O . lignaria , we only detected an effect on adult female longevity which had an inverted u-shape, with a slight increase in longevity at low concentrations of imidacloprid and a decrease at high concentrations (Figs 1 and S1 – S3 ; Tables 2 – 4 ). Individuals treated with the highest concentration, 100 ppb, lived an average of 4.5 and 5 days fewer than 0 and 7.5 ppb treated bees (P = 0.032, P = 0.011, respectively). However, it is possible that these effect sizes are underestimated as most female O . lignaria in the 0 and 7.5 ppb groups survived until 14 days after emergence when they were censored for use in a concurrent study. Additionally, no O . lignaria females treated with the 15 ppb imidacloprid solution died before being censored (n = 16), and, thus, we were not able to fit a survival function for this group or statistically compare them to the other treatment levels. Despite this, we interpret the 15 ppb female O . lignaria as having lived longer than their 100 ppb treated counterparts and that there is a potential trend for increased longevity over individuals in the control group. Effects on male O . lignaria adult longevity and mass are potentially obscured by a loss of statistical power caused by an equipment malfunction during the overwintering period resulting in the loss of 70% (66 individuals) of adult male bees from across all treatments (Table S1 ). Therefore, we advise caution when interpreting the result of no detected effects on male O . lignaria adult longevity and mass. 
Figure 1 Effect of realistic soil concentrations of imidacloprid on adult bee longevity. Survival curves represent the proportion of bees that were alive on a given day. Inset graphs display the log hazard ratios ±95% confidence intervals (y-axis) associated with each imidacloprid treatment level (x-axis). Values below the centre line (i.e. more positive) represent a higher probability that an individual will die on a given day, provided that it has not already done so, relative to the overall mean. Values above the line indicate the opposite. Capital letters are used to signify significant differences (P < 0.05). *No adult female O . lignaria in the 15 ppb group (n = 16) died before being censored for a concurrent project, and a survival curve and hazard ratios cannot be estimated. Full size image Table 2 Summary of Cox Proportional-Hazards Regression models for bee longevity. Full size table Table 3 Summary of the Prentice, Williams, and Peterson total time extension for multiple events of the Cox Proportional-Hazards Regression models for bee development speed. Full size table Table 4 Summary of mixed-effects models for bee mass. Full size table While not statistically significant in comparison with our a priori α, there were strong trends for inverted u-shaped effects on female M . rotundata development speed and mass (Figs 1 – 3 and S1 ; Tables 2 – 4 ). Individuals treated with 15 ppb developed 1–3 days more slowly in both the pre- and post-overwintering period and weighed 11–20% more than the other treatment levels. Female M . rotundata treated with 100 ppb developed approximately 2 days faster than control bees during the pre-overwintering stage. Figure 2 Effect of realistic soil concentrations of imidacloprid on Megachile rotundata development speed. Curves represent the transition from one development stage to the next. 
For example, the group of curves between LA and CB for pre-overwintering male bees represents the transition from a larva to a cocoon-building larva under each treatment. Inset graphs depict the log hazard ratios ±95% confidence intervals (y-axis) associated with each imidacloprid treatment level (x-axis). Values above the centre line (i.e. more negative) represent a lower probability that an individual will transition to the next stage on a given day, provided that it has not already done so, relative to the overall mean. This would result in longer development time. Values below the centre line represent the opposite. Capital letters are used to indicate significant differences (P < 0.05). LA: larva; CB: cocoon-building larva; PP: pre-pupa; PU: pupa; PE: pre-emergent adult; AD: adult; OW: overwintering period. Full size image Figure 3 Effect of realistic soil concentrations of imidacloprid on Megachile rotundata mass. We included the initial natal cell mass as a covariate and the mass of the shed cocoon in the adult mass. Arrows indicate a treatment level that was significantly different from all other levels at that life stage (P < 0.05). PP: pre-pupa; PU: pupa; PE: pre-emergent adult; AD: adult. Full size image Chronic contact exposure to imidacloprid significantly decreased male M . rotundata adult longevity, increased post-overwintering development speed, and had a u-shaped effect on mass (Figs 1 – 3 and S1 ; Tables 2 – 4 ). Male bees treated with the 15 and 100 ppb imidacloprid solutions during development lived 3 and 4 days longer as adults compared to control bees (P = 0.040, P = 0.007, respectively) and there was a trend suggesting that individuals in the 100 ppb treatment lived 2.5 days longer than those in the 7.5 ppb group (P = 0.072). During the post-overwintering period, males in the 100 ppb group developed 1–2 days faster than those in the 0, 7.5, and 15 ppb groups (P = 0.018, P = 0.039, P = 0.010, respectively). Male M . 
rotundata treated with the 15 ppb imidacloprid solution were 9% lighter than those treated with 0, 7.5, and 100 ppb (P = 0.013, P = 0.017, P = 0.037, respectively). Discussion The results of this study suggest there are multiple ways imidacloprid contaminated soils can affect bees. Chronic contact exposure in O . lignaria and M . rotundata resulted in species- and sex-specific effects on adult longevity, immature development speed, and mass that could have negative consequences for bees more generally. In the case of O . lignaria , the main effect was decreased adult female longevity at high concentrations of imidacloprid. For M . rotundata , males responded to increasing imidacloprid exposure with a significant increase in adult longevity and development speed and a u-shaped response in mass. For females, there were strong trends for inverted u-shaped effects on development speed and mass. Species- and sex-specific variation in the effects of imidacloprid on bees have been reported in other studies, reviewed in 1 , 37 , and could be the result of differing body sizes 38 , life histories (which impacted the number of imidacloprid applications, see Methods ), genetic differences 17 , 39 , or number of chromosomes 40 . Despite these limitations, this study demonstrates the potential for neonicotinoid contaminated soils to affect bees. When exposed to imidacloprid at soil concentrations, we often found biphasic hormetic patterns where bees had opposite responses at intermediate and high concentrations. Hormetic responses are thought to occur when organisms compensate for the negative effects of a stressor at low intensities, often at the expense of other processes, but are unable to keep up at higher intensities 41 , 42 . Reports of hormetic effects of neonicotinoids, including imidacloprid, are not uncommon for insects 43 , 44 , 45 , 46 ; however, the underlying mechanisms are unknown. Extrapolating from the results of Derecka et al . 34 and De Smet et al . 
36 about the effects of imidacloprid ingestion on honeybee gene expression, the hormetic responses observed here for M . rotundata development speed and mass may be due to increased expression of detoxification and cuticular protein genes and decreased expression of genes that regulate development. If increased expression of detoxification or cuticular protein genes diverts energy away from other processes 34 , 36 , then bees would be expected to develop slower or have lower mass. At higher concentrations of imidacloprid, upregulating these genes may be inadequate, and additional strategies are needed such as increased development speed, possibly by decreasing Hsp90 expression 34 , in order to reach the pre-emergent adult stage and the associated thicker cuticle. While this hypothesis explains many of the sub-lethal effects we observed, the insect nervous and endocrine systems are intricately connected and further research is necessary before reaching conclusions about the processes underlying the connection between neurotoxic neonicotinoids and bee development. In addition to uncertainties about the mechanisms behind sub-lethal effects of imidacloprid on bees, many questions remain about the generalizability across bee species due to genetics and the properties of nest cell linings. While there are ground-nesting Osmia and Megachile , ground-nesting bees are spread across all seven bee families. If the effects of neonicotinoids can vary based on honeybee genotype 17 , 39 , it seems likely that responses will vary across Anthophila. Further research on the effects of neonicotinoids across a broader range of taxa will allow us to describe this variability and better predict species’ reactions. Additionally, the effect of nest cell linings on the amount of contact bees have with contaminated soil is unknown. 
Nest cell linings, commonly secreted from the Dufour’s gland, consist of hydrophobic compounds 47 and are thought to help maintain moisture homeostasis in brood cells 48 . However, there is great variation between and within species in the use and structure of linings 28 , 29 , 30 , 31 , 32 , 33 and these barriers may be more permeable than commonly thought as water is hypothesised to cross through the lining into the nest cell 49 . The bee toxicology literature would be greatly enriched by the development of assays to elucidate the permeability characteristics of nest cell linings and to determine which, if any, soil contaminants can cross these barriers. Although there are reasons to be cautious in applying our results to other bees, the effects observed here suggest that imidacloprid exposure, even at concentrations corresponding to fields 1 to 2 years after treatment 14 , 16 , 17 such as 7.5 and 15 ppb, can have consequences for bee development and survival. Bees nesting in these soils may have increased male or female adult longevity, decreased female development speed, increased female mass, or decreased male mass. While increased longevity and female mass could have a positive effect on bee populations - by increasing the number of cells provisioned and flight ability 50 , 51 , 52 - reduced energy expenditures as a result of impaired foraging behaviors 10 , 53 , 54 could cause a similar pattern in body mass and would reduce fecundity. If prolonged contact exposure decreases sperm quality in ground-nesting bees like oral exposure does in honeybee drones 8 or if reduced mass affects male quality in other ways, then increased male longevity may negatively impact bee populations. By living longer, low-quality males could mate with more females, reducing the number of successful fertilisations and driving more male-biased sex ratios. Such an effect would be particularly problematic for individuals or species that only mate once or a few times. 
In areas with current or long-term imidacloprid use, effects similar to those observed in our 100 ppb treatment may be more common 14 , 16 . Because the number of offspring produced depends, in part, on adult female longevity, one of the biggest threats to populations in these areas is reduced adult female longevity as observed for O . lignaria . Further, increased male adult longevity or earlier emergence could intensify the population-level effects described for males in areas with 7.5 to 15 ppb imidacloprid. Further investigations into how neonicotinoid contaminated soils impact bee populations may help elucidate the relationship between our results and the decreases in native bee populations in agricultural landscapes reported elsewhere 12 . Our results, along with the knowledge that bees are unable to detect neonicotinoids via their olfactory senses and show a preference for contaminated food sources 55 , 56 , suggest that chronic contact exposure to realistic soil concentrations of neonicotinoids represents a potentially important route of exposure for ground-nesting bees. With the primary approach to bee conservation being the conversion of agricultural fields and adjacent lands into flower-rich habitats 57 , 58 , 59 , 60 , caution is advisable in landscapes with a history of neonicotinoid use. If the effects observed here persist in the field, these areas might become ecological traps that lure bees to apparently good resources but actually serve as demographic sinks 61 . Additionally, while the responses of bees to specific neonicotinoids may differ, reviewed in 1 , 62 , pesticide contamination profiles are likely more complex than a single compound and contaminants may interact in complex ways to strengthen adverse effects on ground-nesting bees. 
This emphasises the importance of considering and evaluating the effects of chronic contact exposure during development on ground-nesting bee populations in order to better inform responsible pesticide use and to maximise the effectiveness of bee conservation strategies. Methods In order to accommodate differences between M . rotundata and O . lignaria , we modified the protocol for each species. These differences and changes are summarised in Table 1 and will be referenced when pertinent in the following description. Immature treatment with imidacloprid We purchased wild-caught, newly laid eggs and early instar larvae from Crown Bees (Seattle, WA) during the spring and summer of 2015. In total, 295 O . lignaria and 233 M . rotundata were used for this study (see Table S1 for detailed sample sizes). Individual bees and their pollen provisions were weighed together and placed into a well of a tissue culture plate (Table 1c ). Individuals from the same nest were stratified across the treatments to limit the potential genetic biases that exist when evaluating responses to imidacloprid 17 , 39 . Once individuals reached the second instar larval stage, they were treated every 48 hours with 0.5 μL of 0, 7.5, 15, or 100 ppb imidacloprid (Sigma-Aldrich, PN 37894) in saline solution (Equate Sterile Multipurpose Solution, PN 68113173188) applied topically to their abdominal segments. These concentrations reflect realistic soil concentrations previously reported elsewhere 14 , 16 , 17 . Saline solution was used as the solvent because it is less detrimental to larval bees than deionised water (Craig Huntzinger, personal communication ). Imidacloprid solutions were replaced every 96 hours and kept in the dark at room temperature. In order to maintain a consistent temperature and prevent desiccation, tissue culture plates were kept inside an unheated incubator at room temperature with a 250 mL beaker filled with water. 
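For scale, the per-application dose implied by this protocol can be sketched numerically, assuming 1 ppb ≈ 1 ng/mL for a dilute aqueous solution (an approximation made here; the paper reports concentrations only in ppb, and per-bee application counts are not given in this excerpt).

```python
# Back-of-envelope imidacloprid mass per topical application:
# 0.5 uL of a solution at conc_ppb, assuming 1 ppb ~= 1 ng/mL.
def dose_ng(conc_ppb, volume_ul=0.5):
    """Imidacloprid mass [ng] delivered in one application."""
    return conc_ppb * (volume_ul * 1e-3)  # ng/mL * mL -> ng

# The four treatment levels used in the study
for conc in (0, 7.5, 15, 100):
    print(f"{conc} ppb -> {dose_ng(conc):.5f} ng per application")
```

Even at the highest treatment level, a single 0.5 μL application carries only about 0.05 ng (50 pg) of active ingredient, which is why chronic, repeated exposure over the long immature development period is the relevant scenario.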
During this time, the chamber temperature was 23.6 ± 0.6 °C and the relative humidity was 84.5 ± 1.3%. We monitored bee survival and development daily and measured bee mass at important life stages: initial natal nest cell mass (egg and pollen provision), prepupa, pupa, pre-emergent adult, and emergent adult. Shed cocoon mass was added to emergent adult mass to isolate changes due to bee body mass. Tissue culture plates were left open until individuals began spinning cocoons. At that time, lids were replaced to aid in cocoon completion. Once bees constructed cocoons, development was monitored by back-lighting through individual cocoons using an LED light while observing through a stereomicroscope. In October, surviving individuals in their overwintering stages were stored at 4 °C to overwinter. During this time, we placed the tissue culture plates in 53 L plastic tote containers with a 250 mL beaker filled with water to prevent individuals from desiccating. Bees were checked twice a week to ensure humidity was appropriate and to monitor for mould growth. There were no visible signs of mould growth for either species. In the spring of 2016, bees were removed from cold storage and allowed to emerge ( O . lignaria ) or finish their development ( M . rotundata ). During this period, we reared M . rotundata at 28.2 ± 0.1 °C and 78.9 ± 1.8% relative humidity. In order to keep the number of imidacloprid solution applications consistent across individuals of the same species, treatment was stopped after the first individual emerged as an adult. This resulted in zero applications for O . lignaria and nine for M . rotundata in 2016 (Table 1e ). Effects on adult longevity After emergence, each adult was given a unique paint identifier on the thorax using acrylic paint (Royal Langnickel ACR12). The paint was periodically checked and reapplied as necessary (i.e. if it was damaged or partially missing). For painting, bees were temporarily anaesthetised either by chilling ( O . 
lignaria ) or with carbon dioxide ( M . rotundata ). Megachile rotundata are less cold tolerant (Tim Krogh, personal communication ) so they required a modified methodology to prevent undue stress. Adult bees were placed in 85 L plastic tote containers separated by treatment and species. We provided each treatment group with a flower array: four flowers provided Typha sp . pollen, two provided a 2.0 M sucrose solution, and two provided a 1.0 M sucrose solution. Every four days the colour, location within the array, sucrose concentration, and essential oil ( Eugenia caryophyllata , Mentha spicata , Gaultheria procumbens , and Cymbopogon flexuosus ) used in the artificial flowers were randomised to mimic changing resource availability. Similar diets have been provided for other lab cultured bees with success (Emily Dobbs, personal communication ) 63 , 64 . We also provided nesting tubes, nesting substrates (Table 1d ), and water and replenished these resources as needed. However, no nest cells were completed. Adult foraging containers were kept in an environmental chamber with a 14:10 light:dark cycle to mimic the daylight patterns of late spring and early summer in Illinois (Philips 32 Watt Alto II PN F32T8/ADV835). The temperature of the environmental chamber was set to 24 °C for O . lignaria and 28 °C for M . rotundata . We assessed adult bee mortality and removed deceased individuals daily. Statistics Due to the differences in the number of treatments and total imidacloprid applied (Table 1e–f ), O . lignaria and M . rotundata were analysed separately. We pooled across sexes for analysis of larval longevity, but otherwise analysed male and female bees separately. Except where noted, α = 0.05 was used to determine statistical significance. Immature and adult longevity were analysed using Cox Proportional-Hazards Regression 65 using the ‘rms’ 66 and ‘survival’ 67 packages in the statistical program R 68 . 
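The longevity data behind these models are right-censored (some bees were still alive when removed from the experiment). The authors fit Cox proportional-hazards models in R; purely as an illustration of how survival curves like those in Figure 1 are built from censored data, here is a minimal Kaplan-Meier estimator in Python on made-up numbers (all values hypothetical, not from the study).

```python
# Minimal Kaplan-Meier estimator for right-censored longevity data.
def kaplan_meier(times, events):
    """times: day of death or censoring; events: 1 = died, 0 = censored.
    Returns [(day, S(day))], where S is the estimated probability of
    surviving past that day."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)       # individuals still alive and uncensored
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        # Group all individuals with the same event/censoring day
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:             # the curve only steps down at deaths
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve

# Hypothetical cohort of five bees, one censored alive at day 14
print(kaplan_meier([3, 5, 5, 9, 14], [1, 1, 1, 1, 0]))
```

Censored individuals (event = 0) leave the risk set without stepping the curve down; this is the sense in which the day-14 censoring of surviving O. lignaria females still informs the survival estimates.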
The proportional hazards assumption, checked with the “correlation with time” test described by Harrell 69 , was met for all longevity models (P > 0.096). When there was a significant effect of imidacloprid on longevity, we used Fisher’s LSD contrasts for post-hoc analysis. Differences in development speed (i.e. the number of days to life events) were analysed using the Prentice, Williams, and Peterson 70 total time extension for multiple events (PWP-TT) of the Cox Proportional-Hazards Regression model using the ‘rms’ and ‘survival’ packages in the statistical program R. We set ‘events’ as the transition points between important life stages: larva to cocoon building larva, cocoon building larva to prepupa, prepupa to pupa, pupa to pre-emergent adult, and pre-emergent adult to emergent adult. Separate models were used for the pre- and post-overwintering periods. We censored bees that died during the experiment on their last day of known activity (e.g., movement). The date on which individual bees began treatment with one of the imidacloprid solutions – termed “treatment start date” here – was included as a covariate in development speed models and had a significant effect (P < 0.001) in all models except for pre-overwintering M . rotundata females where it was subsequently removed (χ 2 1 = 0.01, P = 0.928). Increased development speed of bees laid later in the season (i.e. those with a later treatment start date) reflects a naturally occurring, yet not completely understood, phenomenon 71 . All development models met the assumption of proportional hazards (P > 0.295). Post-hoc analyses using Fisher’s LSD contrast were conducted when there was a significant effect of imidacloprid on development speed. Bee mass was analysed using linear mixed-effects models with first-order antedependence covariance structures to account for correlation between measurements taken from the same individual at unequally spaced time points in the MIXED procedure in SAS 9.4. 
We included initial natal cell mass as a covariate in the mass models as final adult size is known to be strongly correlated with pollen provision size 72 , 73 . This factor was significant in all mass models (F > 9.01, P < 0.004). In order to better meet the assumption of normality, outliers were identified by looking at the Q-Q plots of the residuals and removing extreme values identified by the ROBUSTSCALE option within the UNIVARIATE procedure in SAS 9.4. This approach resulted in dropping one female and three male O . lignaria . Because linear mixed-effects models are robust against mild departures from the normality assumption, we used an α = 0.025 for Shapiro-Wilk tests of normality. After removing extreme values, the residuals of the linear mixed-effects models were normally distributed (P > 0.034). We used Fisher’s LSD contrasts for post-hoc analyses when we detected a significant effect of imidacloprid concentration in the full models. Data Availability Data is available through the Illinois Data Bank ( ) and by contacting the corresponding author (N.L.A. nlndrsn2@illinois.edu ).
Results from a new study suggest that bees might be exposed to pesticides in more ways than we thought, and it could impact their development significantly. The study, published in Nature's Scientific Reports, looks at the non-target effects of pesticides on ground-nesting bees, a group that actually makes up the majority of bee species. Non-target effects refer to the effects on organisms other than the ones intended. Much of the research currently available on non-target effects of pesticides has been limited to honey and bumble bees and their exposure to pesticides when collecting pollen and nectar. While these previous studies have shown that pesticide consumption by honey and bumble bees can have important ecological consequences, this new study is one of the first of its kind to determine the effects of contact with pesticides, such as those that occur in soils, that other bee species might encounter. "This is an important piece of work because it's one of the first studies to look at realistic concentrations of pesticides that you would find in the soil as a route of exposure for bees. It's a very under-explored route, especially for some of the more solitary species that nest in the ground," said Nick Anderson, a graduate student in entomology who led the study with his advisor, Alex Harmon-Threatt, professor of entomology. A key difference between ground-nesting bees and their honey and bumble bee cousins is their smaller nests (in both size and number of bees), which are made by digging into the soil. Bee species that nest this way can stay in the ground up to 49 weeks out of the year, emerging for only 3 weeks to forage, mate, and lay eggs. This leaves a lot of time for the bees to be exposed to the chronic, low levels of pesticides found in the soil after agricultural land use. Results from the study showed that females and males responded differently to the pesticides, which could impact larger population dynamics. 
Credit: Jesse Wallace The researchers were particularly interested in a class of pesticides called neonicotinoids. Derived from nicotine, neonicotinoids are widely used for their effectiveness against insects such as Japanese beetles and emerald ash borers, but they can be toxic to pollinators. They also have a long half-life, meaning they can persist in the soil for long periods of time. Anderson and Harmon-Threatt used bees that are very close relatives to ground-nesting species because they are better suited for testing in the lab and have been used in previous research to approximate impacts on ground-nesting species. When the bees were exposed to neonicotinoids in the lab, the researchers looked at levels similar to those found in the field. Results showed that females grew larger and did not live as long, while males were smaller and lived much longer. This conclusion suggests that chronic and low-level exposure to pesticides might cause a hormetic response in bees, where, at low levels of pesticide exposure, bees appear to benefit in small ways. However, the long-term impacts of some of these changes might not be readily apparent. The researchers believe that these lower doses are causing changes in the bee's development, such as diverting energy from normal developmental processes to fortify physical and biochemical barriers to counter the effects of the pesticide. "When you're doing neonicotinoid work on something like bees, I think people expect the conclusions to say whether it's good or it's bad, but a lot of the relationships we're seeing are more complicated than that. There are a lot of factors and developmental processes that can be affected," said Harmon-Threatt. "As we develop new pesticides, we have to be able to understand the effects," said Anderson. "Our work is part of that kind of risk assessment. 
We want to know what the implications are for ground-nesting bees so that when we're using the land for agriculture or trying to restore it, we can minimize the impact on these species." This study lays the foundation for field work that the Harmon-Threatt lab will expand upon over the next five years. In 2018, Harmon-Threatt received a $1 million grant from the US Department of Agriculture's National Institute of Food and Agriculture to conduct research to better understand the role that soil contamination plays in bee diversity and conservation.
10.1038/s41598-019-40031-9
Biology
Cleverly located surface proteins make some pneumococcal strains especially dangerous
Anuj Pathak et al. Factor H binding proteins protect division septa on encapsulated Streptococcus pneumoniae against complement C3b deposition and amplification, Nature Communications (2018). DOI: 10.1038/s41467-018-05494-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-05494-w
https://phys.org/news/2018-08-cleverly-surface-proteins-pneumococcal-strains.html
Abstract Streptococcus pneumoniae evades C3-mediated opsonization and effector functions by expressing an immuno-protective polysaccharide capsule and Factor H (FH)-binding proteins. Here we use super-resolution microscopy, mutants and functional analysis to show how these two defense mechanisms are functionally and spatially coordinated on the bacterial cell surface. We show that the pneumococcal capsule is less abundant at the cell wall septum, providing C3/C3b entry to underlying nucleophilic targets. Evasion of C3b deposition at division septa and lateral amplification underneath the capsule requires localization of the FH-binding protein PspC at division sites. Most pneumococcal strains have one PspC protein, but successful lineages in colonization and disease may have two, PspC1 and PspC2, that we show affect virulence differently. We find that spatial localization of these FH-recruiting proteins relative to division septa and capsular layer is instrumental for pneumococci to resist complement-mediated opsonophagocytosis, formation of membrane-attack complexes, and for their function as adhesins. Introduction The invasive respiratory pathogens, Haemophilus influenzae, Neisseria meningitidis, and Streptococcus pneumoniae, have evolved similar strategies, such as the expression of polysaccharide capsules and complement Factor H (FH) recruiting surface proteins, to evade deposition of C3b and activation of the alternative pathway of the complement system 1 , 2 , 3 . The pneumococcal polysaccharide capsule is critical for virulence and is known to inhibit complement activity 4 . Localization sites of deposited complement on the bacteria in relation to the capsule are not known. The capsule is believed to form an external shield for C3 entry and for the covalent binding of C3b to underlying nucleophilic targets 5 , 6 , 7 . 
So far more than 97 capsular polysaccharides have been described, and all except for serotypes 3 and 37, are covalently anchored to the bacterial peptidoglycan via the wzy pathway 6 . The ovoid shape of pneumococci results from a combination of lateral and septal peptidoglycan synthesis. Pneumococcal growth occurs by formation of the lateral wall via a complex protein machinery referred to as the elongasome, while the septal wall is formed by a second machinery, which is assembled at the position for the FtsZ-ring 8 , the divisome 9 . The divisome drives membrane invagination and septal wall formation. To generate a new bacterial pole the septal wall needs to be split, requiring cell wall hydrolytic enzymes 10 , resulting in the formation of two new poles 11 . This process is highly variable leading to pneumococcal populations consisting of single cocci, diplococci and chains of different lengths 12 . The pneumococcal peptidoglycan not only contains covalently anchored capsular polysaccharide, but also a set of surface proteins carrying a LPxTG motif allowing sortase-mediated covalent linkage to lysine residues on the peptidoglycan stem peptides 13 . Another set of pneumococcal surface proteins, the choline-binding proteins, contain choline-binding motifs recognizing choline residues on teichoic acids of which wall teichoic acids (WTA) are covalently anchored to the peptidoglycan, while lipoteichoic acids (LTA) are anchored in the cytoplasmic membrane 14 . It is not known if there are differences in level and composition of cell wall associated macromolecules between the lateral and septal wall in the pneumococcus. The pneumococcal choline-binding protein PspC (CbpA) recruits FH from human serum that selectively inhibits amplification of deposited C3b, and accelerates the decay of the alternative pathway C3 convertase 15 , 16 , 17 . 
Apart from FH recruitment, different allelic forms of PspC have also been reported to interact with host factors like pIgR, vitronectin and SIgA 18 , 19 , 20 . While most pneumococcal strains carry a single pspC gene, the pneumococcal lineage CC138, successful in causing colonization and invasive disease, contains two closely linked pspC genes: pspC1 encodes a conventional choline-binding PspC (denoted PspC1 in CC138), and pspC2 a cell wall anchored LPxTG version of PspC (denoted PspC2), both able to bind human FH 21 , 22 , 23 . PspC2 lacks the motif responsible for pIgR interaction 24 . In the present study, we combine super-resolution imaging techniques, mutants affecting protein localization, and functional analyses to show that the division septum represents the pneumococcal Achilles heel in its capsular barrier defense against complement C3b deposition. To cope with the low content of capsular polysaccharide at division septa, which allows complement entry while the septa are being formed, pneumococci have evolved FH binding proteins localized at division sites. We show that the spatial positioning of virulence-associated cell wall proteins such as PspC, relative to the division septum and the capsular layer, has profound implications for defense against complement-mediated opsonophagocytosis, and formation of membrane attack complexes (MACs), needed for bacteria in an inflamed environment. Our data also demonstrate that a complement evasive protein, depending on accessibility outside the capsular layer, may mediate bacterial attachment to epithelial cells, potentially favouring healthy colonization. Results C3b deposition occurs at or close to division septa Encapsulated pneumococcal strains of serotypes 4 (TIGR4), 2 (D39), and 6B (BHN418) were incubated with human serum, and deposited complement C3b was monitored by anti-C3b staining (Supplementary Table 1 , Fig. 1a ). 
Using confocal microscopy, C3b antibodies were shown, in each of the three strains, to recognize deposited C3b as discrete bands on the cells (Fig. 1a ). TEM imaging of the serotype 6B strain BHN418 revealed bulky complement deposits precisely localized at some but not all division septa (18 out of 80 visible septa), the latter seen as thin electron-dense bands (Fig. 1b ). Deposition of C3b was also confirmed by immunogold staining and TEM (Supplementary Fig 1 ). When the isogenic non-encapsulated mutant BHN418 Δcapsule was examined by TEM, C3b was found to be deposited all around the cells, and immunostaining with C3b antibodies revealed the same uniform staining pattern (Fig. 1c, d ). SEM images of encapsulated BHN418 showed regularly spaced elevations on the bacteria that were absent at some division septa (Fig. 1e , arrows). As these elevations were completely absent in the capsular mutant BHN418 Δcapsule (Fig. 1f ), we suggest that they represent the cell wall associated serotype 6B capsule that appears less abundant at division septa. Fig. 1 Complement C3b deposition occurs at or close to division septa in encapsulated Streptococcus pneumoniae. a Representative immunofluorescence images showing distinct septal localization of C3b on pneumococcal strains TIGR4 (serotype 4), D39 (serotype 2) and BHN418 (serotype 6B). C3b was stained using goat anti-C3 antibody followed by incubation with anti-goat Alexa fluor 488 secondary antibody (green). The capsule was detected using respective rabbit anti-capsule serum and anti-rabbit Alexa fluor 594 antibody (red). Scale bar = 5 µm. b Representative TEM images of wt BHN418 after incubation with 20% normal human serum. C3b deposits are seen as dark rings at division septa. c Representative TEM image of BHN418Δ capsule after incubation with 20% normal human serum. A uniform deposit of C3b is observed. 
d Representative immunofluorescence images of C3b deposition on BHN418Δ capsule after incubation with 20% normal human serum. C3b was stained using goat anti-C3 antibody followed by incubation with anti-goat Alexa fluor 488 secondary antibody (green). e SEM images of wt BHN418. Arrows indicate two division septa lacking surface humps representing the capsule. f SEM images of BHN418Δ capsule devoid of surface humps. b – f Scale bar = 1 µm Full size image C3b and C5b-9 localize at division septa underneath the capsule To investigate localization patterns of the capsule and C3b in strains TIGR4, D39, and BHN418 we used super-resolution stimulated emission depletion (STED) microscopy and performed double staining. We observed that C3b deposition occurred mainly as distinct bands (rings) at division septa, possibly due to less capsule in these areas, as observed by SEM (Figs. 2a , 1e ). The edges of these C3b bands were localized underneath the capsular layer in all three strains (Fig. 2a ). C3b deposition on bacteria can result in formation of MAC 25 , 26 . We therefore investigated MAC complex formation in BHN418 using a specific antibody that detects a neo-epitope present only in polymeric, not monomeric, C9. Immunofluorescence microscopy of C5b-9 (MAC) staining of the bacteria revealed a band pattern in a few bacteria at or close to division septa (Fig. 2b ). While 15% (19 out of 124 cells counted) of BHN418 cells showed a C3b signal, only 5% (8 out of 173 counted) harbored C5b-9. This suggests that formation of MAC complexes does not occur at each division septum where C3b is deposited. C5b-9 (MAC) is the terminal product of the complement cascade, whose formation requires prolonged, stable complement deposition. Blockade at most septa is probably efficient enough that the cascade does not always progress. 
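The percentages quoted above are simple positive-cell fractions from the microscopy counts (19 of 124 cells C3b-positive, 8 of 173 C5b-9-positive). As a minimal sketch of that arithmetic, the counts from the text can be run through a small helper of our own; the ~95% normal-approximation interval is our addition for context, not part of the paper's analysis:

```python
import math

def positive_fraction(n_positive, n_total, z=1.96):
    """Positive-cell fraction with a normal-approximation ~95% CI (our addition)."""
    p = n_positive / n_total
    se = math.sqrt(p * (1.0 - p) / n_total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Counts reported in the text for wt BHN418:
c3b_frac, c3b_lo, c3b_hi = positive_fraction(19, 124)  # C3b-positive cells
mac_frac, mac_lo, mac_hi = positive_fraction(8, 173)   # C5b-9 (MAC)-positive cells

print(f"C3b:   {c3b_frac:.0%}, CI {c3b_lo:.1%}-{c3b_hi:.1%}")   # ~15%
print(f"C5b-9: {mac_frac:.0%}, CI {mac_lo:.1%}-{mac_hi:.1%}")   # ~5%
```

The intervals make clear that, even with a few hundred cells scored, the C3b and C5b-9 frequencies are distinguishable from one another.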
By comparing STED images of C3b and C5b-9 deposition there was a larger distance between the capsule and the edges of C5b-9 bands than between the capsule and C3b (Fig. 2c, d , Supplementary Fig 2 ). This suggests that the major portion of C3b in wild type pneumococci becomes deposited in or around the septal wall, whereas the C5b-9 MAC complexes are formed in the membrane beneath the wall. Fig. 2 C3b and C5b-9 MAC complexes localize underneath the capsular layer. a Super-resolution STED microscopy images of strains TIGR4, D39, and BHN418 after incubation with 20% normal human serum. Bacteria were sequentially stained with goat anti-C3 antibody followed by Atto647N labeled secondary antibody (green). The capsule was detected using respective rabbit anti-capsule serum and anti-rabbit Alexa fluor 594 antibody (red). Scale bar = 1 µm. b Representative immunofluorescence images of C5b-9 deposition on strain BHN418 after incubation with 20% normal human serum. Bacteria were sequentially stained with mouse anti-C5b-9 antibody and anti-mouse Alexa fluor 488 labeled secondary antibody (green). Bacterial membranes were stained with nile red (red). Scale bar = 5 µm. c Representative super-resolution STED microscopy image of C3b deposition on strain BHN418 after incubation with 20% normal human serum. Bacteria were stained for C3b and the capsule as in a . d Representative super-resolution STED microscopy image of C5b-9 deposition on strain BHN418 after incubation with 20% normal human serum. Bacteria were sequentially stained with mouse anti-C5b-9 antibody followed by Atto647N labeled secondary antibody (green). The capsule was detected using rabbit anti-6B capsule serum and anti-rabbit Alexa fluor 594 antibody (red). c , d Scale bar = 1 µm Full size image Factor H recruitment occurs at distinct sites underneath the capsular layer Pneumococci are known to prevent C3b deposition and amplification by recruitment of Factor H (FH). 
Exposing encapsulated strains, TIGR4, D39, and BHN418, to human FH and performing double staining with the respective capsular antibody and human FH-antibodies, revealed that FH was deposited at distinct sites in virtually all cells (Fig. 3a ). Furthermore, examining BHN418 cells with STED microscopy suggested that FH was mainly recruited to sites localized underneath the capsular layer (Fig. 3b ). Fig. 3 Factor H (FH) recruitment by encapsulated pneumococci occurs at division sites where FH binding proteins are localized. PspC1 and PspC2 localize differently on the pneumococcal cell. a Representative immunofluorescence images of distinct localization of FH on strains TIGR4, D39, and BHN418. After incubation with pure FH, bacteria were stained with goat anti-FH antibody and FITC-labeled secondary antibody (green). b Super-resolution STED microscopy images of FH (stained green) and capsule (stained red) on strain BHN418. a , b Scale bar = 1 µm. c Representative immunofluorescence images of PspC localization on strains TIGR4, D39, and BHN418. PspC was detected with mouse anti-PspC antiserum (D39) followed by anti-mouse Alexa fluor 488 (green) secondary antibody. Scale bar = 5 µm. d Super-resolution STED microscopy images of PspC1 (green) and the capsule (red) on strain BHN418. e Super-resolution STED microscopy images of different growth phases of strain BHN418 showing the distribution patterns for PspC1 (red) and PspC2 (green) during cell division and cell growth. d – f Scale bar = 1 µm. f Schematic representation of the PspC1 and PspC2 distribution during different stages of cell division and cell growth. g The distances between PspC1 rings, and between PspC1 rings and the closest pole in BHN418 ( n = 166) are shown. Linear fitting curve is shown in the plot Full size image Most clinical isolates of Streptococcus pneumoniae express a FH recruiting protein PspC (CbpA) that is anchored to choline residues on either LTA or WTA 3 . 
Immunofluorescence images using PspC antibodies revealed that PspC is positioned as one distinct band per single coccus in a chain (Fig. 3c ) in all the three pneumococcal strains examined (TIGR4, D39, and BHN418). For the serotype 6B strain BHN418, we further found by STED microscopy that its choline-binding PspC1 protein preferentially interacts with its antibody underneath the capsular layer (Fig. 3d ). As a control for protein localization relative to the capsule layer we made use of antibodies to RrgB, the major protein of adhesive pilus-1 polymers, and demonstrate that this LPxTG protein can be recognized both outside, within and beneath the capsular layer (Supplementary Fig 3 ). PspC1 localizes at division sites and PspC2 mainly at the poles We studied the interaction between FH and the two PspCs (PspC1 and PspC2) found in the 6B strain BHN418, belonging to the successfully spread clonal lineage CC138 22 (Supplementary Fig 4 and Supplementary Data 1 ). We found that purified recombinant PspC1 and PspC2 both bound human FH with similar binding strengths, as determined by surface plasmon resonance/BIACORE analysis (Supplementary Table 2 , Supplementary Fig 5 ). We also used peptide mapping to localize the interaction regions between FH and the two PspC proteins (Supplementary Fig 6a ). We further performed sequence alignments of the FH binding domains of different PspC proteins (Supplementary Fig 6b ) and showed that PspC1 and PspC2 belong to two different families of PspCs, with PspC2 belonging to the same family A as PspC in TIGR4, previously crystalized in the presence of FH 27 , and PspC1 to family B including strain D39, which has been used in several FH binding studies 28 . 
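The surface plasmon resonance (BIACORE) comparison above reports binding strengths, which for a 1:1 interaction are conventionally summarized by a dissociation constant K_D, either from kinetics (K_D = k_off/k_on) or from equilibrium responses fit to a Langmuir isotherm. A minimal sketch of that standard model follows; the rate and concentration values are illustrative placeholders, not the paper's measurements (those are in Supplementary Table 2):

```python
def langmuir_response(conc, kd, rmax=100.0):
    """Equilibrium SPR response (RU) of a 1:1 interaction: R_eq = Rmax * C / (K_D + C)."""
    return rmax * conc / (kd + conc)

def kd_from_kinetics(k_off, k_on):
    """Kinetic dissociation constant K_D = k_off / k_on."""
    return k_off / k_on

# At an analyte concentration equal to K_D, half-maximal binding is expected:
half = langmuir_response(conc=10.0, kd=10.0, rmax=100.0)  # 50.0 RU

# Illustrative rates only: k_on = 1e5 /M/s, k_off = 1e-3 /s gives K_D = 10 nM
kd = kd_from_kinetics(1e-3, 1e5)
print(half, kd)
```

Under this model, "similar binding strengths" for PspC1 and PspC2 means their fitted K_D values (and hence their half-maximal concentrations) are comparable.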
We next determined the localization of PspC1 and PspC2 with super-resolution STED microscopy in exponentially growing BHN418 bacteria, using polyclonal antibodies raised against PspC1 (PspC from D39) and PspC2 (from BHN418), that were specific to their respective protein (Supplementary Fig 7 ). In newborn single cocci, PspC1 localized in the form of a dense band (ring) about 200 nm in width (180 nm ± 50 nm, n = 97), which tended to split into two that moved apart during elongation of the bacterium. A third thin PspC1 band could also be observed at the septum of dividing diplococci (Fig. 3e, f ). The distance between a PspC1 band and its older pole was the same, about 0.5 μm (0.50 ± 0.02), irrespective of cell length, suggesting that all cell wall elongation occurs between the PspC1 rings, and that PspC1 localizes at the division site, marking where the next division septum will be formed (Fig. 3g ). PspC2 was not found at the division sites, but localized preferentially at the bacterial poles (Fig. 3e ). Newborn cocci were fully covered by the two proteins as PspC1 decorated the central area not covered by PspC2. In diplococci and chains, however, constriction sites appeared that were not decorated by either of the two proteins. Even though the two PspC proteins were differentially localized, double staining with PspC2 and serotype 6B antibodies suggested that also the major portion of PspC2 molecules localized underneath the capsular layer (Supplementary Fig 8 ). PspC1 is the major contributor to FH recruitment We used strain BHN418 to create deletion mutants generating BHN418Δ pspC1 , BHN418Δ pspC2 , and the double mutant BHN418 ΔpspC1ΔpspC2 . We first analyzed FH binding using flow cytometry (Fig. 4a ). The BHN418 double mutant BHN418Δ pspC1 Δ pspC2 showed no FH binding, demonstrating that these two PspC proteins are the sole contributors to FH binding in strain BHN418. 
Furthermore, absence of PspC1 had a considerably larger negative effect on FH recruitment than the absence of PspC2 (Fig. 4a ). Co-staining of FH and PspC1 on BHN418 showed that FH deposition preferentially occurs in a banding pattern similar to that of PspC1 (Fig. 4b ), and not where PspC2 is localized (Fig. 4c ). However, co-staining of FH and PspC2 in BHN418Δ pspC1 showed that the FH signal in this mutant fully co-localized with the PspC2 signal, suggesting that, in the absence of PspC1, PspC2 becomes fully available for FH recruitment in vivo (Fig. 4d ). PspC1 is a choline-binding protein and can therefore be removed by choline chloride treatment, unlike PspC2, which is covalently linked to the cell wall by its LPxTG motif. Choline chloride treatment of BHN418 and its pspC mutants as well as of another encapsulated CC138 isolate (BHN191) made FH in serum available for recruitment by polar-located PspC2 (Fig. 4e , Supplementary Fig 9 ). Absence of PspC2 in BHN418Δ pspC2 had no effect on the pattern of FH deposition, which, like in the wild type, remained fully co-localized with PspC1 (Fig. 4f ). All strains used in the study showed a similar FH binding pattern when 20% normal human serum (NHS) was used as a serum source instead of pure FH (Supplementary Fig 10 ). Images were also analyzed by calculating the co-localization constant between PspC1 and PspC2, and FH, showing low co-localization constants between FH and PspC2 in wild type bacteria, but a higher constant in bacteria lacking PspC1. Also, the co-localization constant was higher for PspC1 and FH in wt BHN418 bacteria (Supplementary Fig 11 ). Fig. 4 Even though PspC1 and PspC2 show similar binding strength to FH in vitro, PspC1 is the major contributor to FH recruitment in vivo. a Representative histogram of FH binding to BHN418 and its isogenic pspC deletion mutants using flow cytometry. Bacteria were incubated with purified human FH and stained as in Fig. 3a . 
Bacteria incubated without FH were used as a control for each strain. The histogram shown is representative of three independent experiments. b BHN418 was incubated with purified human FH, and FH was detected using a polyclonal goat anti-FH antibody and a FITC-labeled rabbit anti-goat IgG secondary antibody (green). Bacteria were then stained for PspC1 using anti-PspC1 antiserum and anti-mouse Alexa flour 594 secondary antibody (red). c Strain BHN418 was incubated with purified human FH, and FH was detected using a polyclonal goat anti-FH antibody and a FITC-labeled rabbit anti-goat IgG secondary antibody (green). Bacteria were then stained for PspC2 using serum purified rabbit anti-PspC2 polyclonal antibody labeled with Alexa fluor 594 (red). d The deletion mutant of PspC1, BHN418 ΔpspC1 , was stained for bound FH and PspC2 as in c . e Choline chloride treated BHN418 bacteria were incubated with purified human FH. FH and PspC2 were stained as in c . f The mutant strain BHN418 ΔpspC2 was stained for bound FH and PspC1 as in b . b – f Scale bar = 1 µm Full size image PspC1 protects division sites from C3b deposition and MAC complex formation We next used STED microscopy and double staining of the capsule and C3b to visualize the location of the two FH-recruiting PspC proteins (PspC1 and PspC2) in relation to C3b deposition and amplification. Wild type BHN418 and its pspC mutant derivatives were incubated in human serum containing FH and C3 (Fig. 5a ). In the absence of PspC1, considerably more cells (64%) contained deposited C3b (97 positive out of 150 counted) compared to wt BHN418 (16%) (19 positive out of 124 counted). Importantly, a much larger area underneath the capsular layer contained deposited C3b when compared to the wt (Supplementary Fig 12 ). However, lateral C3b amplification within the cell wall did not involve the bacterial poles. 
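Throughout these experiments, images are scored with a "co-localization constant" between two stained channels (e.g., FH versus PspC1/PspC2, or capsule versus C3b). Such constants are typically Pearson-style per-pixel intensity correlations; a minimal sketch of that calculation on flattened pixel arrays, assuming background-subtracted channels, follows (the function and example arrays are ours, not the paper's exact implementation):

```python
import math

def pearson_colocalization(ch1, ch2):
    """Pearson correlation of per-pixel intensities from two channels.
    +1 = perfect co-localization, 0 = no relationship, -1 = mutual exclusion."""
    n = len(ch1)
    m1 = sum(ch1) / n
    m2 = sum(ch2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(ch1, ch2))
    var1 = sum((a - m1) ** 2 for a in ch1)
    var2 = sum((b - m2) ** 2 for b in ch2)
    return cov / math.sqrt(var1 * var2)

# Identical intensity patterns co-localize perfectly:
print(pearson_colocalization([0, 1, 2, 3], [0, 2, 4, 6]))   # -> 1.0
# Mutually exclusive signals anti-correlate:
print(pearson_colocalization([3, 2, 1, 0], [0, 1, 2, 3]))   # -> -1.0
```

A low constant between the capsule channel and the C3b channel, as reported here, indicates that the two signals occupy largely non-overlapping pixels, consistent with C3b lying underneath the capsular layer.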
In the absence of both PspC1 and PspC2, the bacterial poles also showed C3b deposition, and in some cells the entire area underneath the capsule was covered with C3b. In the presence of PspC1, absence of PspC2 had only a marginal effect on lateral C3b amplification. Together, these data strongly suggest that the initial deposition of C3b in encapsulated pneumococci occurs at bacterial division septa from where C3b amplification can spread laterally within the cell wall, underneath the capsular layer, unless PspC1 prevents this activation step by recruiting FH that likely also enters via the division septa. These results were confirmed by flow cytometry where loss of PspC1 showed enhanced C3b deposition compared to wt BHN418, whereas loss of PspC2 had a considerably smaller impact on C3b deposition (Fig. 5b ). Images were also analyzed by calculating the co-localization constant between the capsular signal and the complement signal, showing a low co-localization constant between the capsule and C3b for all the strains tested (Supplementary Fig 13 ). Fig. 5 PspC1 protects division sites from C3b deposition and MAC complex formation, and prevents lateral C3b amplification within the cell wall. a Representative super-resolution STED microscopy images of C3b deposition on wt BHN418 and its isogenic pspC deletion mutants after incubation with 20% normal human serum. C3b and the capsule were detected as in Fig. 2a . Scale bar = 1 µm. b Representative histogram of C3b deposition on strain BHN418 and on its isogenic pspC deletion mutants after incubation with 20% normal human serum using flow cytometry. Bacteria were sequentially stained with goat anti-C3 antibody followed by FITC-labeled secondary antibody. c Representative immunofluorescence images of C5b-9 deposition on strain BHN418 and on its isogenic pspC deletion mutants after incubation with 20% normal human serum. Bacteria were stained as in Fig. 2b . Scale bar = 1 µm. 
d Representative histogram of C5b-9 deposition on strain BHN418 and on its isogenic pspC deletion mutants after incubation with 20% normal human serum using flow cytometry. Bacteria were sequentially stained with mouse anti-C5b-9 antibody followed by anti-mouse Alexa fluor 488 labeled secondary antibody Full size image We also compared the formation of MACs in wt BHN418 and in the pspC mutants. Immunofluorescence microscopy of C5b-9 (MAC) staining of the bacteria revealed a band pattern on wt and mutant bacteria (Fig. 5c ). However, in the absence of PspC1 considerably more cells stained positive for C5b-9, 40% (85 positive out of 213 counted), compared to about 5% for the wt (8 positive out of 173 counted) and the pspC2 mutant (9 positive out of 174 counted). Absence of both pspC1 and pspC2 had no additional effect on MAC complex formation. This observation was also confirmed by flow cytometry (Fig. 5d ). Furthermore, we detected higher amounts of C3d deposited on wt BHN418 and on the pspC2 mutant than on the pspC1 mutant using western blot analysis, suggesting that degradation of complement components is more effective in the presence of PspC1 (Supplementary Fig 14 , full blots are shown in Supplementary Fig 24 ). Together these data demonstrate that PspC1 at division sites effectively protects encapsulated S. pneumoniae against C3b deposition, and lateral C3b amplification in the cell wall, as well as prevents MAC complex formation in the cytoplasmic membrane. PspC1 at division sites protects encapsulated pneumococci against opsonophagocytosis The main effect of the complement system in encapsulated pneumococci is to enhance opsonophagocytosis mediated by deposited C3b interacting with the C3b-receptor on phagocytic cells 29 . We therefore followed bacterial uptake by the human-derived macrophage cell line THP-1 in the presence of human serum. 
Phagocytosis was considerably higher for BHN418 and its mutant derivatives when bacteria were pre-incubated with human serum as compared to untreated bacteria (Fig. 6a ). Strain BHN418Δ pspC1 was phagocytosed better than wt BHN418, which might be explained by the higher number of division septa with C3b deposits, and the higher amounts of deposited C3b in the absence of PspC1. As expected, absence of PspC2 at the poles (BHN418Δ pspC2 ) had no effect on opsonophagocytosis compared to the wt. This difference in phagocytosis between wt BHN418 and the pspC1 mutant was not observed when bacteria were incubated with either pure FH or pure C3b (Supplementary Fig 15 ). Fig. 6 PspC1 at division sites protects encapsulated pneumococci from opsonophagocytosis while PspC2 at the pole mediates cell adhesion. a Uptake of strain BHN418 or its isogenic pspC deletion mutants, with or without human serum, by THP-1 derived macrophages. Graph shows Mean ± SEM of three independent experiments *, p < 0.05. b Representative immunofluorescence images showing bacteria attached to A549 lung epithelial cells via the poles. Phalloidin stained A549 cells (red) were incubated with FITC stained BHN418 bacteria (green). Scale bar = 5 µm. c Quantitative analysis of bacterial adhesion to A549 lung epithelial cells. d Adherence of wild type BHN418, or its isogenic pspC deletion mutants, to human lung epithelial A549 cells. Graph shows mean ± SEM of four independent experiments. *, p < 0.05, **, p < 0.01, ***, p < 0.001. e Schematic presentation of the LPxTG anchoring domain mutant of PspC1. f Representative immunofluorescence images of localization of PspC1 and PspC2 on the BHN418 pspC1LpxTG mutant. Antibody labeled with Alexa fluor 594 (red) was used in combination with mouse anti-PspC1 serum and FITC-labeled secondary antibody (green). Scale bar = 1 µm. g Adherence of the BHN418 pspC1LPxTG mutant to human lung epithelial A549 cells. Graph shows mean ± SEM of three independent experiments. 
***, p < 0.001 Full size image PspC2 at the poles mediates adhesion to epithelial cells Surprisingly, the double mutant BHN418Δ pspC1 Δ pspC2 was less phagocytosed as compared to the single mutant BHN418Δ pspC1 (Fig. 6a ). This finding suggested that PspC2 might mediate additional non-C3b dependent interactions with human cells. Pneumococcal CC138 strains of serotype 6B expressing PspC1 and PspC2 have demonstrated high efficiency to colonize the respiratory tract in children 22 suggesting high colonization ability. Moreover, pneumococcal PspC, besides playing a central role in FH recruitment, has also been shown to mediate adhesion to epithelial cells 30 . We therefore examined adhesion of strain BHN418 and its mutant pspC derivatives to human lung derived A549 cells using fluorescence microscopy. BHN418 was found to attach to A549 cells via the poles (Fig. 6b, c ), whereas a random attachment pattern was found for the pspC2 deletion mutant (Fig. 6c ). Absence of PspC2, but not PspC1, resulted in a significant reduction in adhesion (Fig. 6d ). Thus, the primary role of PspC2 might be to act as a polar adhesin. This adhesive function of PspC2 could also explain the lower uptake of the double mutant BHN418Δ pspC1 Δ pspC2 compared to the single mutant BHN418Δ pspC1 . We hypothesized that the localization site of PspC2 relative to the capsular layer within the aging cell wall may allow PspC2 to act both as an adhesin, and to prevent lateral C3b amplification within the wall. As an approach to affect the position of PspC2 within the cell wall we replaced the choline-binding motif of PspC1 with the LPxTG anchoring motif of PspC2 (Fig. 6e ). In strain BHN418 pspC1LPxTG , expressing both PspC1 and PspC2 as LPxTG proteins, we noted a similar presentation of the cell wall anchored PspC1 at division sites as the choline-binding version of the protein, but a more uniform staining pattern of PspC2 with no preference for the poles, and without clear zones at division sites (Fig. 
6f ). Whereas C3b binding, FH binding, and opsonophagocytosis were not different from the wt strain BHN418, adhesion to A549 cells was impaired for the mutant, suggesting that the surface presentation of PspC2 molecules relative to the capsule is reduced when a competing protein with the same cell wall sorting sequence is produced by the cell (Fig. 6g ). Protection against opsonophagocytosis is impaired in a signal peptide switched mutant We next asked if we could alter localization and/or surface density of PspC1. Existing literature suggests that localization of surface proteins in Gram positive bacteria can be changed by altering its signal peptide 31 . Although signal peptides of surface proteins in Gram positive bacteria are very conserved in having a YSIRK signal, we found small differences in the sequences between the signal peptides of PspC1 and PspC2 (Supplementary Fig 16 ). A signal peptide (SP) mutant (BHN418 sp2pspC1 ) was therefore generated by replacing the signal peptide of PspC1 with that of PspC2 (Fig. 7a ). STED imaging of BHN418 sp2pspC1 showed that PspC1 in most cells was distributed over a larger area of the bacterium compared to in BHN418 (Fig. 7b ). We analyzed the distribution pattern of PspC1 by using a signal distribution analyzer and writing a script in MATLAB. The result showed a wider peak in mutant bacteria stained for PspC1 in comparison to wt bacteria where the signal was in the form of sharp peaks over the length of the bacteria (Fig. 7b lower panel). We further compiled fluorescence data from a number of bacterial cells of different lengths. Results for higher order analysis of these STED images show that the fluorescence peaks for the wt cells were narrower/sharper and less spread out than for the mutant (Fig. 7c , Supplementary Fig 17 ). As seen in Fig. 7C for both wt and the mutant, the smallest cells possessed two PspC1 band rings close to one another. This distance increased with bacterial length. 
In the longest cells this distance increased to about 1 μm for both wt and mutant bacteria (Fig. 7c ). However, due to its broader and more diffuse PspC1 bands the area that lacked fluorescence, between the two bands, was ~30% smaller in the mutant as compared to the wt (Supplementary Fig 18 ). Thus, PspC1 in the SP mutant retained its localization at division sites but occupied a larger area in the cell wall around these sites. Fig. 7 A signal peptide switch mutant of PspC1 affects its surface distribution and immunoprotective function. a Schematic presentation of the signal peptide exchange mutant of PspC1 (left panel) and amino acid sequences of the signal peptide of PspC1 and PspC2 (right panel). Arrows represent the signal peptide cleavage site. b Representative STED images of the PspC1 distribution in the signal peptide mutant, BHN418 sp2pspC1 , and in wt BHN 418 bacteria. PspC1 is stained using anti-PspC1 serum and Atto674-N secondary antibody. c The fluorescence intensity distribution pattern along the length of bacteria was analyzed using a MATLAB based script. The x -axis represents the individual bacteria with length increasing from left. The y -axis shows the actual length of bacteria (in µm) and the color map represents the fluorescence intensity along the bacteria (with color scale from black via red to yellow for the highest intensities). b , c Scale bar = 1 µm. d Western blot analysis showing the expression level of PspC1 and PspC2 in wt BHN418 and in the signal peptide switch mutant, BHN418 sp2pspC1 . GAPDH was used as a loading control. Full blots are shown in Supplementary Fig. 23 . e Representative histogram of FH binding to wt BHN418 and to the signal peptide switch mutant, BHN418 sp2pspC1 . FH binding was detected using a polyclonal goat anti-FH antibody and a FITC-labeled rabbit anti-goat IgG secondary antibody. Bacteria incubated without FH were used as control. 
f Representative histogram of C3 and C5b-9 deposition on BHN418 and on the signal peptide switch mutant, BHN418 sp2pspC1 , after incubation with 20% normal human serum. Bacteria were sequentially stained with goat anti-C3 or mouse anti-C5b-9 antibody followed by FITC-labeled secondary antibody. g Uptake of wt BHN418 or its isogenic signal peptide switch mutant, BHN418 sp2pspC1 , by THP-1 derived macrophages in the presence of human serum. Graph shows mean ± SEM of three independent experiments. * = p < 0.05. e , f Representative histograms of three independent experiments Full size image We then asked whether the altered cell wall distribution of PspC1 in the SP mutant has consequences for bacterial FH binding and the ability to resist complement-mediated opsonophagocytosis. Even though expression of PspC1 in wt BHN418 and the SP mutant was similar as determined by Western blot analysis (Fig. 7d ), the SP mutant exhibited lower FH binding than BHN418 as determined by flow cytometry analysis (Fig. 7e ). By performing double staining of BHN418 sp2pspC1 bacteria for FH and PspC1, FH was shown to bind where PspC1 was expressed, as in wt BHN418 (Supplementary Fig 19 ). However, FH binding showed narrower bands than PspC1, suggesting that FH is recruited by PspC1 molecules at the division sites, where the density of PspC1 is the highest and likely most accessible for incoming FH. The complement deposition on SP mutant bacteria showed a bimodal pattern, with a fraction of bacteria displaying considerably higher C3b deposition. However, the bulk of bacteria showed similar C3b deposition as wild type BHN418 (Fig. 7f ). The SP mutant retained the same resistance to MAC complex formation as BHN418. Importantly, opsonophagocytosis was increased in the SP mutant compared to wt BHN418 (Fig. 7g ). This was not due to an altered presentation and role of PspC2 as a polar adhesin. 
We therefore suggest that PspC1 needs to recruit FH in sufficient amounts at surface-accessible division septa to effectively prevent the build-up of bulky C3b deposits that otherwise may contact phagocytic C3b receptors on phagocytic cells, enabling phagocytosis. Discussion In this study we show that the coordinated spatial assembly of the pneumococcal cell wall with its associated immunoprotective macromolecules, the capsule and FH-recruiting proteins, allows pneumococci to evade innate immune attack by the human host (Fig. 8 ). We provide evidence that the pneumococcal capsule, assembled via the wzy pathway, appears to be lacking at division septa. Thus, STED microscopy revealed no capsular staining with specific antibodies at the division septa, and SEM images showed a lack of regular surface structures at division septa, particularly where cell separation had been initiated. These surface structures were present elsewhere on the cell surface in encapsulated bacteria but were completely absent in non-encapsulated isogenic mutants. As the bacterial poles were always covered by capsule, we suggest that the capsule becomes covalently associated with, or exposed at, the outer layers of the peptidoglycan following cell separation. Fig. 8 Proposed roles for PspC1 and PspC2 in complement deposition and amplification. In wild type (WT) bacteria, PspC1 is presented at the division sites where it recruits human FH. In contrast, PspC2 is presented at the bacterial poles mediating bacterial adhesion to lung epithelial cells. Single cocci (top panel) are more resistant to complement deposition in comparison to diplococci (lower panel) or chains, with the latter showing areas of little or no capsule close to the division septum, a site serving as an entry port for complement deposition. 
In the absence of PspC1, FH can bind to PspC2 at the poles, and deposited C3b can amplify laterally below the capsular layer except at the poles, which are protected by PspC2, leading to increased opsonophagocytosis. When both PspC1 and PspC2 are lacking, bacteria are fully covered with C3b, including the poles Full size image As a result of little or no capsule at the surface of the newly split septal wall, these sites represent the entry port for serum factors, such as C3, across the capsule barrier. Using TEM, we readily observed deposits of C3b at some division septa but not elsewhere, in contrast to non-encapsulated mutants where C3b becomes deposited all around the cells. Long chains of pneumococci possess a higher number of division sites, as shown in the LytB mutant of BHN418 (Supplementary Fig 20 ), thus providing more sites for C3b deposition and a possible explanation for the higher complement deposition in bacterial chains compared to diplococci 32 . To evade amplification of C3b, all encapsulated pathogens possess FH-binding proteins that inhibit amplification of recruited C3b. We show here that in the pneumococcus, the major FH-recruiting protein PspC is strategically positioned at each division site. The division site in the pneumococcus is a region in the cell that marks the bacterial cell equator. It was recently discovered that the Mid-cell-anchored protein Z (MapZ) forms ring structures in the membrane at the division site that move apart as new lateral wall becomes inserted between these sites, and that MapZ positions the FtsZ ring 8 . As a result the distance between the division site and the bacterial pole remains constant. It is not known whether the two division sites each have a locally assembled elongasome complex, producing new lateral wall from each site but in opposite directions. 
The localization of PspC at all division sites means that it becomes positioned precisely at the equator where the next cell division will be initiated, allowing spatial closeness between PspC-mediated recruitment of FH and C3b deposited at division septa. PspC is a choline-binding protein that binds choline residues on teichoic acids that are either covalently bound to N-acetyl-muramic acid residues on the peptidoglycan (WTA) or anchored to the membrane (LTA) 13 . We do not believe that localization of PspC to division sites is exclusively due to binding to LTA, which, like MapZ, is membrane bound, since when we replaced the choline-binding motif of PspC1 with the LPxTG motif of PspC2 we still observed PspC1 at division sites. Changing the signal peptide of PspC1 to that of PspC2, which has a polar location, did not prevent PspC1 localization around the division sites, but broadened the band, which also had negative consequences for FH recruitment. Super-resolution microscopy using STED reveals for the first time that PspC at division sites is preferentially localized within the wall underneath the capsular layer. This localization of PspC also allows FH to become recruited to the sites in the cell wall where C3b becomes deposited. STED microscopy reveals that in the absence of PspC and FH recruitment to division sites, C3b amplification at division septa can proceed laterally within the wall and underneath the capsule. Appropriate localization of recruited FH may depend on the precise position, within the roughly 36 nm thick cell wall 33 , of the FH-binding domain of PspC. This position will depend on whether the PspC molecules are anchored to LTA or to WTA and, in the latter case, whether WTA is anchored to a young or old cell wall. Localization of PspC molecules to the outer layers of the peptidoglycan may result in surface exposure, explaining why PspC may also act as a cell adhesin 24 . 
In addition to PspC, a small percentage of pneumococci also express PspC2, an LPxTG protein that we here show is preferentially localized to the bacterial poles and appears to be lacking at division sites. Recombinant PspC1 and PspC2 both bind FH with similar affinities, but in vivo, PspC2 only marginally contributes to FH recruitment. The majority of PspC2 molecules appear to be localized underneath the capsule, and only recruit FH to the poles when PspC1 at division sites is absent, suggesting that not only C3, but also FH, enters encapsulated pneumococci at the division septum, thereby being first recruited by PspC1. Only in the absence of PspC1 can FH diffuse within the wall to PspC2 at the poles. The capsule decreases FH binding in wt bacteria; however, also in capsule mutants, PspC1 contributes more to FH binding than PspC2 (Supplementary Fig 21 ). We also show that PspC2, which lacks a pIgR motif, acts as an adhesin both to A549 epithelial cells (lacking pIgR) and to Detroit cells (having pIgR), allowing polar attachment to epithelial cells (Supplementary Fig 22 ). We speculate that at the poles a sufficient number of PspC2 molecules are anchored to the outer layers of the peptidoglycan, allowing exposure outside the capsular layer and polar adhesion. This hypothesis is supported by our finding that PspC2-mediated adhesion is completely abolished when bacterial cells additionally express an LPxTG version of PspC at division sites. Competition for sortase A, a membrane-bound enzyme localized near the division septum 34 that links LPxTG proteins to the peptidoglycan 35 , may explain this unexpected phenotype. C3b deposition has two major effects on encapsulated bacteria, promoting opsonophagocytosis by C3b-mediated binding of macrophages, and formation of MAC complexes lysing the cells 25 . We show here that the absence of FH recruitment to division sites increases opsonophagocytosis. 
We believe that surface exposure of C3b, allowing contact with the CR3 receptor, may occur at the division septum. Even laterally amplified C3b in the cell wall underneath the capsule may become exposed at subsequently formed division septa. Also, in the absence of FH recruitment more division septa become decorated by C3b, potentially increasing the number of bacteria accessible to CR3-mediated binding by macrophages. In support of this we note that presentation of PspC over a larger area around the division sites, as in the signal peptide switch mutant, results in markedly increased opsonophagocytosis. In this mutant a fraction of cells shows increased C3b deposition compared to the bulk of bacteria, suggesting that it is this fraction, exposing more C3b at division septa, that becomes opsonophagocytosed. MAC complex formation in encapsulated pneumococci is a rare event that occurs at or near the division septum. Our super-resolution imaging suggests that MAC complexes are not formed in the wall, as has been proposed for Streptococcus pyogenes 26 , but in the membrane, as the distance between the C5b-9 signal and the capsular signal seemed to be larger than that between C3b and the capsule. Absence of PspC at division sites increased the number of MAC complexes close to division septa, but no spread of multiple MAC complexes on the same cell was observed. The reason why serum killing is usually not observed in encapsulated pneumococci might primarily be the low frequency at which MAC complexes are formed. Our study provides the first evidence of functional and spatial coordination between two major virulence factors, the pneumococcal capsule and FH-binding surface proteins. 
The spatial relationship shown here between the capsule and pneumococcal surface proteins for bacterial complement evasion and adhesive functions may also have implications for the development of new pneumococcal vaccines based on surface proteins, as surface accessibility might be a crucial factor determining their ability to promote antibody-mediated opsonophagocytosis. Methods Bacterial strains and culture conditions Bacterial strains (Supplementary Table 1 ) were grown overnight on blood agar plates at 37 °C and 5% CO 2 and transferred to liquid casitone and yeast extract (C+Y) or 5% (wt/vol) dextrose and serum (DS)-containing medium. Unless otherwise stated, strains were grown to mid-log phase (OD620, 0.3–0.4). Mutants of Streptococcus pneumoniae All deletion mutants were constructed using polymerase chain reaction (PCR) ligation mutagenesis. PCR amplicons of fragments upstream and downstream of the target gene and the erythromycin (ermB), kanamycin (kan), or tetracycline (tet) cassette were generated with specific primer pairs containing overlapping regions. The ermB, kan, or tet cassette was first fused to the upstream fragment, and then the downstream fragment was added by PCR. If amplification led to multiple PCR products, the PCR fragment with the correct size was gel-purified after agarose gel electrophoresis. Finally, the fusion PCR product was used to transform S. pneumoniae . Mutants were selected on blood agar plates containing erythromycin (1 μg/mL), kanamycin (400 μg/mL) or both. Tetracycline resistance was used to create the capsule deletion mutant. The correct insertion was confirmed by PCR and sequencing. The primers used for each mutant are stated in Supplementary Table 3 . Expression and purification of 6×His-PspC PspC (D39), PspC1 (BHN418), and PspC2 (BHN418) were cloned into the NdeI/XhoI or NdeI/BamHI restriction sites of the pET28a vector (Novagen) and transformed into E. 
coli Rosetta (DE3) cells (Novagen). Recombinant protein was expressed as a 6×His-tagged N-terminal fusion protein in logarithmically growing cultures induced with IPTG (Sigma). Bacteria were resuspended in buffer containing 50 mM Tris·HCl, 50 mM NaCl, and 5% glycerol and disrupted using a Stansted cell disrupter. 6×His-PspC was purified from the soluble fraction by affinity chromatography using Ni-NTA His Bind resin (Novagen) according to the manufacturer’s guidelines. Purified 6×His-PspC2 was incubated rotating with 100 U/mL thrombin (Sigma) for 2 h at room temperature. To remove uncut protein and the His-tag, PspC2 was passed over Ni-NTA resin and further purified by size exclusion chromatography using Superdex 75 gel filtration columns (GE Healthcare). Purity was assessed by SDS/PAGE electrophoresis. Antibody production directed to PspC2 Recombinant PspC2 protein lacking the LPxTG motif and the first 37 aa, which constitute a signal peptide region, was expressed and purified in E. coli (see above). Purified PspC2 was sent to Innovagen AB for rabbit immunization and production of polyclonal antibodies. Polyclonal anti-PspC2 IgG was purified by protein G affinity chromatography according to the manufacturer’s manual using HiTrap protein G-sepharose columns (GE Healthcare). Choline chloride treatment of the bacteria Bacteria were grown in C+Y medium to OD620 0.4, washed with PBS and incubated in 140 mM choline chloride solution for 30 min at 37 °C. After a PBS wash, bacteria were resuspended in PBS for further antibody incubation. Peptide mapping An array of 48 overlapping peptides was synthesized by GenScript custom peptide synthesis (USA). Each spot contained a 15-amino-acid-long peptide covalently spotted onto a cellulose membrane, with a three-amino-acid overlap with the next spot, covering the N-terminal domain of PspC1 and PspC2. 
The peptide array membrane was incubated with PspC1 (a generous gift from Novartis Vaccines, Siena, Italy) and PspC2 antibodies, respectively (1:1000), followed by incubation with an HRP-conjugated secondary antibody (anti-mouse for PspC1 and anti-rabbit for PspC2) for detection (GE Healthcare, NXA931, RPN4301, 1:10,000) (Supplementary Fig 6 ). Factor H binding to pneumococci Bacteria were grown at 37 °C in C+Y medium, washed and resuspended in PBS to a concentration of approximately 1 × 10 8 bacteria/ml. 100 µl of the bacterial suspension was incubated with 2 µg purified FH (Calbiochem, 341274) at 37 °C for 30 min and, after three washes, FH binding was detected using a polyclonal goat anti-FH antibody (Calbiochem, 341276, 1:100) and a fluorescein isothiocyanate (FITC)-labeled rabbit anti-goat IgG secondary antibody (Life Technologies, A16143, 1:200). Each incubation step was followed by three washes in PBS (centrifugation at 1500 × g for 5 min). Bacteria were fixed in 4% paraformaldehyde, washed, and visualized by fluorescence microscopy (Leica or DeltaVision) or analyzed quantitatively by flow cytometry using a Gallios™ flow cytometer (Beckman Coulter). Surface plasmon resonance/BIACORE analysis of PspC–FH interaction The two variants of PspC, PspC1 (BHN418) and PspC2 (BHN418), and PspC (D39) were used 22 to analyze the PspC–FH interaction by the surface plasmon resonance technique using Biacore X100 equipment (Supplementary Fig 5 ) 36 . Briefly, FH was coupled to the CM4 chip in the active and reference flow cells, and recombinant PspCs were injected into the flow cell. A 1:1 molar ratio was assumed for the interaction. Background measurements of the uncoupled flow cell were subtracted. Commercial PBS-P+ buffer (GE Healthcare) was used for flow. Capture buffer (0.5–1 µg/ml in 10 mM acetate buffer, pH 4.5) and regeneration buffer (3 M NaCl, 10 mM acetate buffer, pH 4.6) were obtained from GE Healthcare. 
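The 1:1 molar ratio assumed in the Biacore analysis corresponds to the standard 1:1 (Langmuir) interaction model, in which the response R follows dR/dt = ka·C·(Rmax − R) − kd·R during association and dR/dt = −kd·R during dissociation. The Python sketch below simulates an idealized sensorgram under this model; the rate constants, Rmax and analyte concentration are hypothetical placeholders for illustration, not fitted values from the study.

```python
# Illustrative simulation of the 1:1 (Langmuir) binding model assumed in
# the Biacore analysis. All numerical parameters are hypothetical.

def simulate_1to1(ka, kd, conc, rmax, t_assoc, t_dissoc, dt=0.01):
    """Simulate an SPR sensorgram (association then dissociation).

    dR/dt = ka*C*(Rmax - R) - kd*R during association (C = analyte conc.)
    dR/dt = -kd*R during dissociation (C = 0).
    Returns time points and response units (simple forward-Euler steps).
    """
    times, resp = [0.0], [0.0]
    t, r = 0.0, 0.0
    while t < t_assoc:
        r += dt * (ka * conc * (rmax - r) - kd * r)
        t += dt
        times.append(t)
        resp.append(r)
    while t < t_assoc + t_dissoc:
        r += dt * (-kd * r)
        t += dt
        times.append(t)
        resp.append(r)
    return times, resp

def equilibrium_response(conc, rmax, ka, kd):
    """Steady-state response Req = Rmax*C/(C + KD), with KD = kd/ka."""
    kd_eq = kd / ka
    return rmax * conc / (conc + kd_eq)
```

With, say, ka = 1e5 M⁻¹s⁻¹, kd = 1e-3 s⁻¹ and 100 nM analyte, a 600 s association phase approaches the equilibrium response Req closely, after which the signal decays exponentially in the dissociation phase.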
Deposition of C3b and C5b-9 (MAC) Bacteria were incubated with 20% NHS (Sigma) in PBS for 30 min at 37 °C and washed three times with PBS. Staining was performed sequentially with goat anti-C3 antibody (Calbiochem, 204869, 1:200), recognizing C3b, followed by Alexa Fluor 488-labeled donkey anti-goat antibody (Invitrogen, A11055, 1:200), each incubated for 30 min followed by washing. Bacteria were fixed in 4% paraformaldehyde and visualized by fluorescence microscopy (Leica or DeltaVision) using a 100× objective or analyzed quantitatively by flow cytometry using a Gallios™ flow cytometer (Beckman Coulter). For C5b-9 staining, exponentially growing bacterial cultures were incubated with 20% NHS (Sigma) followed by incubation with anti-C5b-9 antibody (αE11) (Santa Cruz Biotechnology, sc-58935, 1:50). An Alexa Fluor 488-labeled secondary antibody (Life Technologies, A11001, 1:100) was used for further detection. Membrane lipids were stained with Nile red (Sigma, N3013). After every incubation bacteria were washed twice with 1× HBSS. Samples were fixed and stained with Nile red for 5 min before spreading the samples on glass slides. Samples were visualized using a Leica microscope with a 100× objective. STED imaging STED imaging was performed with an instrument from Abberior Instruments (Göttingen, Germany), built on a stand from Olympus (IX83), with a four-mirror beam scanner (Quad scanner, Abberior Instruments), and modified for two-color STED imaging. Two fiber-coupled, pulsed (20 MHz) diode lasers emitting at 637 nm (LDH-D-C, PicoQuant AG, Berlin) and 594 nm (Abberior Instruments) are used for excitation (alternating mode, with the excitation pulses of the two lasers out of phase with each other to minimize cross-talk). 
The beam of a pulsed fiber laser (MPB, Canada, model PFL-P-30-775-B1R, 775 nm emission, 40 MHz repetition rate, 1.2 ns pulse width, 1.2 W maximum average power, 30 nJ pulse energy) is reshaped by a phase plate (VPP-1c, RPC Photonics) into a donut profile and then used for stimulated emission. The three laser beams are overlapped and then focused by an oil immersion objective (Olympus, UPLSAPO 100XO, NA 1.4) into the sample. The fluorescence is collected through the same objective, separated from the excitation path via a dichroic mirror, passed through a motorized confocal pinhole (MPH16, Thorlabs, set at 50 µm diameter) in the image plane, split by a dichroic mirror and then detected by two single-photon counting detectors (Excelitas Technologies, SPCM-AQRH-13), equipped with separate emission filters (FF01-615/20 and FF02-685/40-25, Semrock) and a common IR filter (FF01-775/SP-25, Semrock) to suppress any scattered light from the STED laser. In this study, a spatial resolution (FWHM) of about 25 nm could be reached. Image acquisition, including laser timing/triggering and detector gating, is controlled via an FPGA card and by the Imspector software (Abberior Instruments). Capsule staining was performed using rabbit anti-capsule serum for serotype 6B, serotype 2 and serotype 4 (Statens Serum Institute, Denmark) and anti-rabbit Atto647N secondary antibody (Sigma, 4039, 1:100). STED image analysis The projected fluorescence intensity distribution along the main symmetry axis of the bacteria was analyzed by use of higher-order moments µ m , defined as $$\mu_m = \frac{1}{N}\sum_{k=1}^{N}\left(\frac{I_k - \langle I_k \rangle}{\langle I_k \rangle}\right)^{m},$$ (1) where m = 2, 3, 4… is the order of the moment. I k is the summed fluorescence intensity over the width of the bacteria, at a pixel k along the length symmetry axis of the bacteria. 
\({\langle I_k \rangle}\) is the mean I k , averaged over the whole bacterium, i.e., over all N pixels along the full length axis of the bacterium. If I k is close to \({\langle I_k \rangle}\) , then \(((I_k - \langle I_k \rangle)/\langle I_k \rangle)^m\) will approach zero as the power m increases. This corresponds to the case when the projected intensity trace along the length axis of the bacterium ( I k for k = 1,…, N ) is more evenly distributed, with only minor deviations from \({\langle I_k \rangle}\) . On the other hand, if the projected intensity trace along the length axis includes high and sharp peaks that markedly deviate from \({\langle I_k \rangle}\) , then µ m will not decrease to the same extent with increasing m , if at all. Therefore, in bacteria with PspC preferentially localized in defined ring structures, in planes perpendicular to the length axis of the bacteria, rather sharp and narrow peaks in the projected intensity traces along these bacteria are expected. The higher-order moments, µ m , calculated from these bacteria will be markedly higher than those from bacteria where PspC is more evenly spread over the whole cell, and we therefore used µ m to quantify this difference. MATLAB analysis Analysis of STED images was carried out using custom-written code in MATLAB R2013b. The algorithm works as follows: in each image, bacteria to be analyzed were selected manually by clicking at both ends of the bacterium. MATLAB then calculated a line between these two points, represented by an array in which each element is the total fluorescence signal integrated orthogonal to the line and over the bacterium’s width. In this way the total fluorescence distribution along the length of the bacterium can be stored and analyzed as a fluorescence trace for each selected bacterium. For area estimation, we summed together the patches along the symmetry line where the fluorescence distribution was non-zero. 
The result was taken in relation to the total length of the bacteria as an estimate of the relative area of the bacteria covered by complement. The fluorescence intensity traces were also ordered by the length of the bacteria in order to create a map of the fluorescence intensity profile for all bacteria, see Fig. 7b and c. Co-localization analysis The co-localization calculations were carried out using custom-written code in MATLAB R2013b. The calculations, analysis and thresholding of images followed the outline described in detail by Xu et al 37 . The co-localization coefficients were based on image cross-correlation spectroscopy (ICCS) 38 as well as Pearson’s correlation 39 . By ICCS, co-localization coefficients are obtained for both the red and the green channel. These coefficients are more suited for co-localization analysis when there is an excess of one of the labeled species compared to the other 36 . In the recorded STED images in this study, there was almost always more red than green signal, and we therefore used the amplitude of the ICCS function when correlating the green signal with the red signal (ICCS GREEN ) as a measure of co-localization. The ICCS GREEN coefficient takes on values between 0 and 1, where 0 is no co-localization and 1 is perfect co-localization between the channels. Images were processed by applying a smoothing Gaussian filter in order to reduce noise before the co-localization analysis. For statistical significance we used a two-sided Student’s t-test to test the null hypothesis that the co-localization coefficients from two different samples have the same mean. This test yields a p -value, the probability of observing a difference at least as large as the one measured if the null hypothesis were true. If the p -value <0.05 the null hypothesis is rejected with * significance, with a p -value <0.01 the null hypothesis is rejected with ** significance, and if the p -value <0.001 the null hypothesis is rejected with *** significance. 
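The trace analyses above (the higher-order moments of Eq. (1), the covered-area estimate, and the Pearson correlation used for co-localization) were implemented as custom MATLAB scripts; an illustrative Python re-implementation, operating on synthetic intensity traces rather than real STED data, might look as follows:

```python
# Illustrative Python re-implementation of the trace analyses; the study
# used custom MATLAB code, and these traces are synthetic.

def moment(trace, m):
    """Higher-order moment mu_m of a projected intensity trace, Eq. (1)."""
    n = len(trace)
    mean = sum(trace) / n
    return sum(((i - mean) / mean) ** m for i in trace) / n

def covered_fraction(trace):
    """Fraction of the length axis with non-zero fluorescence,
    used as a relative-area estimate of complement coverage."""
    return sum(1 for i in trace if i > 0) / len(trace)

def pearson(green, red):
    """Pearson correlation between two channels (co-localization)."""
    n = len(green)
    mg, mr = sum(green) / n, sum(red) / n
    cov = sum((g - mg) * (r - mr) for g, r in zip(green, red))
    var_g = sum((g - mg) ** 2 for g in green)
    var_r = sum((r - mr) ** 2 for r in red)
    return cov / (var_g * var_r) ** 0.5
```

As described in the text, a trace with two sharp ring-like peaks yields a much larger µ₄ than an evenly distributed trace of the same mean intensity, for which µ₄ is essentially zero.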
Adhesion assays to human A549 or Detroit 562 cells Human A549 cells (lung adenocarcinoma epithelial cell line) (ATCC CCL-185) or human Detroit 562 cells (ATCC CCL-138) were cultured in RPMI 1640 (Invitrogen) supplemented with 10% FBS and 2 mM L-glutamine at 37 °C and 5% CO 2 . Cells were seeded in 24-well plates (5 × 10 5 cells per well) and grown to confluent monolayers. Cells were washed thoroughly and infected with pneumococci for 1 h in 500 µl RPMI medium without FBS at 37 °C using a multiplicity of infection of 20. After infection, cells were washed extensively to remove unbound bacteria and lysed for 12 min at 37 °C in 1% saponin (Sigma). Serial dilutions were plated on blood agar plates and incubated overnight at 37 °C and 5% CO 2 to determine the number of adhered bacteria. To assess adhesion by microscopy, A549 cells were seeded at a density of 5 × 10 5 per well in a chamber slide and grown to approximately 60% confluency. Cells were stained with phalloidin (red) while mid-log phase bacteria were stained with FITC (green), and cells were infected with bacteria for 1 h at 37 °C. Samples were fixed with 2% PFA before visualization by microscopy (DeltaVision). Transmission electron microscopy (TEM) S. pneumoniae BHN418 or BHN418 Δcapsule were grown at 37 °C in C+Y medium until OD620 = 0.15. 20 min post induction, cells were centrifuged for 15 min at 5000 × g , 4 °C. The pellet was resuspended in 20% NHS (Sigma) and incubated at 37 °C with shaking for 45 min. Bacteria were washed thoroughly and resuspended in 80 µL of phosphate buffered saline (PBS). Drops of 10 µL were placed for 2 min on carbon-coated grids (Oxford Instruments, UK). For immunogold staining, grids were fixed with 10 µL of 0.2% glutaraldehyde for 2 min and the reaction was stopped with 10 µL of 1% glycine for 15 min. 
The grids were then incubated with anti-C3 antibody (Calbiochem, USA) diluted 1:100 for 1 h, washed three times with PBS, and incubated with donkey anti-goat antibody (18 nm gold particles; Abcam, UK) diluted 1:250 for 1 h. Finally, grids were washed three times with PBS. All grids were negatively stained with 2% uranyl acetate in water. Specimens were examined in a Tecnai 12 Spirit Bio TWIN transmission electron microscope (FEI Company, Eindhoven, Netherlands) operated at 100 kV. Digital images were recorded using a Veleta camera (Olympus Soft Imaging Solutions, GmbH, Münster, Germany). Scanning electron microscopy (SEM) Bacteria were grown at 37 °C and 5% CO 2 in C+Y medium until OD620 = 0.2. They were washed once in PBS and fixed by immersion in 2.5% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4). The specimens were then transferred to a pre-sputtered filter (Polyamide, NL 16, GE Healthcare UK Limited, Buckinghamshire, UK), rinsed in distilled water and placed in 70% ethanol for 10 min, 95% ethanol for 10 min and absolute ethanol for 15 min, all at refrigerator temperature, and then into acetone. Specimens were then dried using a critical point dryer (Balzer, CPD 010, Liechtenstein) with carbon dioxide. After drying, the filter was mounted on an aluminum stub and coated with platinum (Q150T ES, West Sussex, UK). The specimens were analyzed in an Ultra 55 field emission scanning electron microscope (Zeiss, Oberkochen, Germany) at 5 kV. Western blot BHN418 and BHN418 sp2pspC1 (Fig. 7d ) were grown to mid-log phase in C+Y medium and harvested by centrifugation at 3500 × g for 5 min. Bacteria were lysed in a buffer containing 1% Triton X-100. Ten micrograms of protein was used for SDS-PAGE (polyacrylamide gel electrophoresis) and samples were transferred onto a Bio-Rad nitrocellulose membrane. After blocking, the membrane was incubated with anti-PspC1 or anti-GAPDH antibody (1:1000). Signal was detected using a secondary antibody conjugated with HRP (1:10,000). 
For detection of C3d (Supplementary Fig 14 ), BHN418 and isogenic pspC mutants were grown to mid-log phase in C+Y medium and harvested by centrifugation at 3500 × g for 5 min. Bacteria were incubated with 20% NHS (Sigma) in PBS for 60 min at 37 °C and washed three times with PBS. Ten micrograms of protein was loaded on SDS-PAGE and samples were transferred onto a Bio-Rad nitrocellulose membrane. After blocking, the membrane was incubated with anti-C3d antibody (Abcam, ab17453, 1:2000). Signal was detected using a secondary antibody conjugated with HRP (1:10,000). For detection of the specificity of the anti-PspC1 and anti-PspC2 antibodies (Supplementary Fig 7 ), BHN418 was grown to mid-log phase in C+Y medium and harvested by centrifugation at 3500 × g for 5 min. Bacteria were lysed in a buffer containing 1% Triton X-100. Ten micrograms of protein was loaded on SDS-PAGE and samples were transferred onto a Bio-Rad nitrocellulose membrane. After blocking, the membrane was incubated with mouse anti-PspC1 or rabbit anti-PspC2 antibody (1:4000). Signal was detected using a secondary antibody conjugated with HRP (GE Healthcare) (1:10,000). Phagocytosis assay Human monocytic leukemia THP-1 cells (ATCC TIB-202, Manassas, VA) were cultivated in RPMI 1640 (Invitrogen) supplemented with 10% FBS and 2 mM L-glutamine at 37 °C and 5% CO 2 . THP-1 cells were seeded in 24-well plates (5 × 10 5 cells per well) and differentiated for 48 h with 20 ng/ml of phorbol myristate acetate (PMA) (Sigma). Bacteria were incubated with 20% NHS (Sigma) in PBS for 30 min at 37 °C for opsonization and washed three times with PBS. Cells were washed with PBS, infected at a MOI of 60 with opsonized or non-opsonized pneumococci, and resuspended in RPMI medium without FBS. After 1.5 h of incubation, cells were washed three times with PBS, and incubated for a further 1 h in the presence of 300 μg/ml gentamycin to kill extracellular bacteria. 
The cells were subsequently washed three times with PBS and lysed for 12 min at 37 °C with 1% saponin in PBS. Serial dilutions were plated on blood agar plates and incubated overnight at 37 °C and 5% CO 2 to determine the number of phagocytosed bacteria. For the phagocytosis assay in the presence of pure FH and pure C3b, bacteria were grown at 37 °C in C+Y medium, washed and resuspended in PBS to a concentration of approximately 1 × 10 8 bacteria/ml. 200 µl of the bacterial suspension was incubated with either 2 µg purified FH (Calbiochem) or 2 µg purified C3b (Calbiochem, 204860) at 37 °C for 30 min and, after three washes with PBS, bacteria were used in the uptake assay described above. Statistical analysis Results are expressed as mean ± SEM or SD. Statistical significance was assessed using one-way ANOVA and Bonferroni post-test unless otherwise specified. All analyses were performed using GraphPad Prism ® version 5.04 or Python. P -values <0.05 were considered significant. Code availability The custom codes used in this study are available at . Data availability The data that support the findings of this study are available in this article and its Supplementary Information files, or from the corresponding author upon reasonable request. Change history 09 September 2018 This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.
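The significance conventions used throughout the study (p < 0.05 for *, p < 0.01 for **, p < 0.001 for ***) and the two-sample Student's t statistic with pooled variance can be sketched as a minimal Python helper. This is illustrative only, not the Prism or Python analysis code actually used:

```python
# Minimal helpers illustrating the significance conventions and the
# pooled-variance Student's t statistic described in the Methods.

def stars(p):
    """Map a p-value to the study's significance notation."""
    if p < 0.001:
        return '***'
    if p < 0.01:
        return '**'
    if p < 0.05:
        return '*'
    return 'ns'  # not significant

def t_statistic(a, b):
    """Two-sample Student's t statistic with pooled variance
    (the p-value would then come from the t distribution with
    len(a) + len(b) - 2 degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

In practice a statistics library (e.g. SciPy's `ttest_ind`) would be used to obtain the p-value itself; the helper above only reproduces the test statistic and the star notation.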
Successful pathogenic strains of pneumococci have two proteins that, owing to their location on the surface of the bacteria, enhance their survival and ability to cause disease, according to a study from Karolinska Institutet in Sweden published in Nature Communications . Pneumococcal infections are one of the most common causes of disease and death in the world. One reason for the pathogenic potential of these bacteria is that they produce a sugar casing. This capsule prevents the important immune component C3b from attaching to and attacking the bacteria. Researchers at Karolinska Institutet and the Royal Institute of Technology in Sweden have now studied in detail how pneumococci interact with the part of the immune system called the complement system, which includes C3b. The complement system often works as the first line of defence against foreign substances and cells, triggering a number of immune reactions in the body. The researchers show that the capsule is weak at the bacteria's point of division, which therefore presents an opening for C3b. By using super-resolution microscopy (STED) they found that C3b accumulates under the capsule primarily at the division sites. This accumulation can continue and cover the entire bacteria unless the pneumococcus can find a strategy to prevent it happening. The study also shows that a common surface protein on pneumococci called PspC1 is located right at the division site, where it recruits another protein called Factor H, which negatively regulates the complement system by, amongst other mechanisms, inactivating C3b. Some especially successful and pathogenic pneumococcal strains also express a closely related protein, PspC2, which is mainly localised at the bacterial poles. This separate location on the surface of the bacteria affects the two surface proteins' functions. 
Unlike PspC1, which binds Factor H, PspC2 affects the bacteria's ability to adhere to epithelial cells, which can be found in the respiratory tract, in mucus membranes and elsewhere. "Our study shows that the precise localisation of bacterial surface proteins in relation to the capsule layer affects the role they will have in the disease development," says Birgitta Henriques-Normark, professor at the Department of Microbiology, Tumour and Cell Biology, Karolinska Institutet. "This is an important piece of the puzzle to understand how pneumococci avoid the immune system and cause everything from otitis and sinusitis to severe pneumonia and septicaemia."
10.1038/s41467-018-05494-w
Space
Winters on Mars are shaping the Red Planet's landscape
L. E. Mc Keown et al, Experiments On Sublimating Carbon Dioxide Ice And Implications For Contemporary Surface Processes On Mars, Scientific Reports (2017). DOI: 10.1038/s41598-017-14132-2 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-14132-2
https://phys.org/news/2017-10-winters-mars-red-planet-landscape.html
Abstract Carbon dioxide is Mars’ primary atmospheric constituent and is an active driver of Martian surface evolution. CO 2 ice sublimation mechanisms have been proposed for a host of features that form in the contemporary Martian climate. However, there has been very little experimental work or quantitative modelling to test the validity of these hypotheses. Here we present the results of the first laboratory experiments undertaken to investigate if the interaction between sublimating CO 2 ice blocks and a warm, porous, mobile regolith can generate features similar in morphology to those forming on Martian dunes today. We find that CO 2 sublimation can mobilise grains to form (i) pits and (ii) furrows. We have documented new detached pits at the termini of linear gullies on Martian dunes. Based on their geomorphic similarity to the features observed in our laboratory experiments, and on scaling arguments, we propose a new hypothesis that detached pits are formed by the impact of granular jets generated by sublimating CO 2 . We also study the erosion patterns formed underneath a sublimating block of CO 2 ice and demonstrate that these resemble furrow patterns on Mars, suggesting similar formation mechanisms. Introduction The Martian atmosphere, which is comprised of over 95% CO 2 1 , interacts seasonally with the planetary surface. As temperatures fall between late autumn and early winter, a CO 2 deposit covers the surface 2 in thicknesses ranging from around a metre in the polar regions 1 , 3 to a few millimetres towards the equator 4 . The distribution of this dry ice is governed primarily by solar insolation 5 . In the early spring, as insolation increases, the dry ice begins to sublimate. This process is now recognised as an important agent in the formation of contemporary surface features on Mars. 
These features include the dendritic araneiform terrain of the south polar cryptic region 6 , 7 , 8 , linear gullies, their associated pits 9 , 10 , and sand furrows 11 . In this study we focus on linear gully pits and sand furrows; both active features which are observed to form and evolve on dunes under the current Martian climate. These features have no Earth analogues and the specific mechanisms responsible for their formation have yet to be fully understood and quantified. Furrows Furrows are shallow (~0.25 m) and narrow (~1.5 m) negative relief 11 , 12 , features which are observed on over 95% of the northern polar dunes 11 and are also found between 40°S and 72°S 13 . Their network patterns range from rectilinear, to dendritic and radial, tributary and distributary and their planforms can be linear or sinuous 12 (Fig. 1 ), though it is unclear what factors control this variety of patterns. Appearing to “defy gravity”, furrows extend upslope and transect existing aeolian ripples, and so while their patterns may resemble those eroded by fluvial activity on Earth, gravity-driven processes are unlikely to form them 12 . Figure 1 Examples of sand furrows on Martian Dunes. ( a ) Linear furrow (blue arrow) on sand dune in Chasma Boreale (all Chasma Boreale furrows are taken from HiRISE image ESP_026851_2590, latitude 78.65°, longitude 308.494°, solar incidence angle 56° with the Sun ~34° above the horizon). ( b ) sinuous (black arrow) furrow on sand dune in Chasma Boreale, ( c ) dendritic furrows (white arrows) on sand dunes at latitude −67.607°, 185.343° in the southern hemisphere (HiRISE image ESP_023270_1120. Solar incidence angle is 60° with the sun ~30° above the horizon). Yellow arrows point to boulders, red arrows point to dark fans and highlight their proximity to the furrows. ( d ) radial (green arrow) furrow on sand dune in Chasma Boreale. ( e ) rectilinear (purple arrow) and radial (green arrow) furrows on sand dune in Chasma Boreale. 
Images have been stretched to improve contrast as furrows are narrow, shallow and thus subtle features. HiRISE image credit: NASA/JPL/University of Arizona. Full size image Furrow Formation Hypotheses Furrows frequently form after the appearance of polygonal cracks in CO 2 ice. This led Bourke 12 to hypothesise that they were caused by cryoventing; that is, basal sublimation of CO 2 and consequent erosion. This is similar to Kieffer’s model for the dark spots and fans in the southern hemisphere 14 , but applied in the context of dunes. The cryoventing model proposes that in the spring, gas pressures increase at the ice-substrate interface until the overlying ice cracks. Gas rapidly exits, eroding material below the ice sheet to form furrows. A plume then deposits the sediment to form fans on the surface of the seasonal ice. The thickness of the overlying ice and dune topography are thought to have a strong influence on this process 15 . Close spacing of vents has also been noted to reduce cryoventing efficacy, with fewer furrows forming in locations where ice cracks are closer together 12 . Recent work has drawn a distinction between furrows in the northern hemisphere and “dendritic troughs” in the southern hemisphere 16 . While both show similar morphologies, furrows in the northern hemisphere are ephemeral features which disappear every year 17 . The dendritic troughs on south polar dunes endure, with new tributaries adding to the networks annually 16 . This study 16 has drawn a potential link in formation process between the highly dendritic araneiform terrain of the south polar cryptic region and furrows/dendritic troughs. It has been suggested that araneiform terrain (dubbed “spiders” in the literature) may develop over many years by a gradual connection of dendritic networks into a radial network, rather than forming in one event 16 . 
Noting the difference in environmental conditions between the hemispheres, it has been argued that furrows in the northern hemisphere do not develop into dendrites because of the high mobility of the material into which they are eroded 16 . However, the size distribution of the granular substrate in both hemispheres is poorly constrained. For the purpose of our purely morphological laboratory study we refer to all network types as furrows. Recent experimental work has demonstrated that insolation-driven dust ejecta from within CO 2 ice is a viable process 18 and that pressure-driven processes can form dendritic patterns under certain conditions 19 . However, it has not yet been demonstrated that sublimating CO 2 in contact with porous substrate can transport underlying material and create the complex furrow morphologies seen on Mars. Until now, this geomorphic process has been framed in a purely theoretical context. Linear Gully Pits Linear gullies (Fig. 2 ) form on Martian dunes under current climatic conditions 10 , 20 , 21 , 22 . Recent mapping has shown that they occur between 36.3°S and 54.3°S and from 64.6°S to 70.4°S 22 . They are characterised by long (100–2,500 m), narrow grooves bounded by levées, and have relatively small source areas. Forming exclusively on south polar and mid-latitudinal intra-crater dunes, their activity is seasonal, and has been found to be concurrent with the final stages of sublimation of the CO 2 ice deposit at the end of winter and beginning of spring 22 , rendering a CO 2 sublimation formation mechanism likely. These eponymous features are mostly linear, though they sometimes adopt sinuous patterns and taper down-slope. Figure 2 Terminal and detached linear gully pit morphologies. ( a ) Terminal pits (white arrows) and detached pits (red arrows) at linear gully termini on Russell Crater megadune (HiRISE image ESP_020784_1255, image centre at −54.3°, 12.9°, taken at L s 209.5°).
Black arrows point to inset pits within terminal pits. Green arrow indicates a high albedo block which is likely to be a CO 2 block within a pre-existing terminal pit. ( b ) Terminal pits (white arrows) and detached pits (red arrows) on Russell Crater megadune (HiRISE image PSP_007018_1255 taken at L s 22.5°, image centre at −54.3°, 13° which was used as part of a stereo pair to create the DTM, DTEEC_007018_1255_007229_1255). ( c ) Example of highly sinuous linear gullies on the west side of Matara Crater dunefield (HiRISE image ESP_030528_1300, image centre at −49.5°, 77.2°, taken at L s 254.8°). Red arrows point to detached pits, white arrows show terminal pits. ( d ) “Tadpole” terminal pits in Proctor Crater dunefield (HiRISE image PSP_003800_1325, image centre at −46.9°, 30.7°, taken at L s 240.9°). These pits (white arrows) are much wider in diameter than their corresponding channels and are surrounded by detached pits (red arrows). ( e ) Russell Crater megadune (HiRISE image PSP_007018_1255). White boxes indicate locations of (a) and (b) . ( f ) Matara Crater dunefield. White box shows location of ( c ). ( g ) Proctor Crater dunefield (HiRISE image PSP_003800_1325). White box shows location of ( d ). HiRISE image credit: NASA/JPL/University of Arizona. Full size image Terminal pits Unlike terrestrial gullies, which are nearly always formed by liquid water, linear gullies do not have associated debris aprons. Instead, their channel termini are invariably punctuated by one (or more) terminal pit(s) (Fig. 2a–d ). Located at the end of the linear gully channel, the terminal pit is usually slightly wider than the channel 10 . Terminal pits are defined by a roughly circular depression encircled by levées. Most linear gully channels end in a single pit, herein referred to as the primary terminal pit. However, some terminal pits occur as part of a dyadic to triadic sequence known as “pit strings”. 
Secondary and tertiary terminal pits are defined as terminal pits that are in a linear sequence with the primary terminal pit, and are always similar in size to the primary terminal pit. Many linear gullies, particularly the largest in scale, have lower albedo circular depressions within the distal region of their channels and/or their primary terminal pits. We refer to these as inset pits. Detached pits Some terminal pits or terminal pit strings are surrounded by multiple detached pits (Fig. 2a–d ). These pits are not connected to (or in line with) channels and are usually considerably smaller than their associated terminal pit(s). Detached pits are always exclusive to the distal region of the channel and are not found upslope. Pit Formation Hypotheses Linear gully pits have no terrestrial analogues, and so it is likely that a geomorphic agent that does not occur naturally on Earth is instrumental in their formation. Originally, terminal pits were proposed to be older flow fronts formed by successive debris flows 20 ; however, this is unlikely as linear gullies are forming today, in a climate which does not currently support liquid water 10 . There are two main schools of thought on the type of CO 2 sublimation activity which might form linear gullies: the CO 2 basal sublimation triggered debris flow hypothesis 23 and the sliding CO 2 block hypothesis 21 . The CO 2 basal sublimation triggered debris flow hypothesis 23 suggests a solar-insolation-driven dry debris flow beneath seasonal CO 2 ice on dunes. Successive events have been suggested to carve gully channels in Russell Crater 23 . However, while basal-sublimation-driven debris flows are a plausible mechanism for channel formation, such a process would also deposit material at channel termini. Additionally, the model does not account for the presence of terminal pits or detached pits.
The sliding CO 2 block hypothesis 21 is currently the only process that can account for the formation of channels and terminal pits which distinguish linear gullies from other gully morphologies, making this class of gullies exclusive to Mars. The hypothesis proposes that in spring, ice blocks detach from dune brinks. The blocks then descend dune slopes supported by a lubricating layer of CO 2 gas. This gas layer is engendered by a mechanism similar to the Leidenfrost Effect 24 , which occurs when a liquid or solid is in thermal contact with a surface that is at a temperature far beyond its boiling or sublimation point. Upon surface contact, the substance (in this case, CO 2 ice) will levitate on a cushion of gas, because the total force exerted by the gas pressure under the block exceeds its weight. This lubricating gas layer eliminates, or reduces, frictional forces so that blocks may move down even very gentle slopes of a few degrees. As the block moves it will carve out channels and deposit lateral levées, until it eventually stops to sublimate at the terminus. This ultimate sublimation explains the formation of a terminal pit. The block is proposed to rest at the base of the channel while sublimating, eroding a depression beneath it and transporting material to the side to form levées. This protracted stationary sublimation may explain why pits are commonly slightly wider than their corresponding channels, which are proposed to form by the rapid movement of blocks downslope 21 . Thus far, alcove, channel, levée and shallow pit morphologies have been observed to form when CO 2 ice blocks were slid onto sand dunes in Utah and Arizona 25 , 26 . Further evidence consistent with this hypothesis was the detection of high albedo blocks within linear gully channels on Mars. These are most likely CO 2 ice blocks "caught in the act" of sliding through pre-existing linear gully channels 10 , 21 .
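The levitation condition underpinning the sliding block hypothesis (the force from basal gas pressure exceeding the block's weight) reduces to a simple inequality. The sketch below uses an assumed CO 2 ice density and purely illustrative block dimensions, not values from the paper.

```python
# Minimal sketch of the Leidenfrost levitation condition for a CO2 block:
# the block lifts when the excess gas pressure under its base exceeds its
# weight per unit base area. All numbers here are illustrative assumptions.

G_MARS = 3.71          # Mars surface gravity, m/s^2
RHO_CO2_ICE = 1600.0   # approximate CO2 ice density, kg/m^3 (assumed)

def min_excess_pressure(length_m, width_m, height_m,
                        rho=RHO_CO2_ICE, g=G_MARS):
    """Excess basal gas pressure (Pa) needed to levitate a rectangular block.

    Condition: P_excess * A >= m * g, with m = rho * L * W * H and A = L * W,
    so P_excess >= rho * g * H (independent of the base area).
    """
    area = length_m * width_m
    mass = rho * area * height_m
    return mass * g / area  # equals rho * g * height_m

# A hypothetical 1 m x 0.5 m x 0.2 m block:
p = min_excess_pressure(1.0, 0.5, 0.2)
```

Because the base area cancels, the threshold pressure depends only on block thickness, which is why even thin blocks can levitate once sublimation is vigorous.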
It is possible that secondary and tertiary terminal pits may form by blocks that bounce at the channel terminus in the manner observed during field tests on sand dunes in Utah 26 . It has also been proposed that the multiple detached pits which surround many linear gully termini may be formed by smaller parts of blocks which have broken up at the terminus, scattering ice fragments into their surroundings 21 . However, the formation mechanism responsible for pronounced terminal pits and detached pits remains uncertain. It is not known if a stationary CO 2 block can erode and deposit material to form the full range of pits identified here. Why detached pits are exclusive to a small fraction of linear gullies and why they are restricted to the lower part of the channel is also not yet clear. If detached pits were formed by broken blocks, it is conceivable that blocks would also break at locations such as sharp bends in the channel, areas of dune surface undulation and locations where channels merge. These are all regions where a block would be subjected to stress and yet detached pits are only found at channel termini. We hypothesise that if stationary sublimating CO 2 blocks have the geomorphic agency to excavate and deposit material to form pits and circular levées, they must undergo a rapid sublimation process to do so. We reason therefore that blocks which reach the terminus may emit CO 2 jets as they sublimate. If these jets entrain sufficient granular material 27 they may then be capable of eroding detached pits when they return to the surface. Prior to this study, measurements of linear gullies have concentrated on geographical trends 22 and the linear gully system as a whole 21 , 26 . Comprehending pit formation is crucial to understanding the formation mechanism responsible for this extra-terrestrial gully type. Therefore, it is important that we test whether sublimating CO 2 ice blocks can form pit morphologies.
Methods Objectives To test the CO 2 block hypothesis, in the context of pit formation, we first conducted a short survey of Martian linear gully terminal pits and detached pits on dunes in Russell, Proctor and Matara Craters. The objectives of this study were to (i) characterise the different terminal pit morphometries, (ii) investigate if there was any evidence of a link between these terminal pit morphometries and detached pits and (iii) investigate the range of block sizes which may be reactivating and widening linear gully channels seasonally. To further test the CO 2 block hypothesis, in the context of pit formation, we performed experiments to examine whether sliding, and hence partially buried, CO 2 ice blocks would form morphologies comparable to Martian linear gully primary terminal pits and detached pits. To test the cryoventing hypothesis, in the context of furrow formation, we performed a second set of experiments with two aims: (i) to study the erosion patterns resulting from the interaction between a gently placed CO 2 block and granular substrate and compare these features to a pre-existing Martian furrow network classification 12 ; (ii) to study the factors which constrain furrow pattern type and furrow density. Survey of Linear Gully Pits in Proctor, Russell and Matara Craters High Resolution Imaging Science Experiment (HiRISE) Digital Terrain Models (DTMs) DTEEC_003080_1325_004077 and DTEEC_007018_1255_007229 and corresponding orthophotos were employed to survey linear gully pits in Proctor Crater and Russell Crater, respectively. The polygon tool in ArcMap 10.4 was used to fit a circular polygon along the levée apex of each terminal pit and measure its area, taking the diameter of this circle as the pit width. Terminal pit area was calculated by squaring the pit radius and multiplying by π . Pit depths were measured using the Interpolate Line tool, by drawing a line across the pit centre and extending this to the surrounding dune surface.
Depth was estimated as the difference in elevation between a pit floor and the surrounding dune surface on each side, and these values were averaged. The Russell Crater DTM (DTEEC_007018_1255_007229) has an estimated vertical precision of 1.2 m 28 . This relatively low vertical precision is attributed to a low convergence angle (or sum of emission angles) between the stereo pair used to develop the DTM. Horizontal accuracy of this DTM was given by post spacing, which was 1 m/pixel. The Proctor Crater DTM (DTEEC_003080_1325_004077) has an estimated vertical precision of <0.5 m 28 and a horizontal accuracy of 1 m/pixel. Pit widths in both locations were generally much wider than 1 m and so horizontal accuracy should not significantly affect our measurements. Both sites were dark dunes and so noise in the DTMs may have affected our measurements to a small degree. We have estimated an upper limit of this effect by taking 10 linear cross sections close to pit locations in both DTMs. We detrended these cross sections, averaged them and calculated the standard deviation as an estimate of noise 29 . This value was 1.7 m in Russell Crater and 1.08 m in Proctor Crater, so depths below these respective thresholds were not reported. Horizontal measurements from orthophotos at both locations may have been affected by atmospheric dust and detector noise, which was at the pixel level and would affect these measurements by one pixel at most. A time-series of HiRISE images was used to determine whether there may be a link between detached pit and terminal pit formation. We propose that larger pit sizes (which indicate larger ice blocks) have a higher probability of generating detached pits. Terminal pit areas were measured as outlined above and the number of detached pits surrounding them was counted. Detached pits were identified as low albedo depressions.
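The pit-width, depth and noise-threshold calculations described above amount to simple arithmetic; a sketch with invented placeholder numbers (not actual DTM values) might look like:

```python
# Sketch of the pit measurements and noise threshold described above.
# All input values here are invented placeholders, not actual DTM data.
import numpy as np
from scipy import signal

# Pit width from a circular polygon fitted along the levée apex:
polygon_area = 28.3                      # m^2, hypothetical measured area
radius = np.sqrt(polygon_area / np.pi)   # invert area = pi * r^2
width = 2.0 * radius                     # pit width taken as the diameter

# Depth: elevation difference between the pit floor and the dune surface,
# estimated on each side of an interpolated line and then averaged.
depth = np.mean([2.4, 2.1])              # m, hypothetical side estimates

# Noise threshold: detrend cross sections taken near the pits, average
# them, and use the standard deviation as the minimum reportable depth.
cross_sections = np.random.default_rng(0).normal(0.0, 0.5, size=(10, 50))
detrended = signal.detrend(cross_sections, axis=1)
noise = detrended.mean(axis=0).std()
reportable = depth > noise               # only depths above noise are kept
```

Only depths exceeding the noise estimate would be reported, mirroring the 1.7 m and 1.08 m thresholds used for the two sites.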
Negative topography was confirmed using the Interpolate Line tool to generate topographic profile data across the feature in the corresponding DTM where possible. Smaller features that may have been artefacts of dune surroundings (e.g. shadows in ripples) were not included. HiRISE images taken at the same location (−54.3°, 12.9–13°) on the Russell megadune and Matara Crater dunes (−49.5°, 34.7° and −49.5°, 34.6°) allowed us to identify new or widened terminal pits and new detached pits. Suitable data were not available for Proctor Crater. The Russell Crater observations ranged over 4 Mars years (MY) between MY 29 and 32. The images used were ESP_012213_1255, ESP_020784_1255, ESP_029764_1255 and ESP_039153_1255 for MY 29, 30, 31 and 32 respectively. These images have emission angles of 8.2°, 5.1°, 3.8° and 3.8° respectively. Emission angle is the angle between the HiRISE camera and a normal drawn perpendicular to the surface, where 0° is known as nadir. Roll directions (obtained by comparing image centre longitude and subspacecraft longitude) were from west, east, east and west, respectively. For Matara Crater the survey extended over 2 MY between MY 30 and 31 (HiRISE images ESP_020414_1300 and ESP_029750_1305 for sites at −49.5°, 34.7°). These images have emission angles of 4.7° looking from west and 0.4° looking from east respectively. ESP_021759_1300 and ESP_030528_1300 were examined for sites at −49.5°, 34.6°. These images had emission angles of 9.7° looking from east and 12.2° looking from east, respectively. Differences in lighting were accommodated by adjusting contrast and brightness in the overlapping images. New detached pits were identified as circular depressions of low albedo that were not present in the previous MY image and which surrounded a terminal pit.
The extent to which pits were widened (if any) was measured by fitting a circular polygon to the same terminal pit for two consecutive Mars years, calculating area as outlined above, and differencing these data. Early spring images were examined for each location in order to measure high albedo features thought to be CO 2 blocks within channels. This was done by zooming in to optimal pixel resolution and using the Measure tool to record their width and length. The images were taken at similar L s , or solar longitude (the Mars/Sun angle, measured from the northern hemisphere spring equinox where L s = 0, a measurement used to quantify Martian seasons). Images were also selected based on the emission angle. To minimise the effect of geometric distortions, single colour RDR images were used in each case. These are radiometrically-corrected images which are map projected. The radiometric correction adjusts for instrument offset, dark current and gain and then converts pixel values to I/F (intensity/flux) reflectance. Geometric processing corrects for optical distortion and projects the image from spacecraft viewing to a map coordinate system. The MOLA (Mars Orbiter Laser Altimeter) DTM is used to improve the camera pointing intercept position on the Martian surface. Orthorectification corrects for distortions that may occur in off-nadir images where the spacecraft roll angle causes pixel foreshortening in the cross-track direction 30 . The images we used were not orthorectified, and so disparities may occur when comparing images with different observation geometry. To minimise this effect, images close to nadir were chosen and care was taken to select images with less than a 5° difference in emission angle. Because such differences are small, we can neglect parallax distortions 31 . A correction was made for any minor deviations, however, by dividing x -direction measurements by the cosine of the emission angle 30 .
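The emission-angle correction mentioned above is a single division; a minimal sketch, assuming a hypothetical 10 m measurement at a 5° emission angle:

```python
# Sketch of the emission-angle correction described above: cross-track
# (x-direction) measurements are divided by the cosine of the emission
# angle to compensate for pixel foreshortening in off-nadir images.
import math

def correct_cross_track(measured_m, emission_angle_deg):
    """Return a foreshortening-corrected x-direction length in metres."""
    return measured_m / math.cos(math.radians(emission_angle_deg))

# A hypothetical 10 m measurement at a 5 degree emission angle changes
# by only a few centimetres, consistent with distortions of tens of
# centimetres at most for near-nadir images.
corrected = correct_cross_track(10.0, 5.0)
```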
In each case the distortions are within tens of centimetres and thus fall within error bars for our measurements. Experimental Setup In order to investigate the CO 2 block hypothesis for pit formation 21 and the cryoventing hypothesis for furrow formation 11 , we performed laboratory experiments. Initial pilot work, under ambient terrestrial conditions, revealed that water in the atmosphere had a significant effect as it formed frost on the surface of the block and on the bed. This affects the heat budget, the permeability of the bed and the mobility of the grains and must be avoided. Additionally, this frost would later melt and erase the surface microtopography. Therefore, we performed our experiments in a low humidity chamber. The chamber was erected on a level surface in a constant temperature (Δ T ≈ 3 K) laboratory. A plastic container (460 × 675 × 400 mm) was filled with dehumidifying silica gel beads. A smaller plastic container (300 × 520 × 370 mm) was placed inside, forming a silica gel bead moat which surrounded the interior container. A perspex lid was fitted on top of the chamber and vacuum bagging gum sealant tape was added at the interface between the container and lid to ensure the chamber was air-tight. A sealed trap door was constructed within the perspex lid in order to minimise exposure to atmospheric water vapour when placing blocks inside. Prior to each experimental run, three CO 2 ice blocks were placed upon the silica bead moat and were given time to sublimate. These generated dense CO 2 gas which flushed out the original gaseous content, thus removing any water vapour. This reduced the relative humidity sufficiently so that there was no noticeable frost formation during the experiments on the ~−80 °C CO 2 ice blocks.
Though grain sizes at linear gully and furrow locations on Mars have not been constrained, we used the preliminary data collected by the Curiosity Rover in the Bagnold dune setting on Mars to optimise the range of grain sizes employed in our experiments. We estimated a scale factor of 0.61 (see Supplementary Material, Experimental Scaling ) by which to reduce grain size to compensate for the disparity in gravity between Mars and Earth. Grain sizes detected in the Bagnold dunes ranged from fine to coarse sand 32 , with many passing through a <150 μm sieve. The average grain size detected in the Bagnold dunes was between 200 and 300 μm 32 . When scaled, these ranges fall between <90 μm and 122–183 μm , respectively. Therefore, Guyson Honite Glass Spheres of four grain diameter ranges (4–45 μm , 45–90 μm , 75–150 μm and 160–212 μm ) were used for sixteen separate experimental runs. Experimental Protocol In the low humidity environment, pure CO 2 ice blocks were slid onto beds of glass spheres of each grain size range. We define "sliding" as a gentle motion nudging the block onto the bed surface, with just enough force to slightly disrupt the granular surface. Sublimation then transports the grains underneath and near the edge of the block. We used Structure from Motion 33 (SfM) to build Digital Elevation Models (DEMs) of the resultant morphologies from each experiment. We then compared these morphologies and morphometries with Martian terminal and detached pits measured using HiRISE images and DTMs. A second set of experiments was designed to study the formation of furrows. The blocks were placed as gently as possible on a flat granular bed in order to generate CO 2 gas flow beneath the block. The aim was to investigate whether such a layer of gas at the interface between CO 2 ice and a granular substrate could form furrow networks on an initially smooth and level bed.
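The grain-size scaling described above amounts to multiplying Martian grain diameters by the 0.61 scale factor; a minimal sketch:

```python
# Sketch of the grain-size scaling described above: Bagnold dune grain
# diameters are multiplied by the 0.61 scale factor to compensate for
# the difference in gravity between Mars and Earth.
SCALE_FACTOR = 0.61

def scale_grain_size(d_microns):
    """Scale a Martian grain diameter (microns) to its lab equivalent."""
    return d_microns * SCALE_FACTOR

# The average Bagnold grain sizes of 200-300 microns map onto the
# 122-183 micron range quoted above.
scaled_range = (scale_grain_size(200), scale_grain_size(300))
```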
SfM was again employed to build high resolution DEMs of the features produced and the resulting furrow morphologies were compared with the well-characterised furrow networks on Martian dunes 11 , 12 . Each granular sample was dried and sieved to disaggregate material prior to each experiment. The sample was then poured into the inner container and levelled using a spirit level. A time-lapse camera was positioned inside the chamber to record sublimation rate. A digital hygrometer placed on top of the bed indicated depression of relative humidity in real-time. Once relative humidity decreased sufficiently, the trap door was opened and a CO 2 ice block of mass <1 kg (Table 1 ) was either placed or slid onto the bed. The chamber was immediately sealed and the block in each case was allowed to sublimate and interact with the granular substrate. This sublimation process lasted ~7–11 hours for each block depending on its mass and whether it burrowed. Videos of the initial sublimation dynamics in each case were recorded with an iPhone 6S 12 megapixel camera from outside the chamber, in order to avoid accumulation of grains on the lens which would hamper video quality. Table 1 Laboratory experiment controlled and measured parameters: Full size table Digital Elevation Model Development All features resulting from CO 2 sublimation were modelled in three dimensions by SfM 33 using Agisoft Photoscan. SfM is a technique for reconstructing three dimensional structures from two dimensional image sequences. Agisoft Photoscan is commercially available software which can photogrammetrically process digital images to create 3D spatial data. Each feature produced was imaged at many overlapping positions. In order to establish scale in the DEMs, coded markers were placed within the scene. Agisoft Photoscan finds the exact centre of coded markers enabling the production of highly accurate DEMs and the accurate measurement of features in the scene. 
Agisoft recommend that three or more scale bars are optimal. Therefore, a local coordinate system composed of three coded markers at known distances apart from one another, was used for scale definition 34 to develop our 3D models. This local coordinate system was composed of three black and white circular 12-bit coded markers which were printed on three 6.8 × 8 cm sheets of paper. The centres of these markers were positioned on a flat wooden triangle (of 75 cm 2 area) and the markers themselves were laminated with a thin layer of plastic 34 . The ( x , y , z ) coordinates of the marker centres were carefully measured with an Engineer’s Scale prior to placement in the scene 34 and these were later entered in Agisoft Photoscan to develop scale bars for reference within the models. These coordinates (in metres) were: (0, 0, 0), (0.064, 0.115, 0) and (0.131, 0, 0) and these have accuracy <1 mm. The scale was in a constant location relative to the experimental chamber in each case, the centre of the nearest target on the scale was 15 cm from the chamber. This was close enough so that it could be seen in multiple overlapping images. This served as a reference for scale definition and also helped the processing tool to align images accurately. Constancy was assured by the remote nature of the laboratory — external vibrations were minimised. Care was taken when moving around the region of interest not to cause vibrational disturbances. In order to ensure a vertical orientation of the z -plane, the local coordinate system was placed flat on the laboratory bench during each experiment. The planar arrangement of the coded markers was confirmed using a spirit level to ensure the bench was level and the laminate nature of the markers ensured they did not bend. The images were captured at a maximum distance of ~1 m from the bed surface and minimum distance of ~0.05 m at a variety of angles with respect to the image subject in each case. 
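The scale-bar definition described above reduces to the pairwise distances between the three coded-marker centres; a sketch using the coordinates quoted above:

```python
# Sketch of scale-bar definition from the marker coordinates above: the
# pairwise distances between the three coded-marker centres give the
# reference lengths entered into the photogrammetry software.
import math
from itertools import combinations

# Marker-centre coordinates in metres, as quoted in the text.
markers = {
    "m1": (0.0, 0.0, 0.0),
    "m2": (0.064, 0.115, 0.0),
    "m3": (0.131, 0.0, 0.0),
}

def dist(p, q):
    """Euclidean distance between two 3D points (metres)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

scale_bars = {(a, b): dist(markers[a], markers[b])
              for a, b in combinations(markers, 2)}
# e.g. the bar between the first and third markers is exactly 0.131 m
```

Three such bars over-determine the scale, which is why three or more markers are recommended.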
Camera positions were not recorded, as Agisoft Photoscan can compute accurate estimates. The focal length on the camera and aperture were fixed at 4.15 mm and f2.2 respectively and otherwise, the camera was not calibrated. Between 41 and 100 images were captured for each experiment, depending on whether fine detail such as furrow patterns were to be captured, or whether primary pit dimensions alone would be measured. The images were then aligned and referenced in Agisoft Photoscan, to build a point cloud, mesh and generate a DEM and corresponding orthophoto of each feature (resolutions in Table 1 ). The dimensions of each pit were then measured in ArcMap 10.4, using the DEM and orthophoto. Differencing before and after DEMs in order to estimate pit depth was not possible due to the high albedo of the initial flat bed. We approximated the initial level surface by taking an average of 5 linear cross sections of the flat bed surrounding (but farthest from) the pit in ArcMap 10.4. In each case brighter regions where distortions were expected were avoided when taking these transects. A line was interpolated across the primary pit to these locations and the difference in height between the original bed surface and the depression formed by the CO 2 ice block was determined. An average and standard deviation of these values were taken in each case and standard deviations fall within the uncertainty margins outlined in Supplementary Material, Digital Elevation Model Uncertainty Estimates . Levées were measured in a similar manner by taking average values of the difference between the average flat surface and levée height. Maximum and minimum levée height were recorded in each case. The area of furrows produced on each bed surface was recorded using polygonal mapping in ArcMap 10.4. 
The area of the flat pit floor in each case was determined by zooming in to optimal pixel resolution on the orthophoto overlain above the DEM and using the free-hand tool to mark the line where the inner slope of the levées ended and the flat pit floor began. The pit floor is defined as the reasonably flat area directly below where the incident block was, with its perimeter identified at 1-pixel resolution as the line where the inner levée slope ended. The area of the space between furrow networks was determined by zooming in to optimal pixel resolution and mapping the outer edge of each furrow. This total area was differenced from the total pit floor area to get the area covered by furrow networks. The area covered by furrows was then expressed as a percentage of the total pit floor area. A complete discussion of the DEM uncertainties presented in this paper is available in Supplementary Material .

Data Availability

The datasets generated and analysed during the current study are available from the corresponding author on request.

Results and Discussion

Primary Pit and Levée Formation via CO 2 Ice Block Sublimation

Pits, which are visually comparable to Martian linear gully primary terminal pits, were observed to form via both the (i) stationary and (ii) sliding interaction between sublimating CO 2 ice and substrate of all grain sizes. The primary terminal pits which punctuate linear gully channels on Mars are characterised by depressions in the substrate encircled by positive relief levées. The pits observed in our laboratory experiments were negative relief features surrounded by positive relief levées, which formed primarily by excavation of material via sublimation. Pits were deepest for finer grain sizes (Table 1 ). In the cases where material was not transported on top of the block, the resulting pits adopted the shape of the block that formed them (Fig. 3a,d ). In our experiments, blocks were rectangular.
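The furrow-coverage bookkeeping described in the Methods above (inter-furrow area differenced from pit floor area, remainder expressed as a percentage) amounts to the following sketch; the function name and sample areas are illustrative:

```python
def furrow_coverage_percent(pit_floor_area, inter_furrow_area):
    """
    Furrow coverage as described in the Methods: the mapped area
    between furrow networks is differenced from the total pit floor
    area, and the remainder is expressed as a percentage of the pit
    floor. Areas must be in consistent units (e.g. cm^2).
    """
    furrow_area = pit_floor_area - inter_furrow_area
    return 100.0 * furrow_area / pit_floor_area
```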
In other cases, the block burrowed into the bed and superposed material sank as the block moved downward. In all cases, the primary pit which formed was wider than the block forming it.

Figure 3 Primary pits and detached impact pits formed during our experiments by the interaction between sublimating CO 2 ice and granular substrate. (a) Primary pit formed by block placement on a bed of 45–90 μm grains. Levées are denoted by blue arrows. Yellow arrow points to a preserved vent aperture. Purple arrow indicates a zone of slumping. (b,c) Detached impact pits (black arrows) formed by return of granular clusters to a 4–45 μm bed surface as a sliding block burrowed beneath the bed and exhibited jet activity from the subsurface. Red arrow indicates a linear string of impact pits. These impact pits were ubiquitous over the primary pit surface (seen in e ). (d) Primary pit formed in another instance of block placement on a bed of 45–90 μm grains. Blue arrows point to levées. ( e ) Wider view of the system formed when a block was slid onto a bed of 4–45 μm grains. Orange arrows identify collapsed pit morphologies which signify locations where jets emanated from the subsurface. White boxes show locations of (b) and (c) . Full size image

Upon initial contact with the substrate, which was roughly at room temperature, the block in each case underwent rapid sublimation, causing it to levitate and mobilising material within its surroundings. We interpreted this behaviour as a manifestation of the Leidenfrost Effect, as considered in the CO 2 block hypothesis. The force of the escaping gas transported grains within the sublimating flow from beneath the block to its sides, forming levées. During this initial stage of sublimation, vent apertures developed at gaps between the block and the inner slope of the levées, from which jets of gas were observed to expel granular material onto the main granular sheet (Supplementary movie 1 ).
Some of these vent apertures were preserved after the sublimation process, while others were destroyed by gravitational collapse. After the initial rapid sublimation stage, sediment transport ceased and the block sublimated slowly for ~7–11 hours. It is thought that at this stage the temperature difference between the block and the granular substrate decreased below the Leidenfrost point, thus ending the levitation and sediment transport phase. When a block was slid onto the bed surface, a greater range of dynamic responses was observed (Supplementary movie 3 ). In these cases, particularly for the 4–45 μm and 45–90 μm grains, sublimation dynamics mobilised grains on top of the block. This increased the thermal contact between the substrate and the surface area of the block and subjected more grains to the Leidenfrost Effect and the force of rapid sublimation. Pits were generally deeper in these instances than when a block was placed on a bed of the same grain size range (Table 1 ). Levées that formed during each experiment increased in maximum height with decreasing grain size (Table 1 ). This is expected, since decreasing grain size increases grain mobility as the Shields parameter (a non-dimensional number used to calculate the initiation of motion of sediment in a fluid flow 35 ) increases. Levée morphologies were comparable to those encircling the terminal pits of linear gullies on Mars: these were raised, positive relief features which surrounded the pit in each case. The relationship between levées and terrestrial debris flows, combined with sinuosity, has previously been invoked to support a debris flow genesis for gullies (including linear gullies) on Mars 36 . Our experiments highlight the equifinality of the granular response to the movement of dry and wet fluids. Levées formed without the need for downslope block movement, and granular material was transported to the side under the influence of pressurised gas alone.
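The grain-size dependence noted above follows from the standard definition of the Shields parameter, θ = τ / ((ρ_s − ρ_f) g d): for a fixed bed shear stress, θ grows as the grain diameter d shrinks. A minimal sketch (all numerical values in the test are illustrative, not measured in our experiments):

```python
def shields_parameter(shear_stress, rho_sediment, rho_fluid, g, grain_diameter):
    """
    Non-dimensional Shields parameter:
        theta = tau / ((rho_s - rho_f) * g * d)
    where tau is the bed shear stress (Pa), rho_s and rho_f are the
    sediment and fluid densities (kg/m^3), g is gravitational
    acceleration (m/s^2) and d is the grain diameter (m). Larger
    theta indicates grains closer to (or beyond) the threshold of
    motion, so finer grains are more easily mobilised.
    """
    return shear_stress / ((rho_sediment - rho_fluid) * g * grain_diameter)
```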
This is consistent with the sliding CO 2 hypothesis, which suggests that stationary blocks at linear gully termini are capable of excavating and transporting sediment to form pits and surrounding levées.

Furrow Formation via CO 2 Sublimation

Our laboratory experiments are the first to show that features similar to the sand furrows on Mars can form by sublimating CO 2 . Furrows formed under the CO 2 blocks and closely resembled Martian furrows: they were negative relief features similar in pattern and planform, though on a smaller scale. The furrows observed displayed a range of patterns (linear, sinuous, dendritic and curvilinear). Martian furrows have been spatially correlated with dark fans, where vents in the ice are posited to form. These fans are thought to be composed of sand transported from beneath the ice by the cryoventing mechanism 12 . Furrow networks observed in the laboratory terminated at vents that developed at the boundary between the ice block and the pit walls (Fig. 4a ). When a block was placed on the bed surface and carefully lifted, as shown in Supplementary Movie 2 , furrow mouths were found to be located at regions where vents had formed. Jets at vent locations were observed to transport grains from beneath the block. From these observations, we deduce that it is possible for furrows to be formed by pressurised gas that escapes through vents.

Figure 4 Furrow patterns and networks observed in the laboratory. ( a ) Before (left panel) and after (right panel) sublimation for an experiment on a bed of 45–90 μm grains. Left panel: sublimating block. Arrows point to vents where gas escapes from beneath the block. Right panel: primary pit formed on the same bed by the block shown in the left panel, containing linear and sinuous furrows. The direction of gas flow is marked by the convergence of furrows towards vent locations (blue).
( b ) Dendritic network of furrows (white arrows), formed in <1 minute when a block was placed on a bed of 75–150 μm grains. This block was lifted and then removed to demonstrate that dendritic patterns can form rapidly. ( c ) Primary pit containing linear and sinuous furrows extending across a pit floor on a bed of 4–45 μm grains. Vent apertures were not preserved; however, it was noted by observation that vents operated on each side of the block. ( d ) Close-up of a dendritic furrow network (white arrows) on a bed of 45–90 μm grains. Full size image

Furrow abundance increased with decreasing grain size (Fig. 5c ). Considering also the general increase in pit depth (Fig. 5b ) and maximum levée height with decreasing grain size (Table 1 ), we report a systematic dependence of feature formation via CO 2 sublimation on grain size. When grain size distributions on Martian dunes are better constrained, this observation might shed light on why furrows are restricted to certain locations in the southern hemisphere 13 . However, we find that furrow pattern and network type are independent of grain size. Using the furrow pattern and furrow network classification derived from a survey of Martian dunes 12 , we classified the different network types formed by CO 2 sublimation according to planform. The following classes were delineated: linear (straight negative relief lines in the substrate), curvilinear (curved negative relief lines in the substrate showing one inflection point), sinuous (highly curved negative relief lines in the substrate displaying more than one inflection point) and dendritic (branching negative relief features in the substrate) 12 .

Figure 5 ( a ) Martian detached pit abundance versus area of terminal pits in Russell and Matara Craters. The plot indicates an increase in the number of new detached pits each spring with increasing linear gully terminal pit area in Russell and Matara Craters.
Error bars were calculated by taking 2 × the pixel width for each site and propagating the error on πr 2 for each main (terminal) pit area. The R 2 value for the linear fit is 0.72 and the p value is <0.01. ( b ) Laboratory primary pit depth eroded per block thickness. The plot indicates that pits observed in the laboratory were generally deeper on beds of the finest grain sizes. The CO 2 block placed on a bed of 45–90 μm grains in experiment 2 partially burrowed and is therefore not included here. Error bars denote vertical error (z error) on each DEM. This was calculated by taking 10 vertical sections of a ruler where available, or of a known measurement in the scene of the DEM, averaging them and differencing this value from the actual measurement. ( c ) Furrow abundance observed in the laboratory versus grain size. The plot shows that furrow abundance observed in the laboratory decreased with increasing grain size. Error bars were calculated by defining horizontal uncertainty as better than 1 mm; defining furrows as negative relief erosional lines in the substrate; defining the pit floor as the section, at 1-pixel resolution, where the inner slope of the surrounding levée ended and the relatively flat surface beneath where the block had been began; and allowing for error in digitisation decisions when marking the pit floor and furrow outlines in ArcGIS. Considering these factors, uncertainty on measurements was estimated to be 10 orthophoto pixels in each case, and these values were propagated through the furrow percentage expression to obtain an estimate of uncertainty on furrow abundance. Full size image

On Mars, the specific location and patterns of furrows on dunes changed between Mars years following emplacement and sublimation of the seasonal CO 2 ice deposit. We can report a similar outcome for our experiments, where furrow location and pattern changed for similar grain sizes between experiments.
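The pit-area error bars described above follow from first-order propagation through A = πr 2, i.e. δA = 2πr·δr with δr taken as 2 × the pixel width. A minimal sketch (the sample radius and pixel width in the test are illustrative):

```python
import math

def pit_area_uncertainty(radius, pixel_width):
    """
    Propagate a radius uncertainty of 2 x pixel width through
    A = pi * r^2, i.e. dA = 2 * pi * r * dr.
    All lengths in metres; returns (area, area_uncertainty) in m^2.
    """
    dr = 2.0 * pixel_width          # radius uncertainty
    area = math.pi * radius ** 2    # pit area
    return area, 2.0 * math.pi * radius * dr
```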
In some cases, a variety of networks and patterns developed on beds of the same grain size range. We identified dendritic and curvilinear furrow networks on the 4–45 μm bed pit floor (Fig. 4d ), while only linear and sinuous patterns resulted from a second experiment of block placement on a 4–45 μm bed (Fig. 4c ). Dendritic networks were observed in certain experiments across all grain sizes and these developed almost instantaneously, as observed in Supplementary Movie 2 , rather than via network growth over time. However, furrows detected in our experiments formed via the contact between a ~−80 °C CO 2 ice block and substrate which was roughly at room temperature, and our incident ice was on a much smaller scale than that which seasonally covers Martian dunes. Our initial conditions were very different to the gradual basal heating of the CO 2 ice sheet and consequent ice rupture proposed in the cryoventing model. Nevertheless, we have demonstrated that dendritic patterns can form in a single cryoventing event, as was proposed for Mars 12 . Dendritic patterns tended to form when there were few vents, and when vents were furthest apart. We note also that when vents were located directly across from one another at each side of the block, linear and sinuous furrows formed instead. This is consistent with the observation that cryoventing efficacy on Mars is constrained by vent spacing 12 . We interpret this laboratory observation as an indication that cryoventing is likely to be limited by a pressure gradient. High pressure gas at the centre of the ice/substrate interface will be drawn towards an area of low pressure, such as that provided by a vent. If there is a roughly equal pressure gradient between the block centre and two lateral vents opposite one another, gas will escape at a similar rate towards either side, and this will be reflected in the presence of linear or sinuous patterns that extend and connect across the base of the pit floor.
We propose that Martian furrow formation and network type may be influenced by a pressure gradient, supplied either by inhomogeneous ice thickness on dune slopes and/or stress locations such as dune brinks. Further work may clarify the extent to which ice thickness and surface topography influence pressure gradients at furrow and dendritic trough locations on Mars.

Impact and Collapsed Pit Formation

In addition to primary pit formation, the cases where a block was slid onto grains of 4–45 μm and 45–90 μm revealed two additional pit types (Fig. 3b,c,e ). These instances involved very different fluid dynamics to those observed in all other cases. Grain mobilisation, induced by increased thermal contact between the ice surface area and the warmer granular material, enabled the block to move downward, and the bed appeared fluidised (Supplementary movie 3 ). This process lasted ~1 minute in each case before the block was observed to submerge fully within the granular bed. CO 2 gas jets escaped from the subsurface (Supplementary movie 3 ), excavating the material in the space surrounding their path and providing room for adjacent material to sink, forming collapsed pits (Fig. 3e ). These endogenic pits were characterised by concentric tensional fissures and did not have raised rims. Finally, a third pit type was observed to form via the return of jetted sediment to the steady bed surface. These impact pits (Fig. 3b,c ) were smaller in diameter than the primary pit and varied little in size (from below our measurement accuracy up to 6 mm in diameter). Some occurred in strings and all possessed raised rims, suggesting surface material was ejected in their formation. Following the cessation of sublimation, impact pits were ubiquitous across the primary pit, levées and surrounding bed surface (Fig. 6 ).

Figure 6 Conceptual model of 3 pit formation modes. ( a ) Block is slid onto grains (beige), increasing thermal contact.
( b ) Grains are mobilised on top of the block via escaping pressurised gas, increasing CO 2 ice exposure to the Leidenfrost Effect. ( c ) Block submerges; gas jets escape and surface material sinks, forming collapsed pits. ( d ) Impact pits form by substrate return via fluid instability, such as inelastic collapse of gas jets or granular clustering within jets; returned grains impact the surface forming shallow, rimmed impact pits. ( e ) 3 resultant pit morphologies: primary pit, collapsed pits, impact pits (within the main pit and on the surrounding bed surface) and cracks. ( f ) Legend. Full size image

Pit Measurements in Proctor, Russell and Matara Craters

For morphometric measurements, we surveyed 60 primary terminal pits on Russell Crater megadune and 135 primary terminal pits at 13 sites in Proctor Crater. This preliminary study was carried out purely to demonstrate an initial trend that may help us to assess our laboratory data in the context of Martian dunes and is not intended as an exhaustive survey of Martian linear gully pits. All terminal pits were characterised by a roughly circular depression surrounded by circular levées; they were shallow (depths of <1 m to 2.5 m) and varied greatly in diameter, from <1 m to as much as 19 m. Pits were generally wider than their accompanying channels. This is consistent with our laboratory observations that pits were wider in diameter than the sublimating blocks which formed them. We propose that blocks translating down-slope do so quite rapidly, so the width of the channel carved corresponds with the block width, while the stationary block may sublimate at the terminus, eroding and transporting material for longer at that site. In Proctor Crater, the average primary terminal pit width was 3.2 ± 0.5 m and the majority of pits (83.7% of those sampled) were less than 4 m in diameter, with the minimum pit diameter being 1.4 m. The remaining 16.3% of pits were between 4 m and 8 m in diameter (maximum 7.4 m).
Only 16 terminal pits were deeper than the vertical uncertainty of 1.08 m on the Proctor Crater DTM used. The aspect ratios of terminal pits ranged between 0.8 and 1.7. The Russell Crater terminal pits are larger: only 11.7% of the terminal pits sampled were 4 m or less in diameter, while 46.7% were between 4 and 8 m in diameter and 41.6% were between 8 m and 19 m in diameter. The minimum primary terminal pit width on Russell Crater megadune was 2.7 m and the maximum was 18.6 m. Only 3 primary terminal pits were deeper than the upper-limit vertical uncertainty of 1.7 m on the Russell Crater DTM, and for these we determined an aspect ratio range between 3.6 and 8.3. Detached pits on Russell Crater megadune, of average diameter 2.5 m, were found predominantly in the vicinity of the largest primary terminal pits. Other large primary terminal pits were not accompanied by detached pits. However, these particular linear gullies exhibited features associated with a higher probability of block energy loss, such as high sinuosity and small lateral channels. These features may have slowed blocks down or broken them up before they reached the termini. We observed a trend in the Russell Crater and Matara Crater detached pit data (Fig. 5a ) which suggests that the abundance of these multiple detached pits at linear gully termini increases with the surface area of the newest/widened terminal pit. Should terminal pits be formed via the sublimation of CO 2 ice blocks, the data suggest that a greater block size (and hence greater surface area exposed to sublimation) produces more detached pits. This is in agreement with our laboratory observations that “detached” impact pits formed on finer grain sizes when a greater amount of CO 2 ice surface area was undergoing the initial rapid sublimation dynamics of the Leidenfrost Effect, allowing the block to burrow.
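The diameter classes quoted above (≤4 m, 4–8 m, >8 m) can be tallied with a short helper; the sample diameters in the test are illustrative, not the surveyed values:

```python
def diameter_bin_percentages(diameters, edges=(4.0, 8.0)):
    """
    Percentage of terminal pits in each diameter class used in the
    survey: <= 4 m, 4-8 m, and > 8 m. Diameters are in metres.
    Returns a (small, mid, large) tuple of percentages.
    """
    n = len(diameters)
    small = sum(d <= edges[0] for d in diameters)
    mid = sum(edges[0] < d <= edges[1] for d in diameters)
    large = n - small - mid
    return tuple(100.0 * count / n for count in (small, mid, large))
```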
We hypothesise that the impact pits observed in the laboratory were formed through a clustering 27 instability in the granular jets (Fig. 6 ). This lends credence to our hypothesis that the multiple detached pits surrounding many Martian linear gully terminal pits (Fig. 2a–d ) may be formed by a similar mechanism involving granular jets. We reason that at linear gully termini, the sublimating CO 2 may become unstable as blocks reach a terminal velocity (and greater drag force) at the lower dune slipface, which is reflected in the confinement of these detached pits to the lower ~200 m of the channel vicinity (Fig. 2a–d ). Granular clusters may become entrained within sublimating CO 2 gas jets as the block sublimates. We propose that these are splayed out radially from the terminal pit to form smaller, shallower detached pits upon return impact to the sandy surface, in a similar manner to the mechanism by which impact pits were observed to form in our laboratory experiments (Supplementary movie 4 ). However, our laboratory experiments required jetting of an endogenic origin, which resulted in the terraced collapsed pits seen in Fig. 3e . Similar morphologies have yet to be identified on Mars, but it is possible that inset pits seen inside terminal pits may denote locations where blocks have burrowed beneath the surface. Alternatively, inset pits could indicate new, inner paths of blocks within pre-existing channels. However, the ratio of collapsed to impact pits in our laboratory experiments ranged from 5.3–10 and the ratios of inset pits to the smallest detached pits in Russell Crater ranged from 4.3–8.1, so, certainly for the smaller detached pits in this setting, our proposed process is plausible. A detailed numerical model of jetting activity under Martian conditions would allow us to understand the effect of scaling on the proposed process. In one instance, a high albedo block was measured within a pit on Russell Crater megadune (Fig.
2a ) during MY 30, which disappeared the following year and hence was likely to be CO 2 10 . The pit was observed to widen by 2.1 m in MY 31 during the disappearance of the block. We measured the block in MY 30 to be 2.4 m wide, giving a pit-widening to block-width factor of 0.9. This is consistent with the degree to which we observed pits to erode in the laboratory setting, where the pit to block width ratio ranged from 1.08 to 1.72. Considering a horizontal uncertainty in this HiRISE image of 0.25 m/pixel, this observed pit widening by a high albedo block is consistent with a CO 2 block hypothesis. Considering our scaling arguments, it is likely that the high albedo object was a CO 2 block which eroded its surroundings to widen a pre-existing pit. The wide variation in aspect ratio found at both locations is consistent with a dry ice block hypothesis: blocks that naturally fall from over-steepened cornices or steep alcove walls will have a range of sizes and will not be symmetric, and thus they will form a wide variety of pit dimensions. Considering the average grain size range reported for the Bagnold dunes on Mars (200–300 μm) 32 , we used the values in Table 1 and our calculated scale factor of 0.61 (see Experimental Scaling) to estimate the block sizes needed to form our range of pits. Using our upper grain size ranges of 75–150 μm and 160–212 μm, we estimate that the block sizes needed to form the pits we have surveyed range from 1.2 m–6.5 m in Proctor Crater and 2.4 m–16.3 m in Russell Crater. High albedo blocks of dimensions between 1.6 and 6 m have been measured in our survey of Russell Crater; larger blocks have yet to be identified. The lower range of normalised pit to block width ratios is consistent with dimensions we have observed on Mars using HiRISE images in which high albedo blocks have been identified in linear gully channels in Matara Crater 21 and in channels and pits in Russell Crater 10 .
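The laboratory pit-to-block width ratios (1.08–1.72) can be used to bracket the width of a block from an observed pit width. This sketch deliberately omits the 0.61 grain-size scale factor applied in the full analysis, so it gives a first-order bound only; the sample pit width in the test is the maximum Russell Crater value quoted above:

```python
def block_width_bounds(pit_width, ratio_min=1.08, ratio_max=1.72):
    """
    Bracket the width of a CO2 block that could have formed a pit of
    the given width, using the laboratory pit-to-block width ratio
    range (1.08-1.72). Widths in metres; returns (lower, upper)
    bounds on block width. First-order estimate only: the grain-size
    scale factor used in the full analysis is not applied here.
    """
    return pit_width / ratio_max, pit_width / ratio_min
```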
The ratio of channel width to block width in Matara Crater ranges from 1.1 to 1.9 and, in Russell Crater, observed ratios were 1.7 to 2.0. Allowing for a measurement uncertainty of one pixel width (0.25 m), these values fall within our estimated range of 1.08–1.72. Although these channels had clearly been eroded prior to the observations of these blocks within them, we can speculate that blocks of similar size to those observed eroded them. Should further data be collected on block sizes at these locations, our laboratory data can be drawn upon to assess whether pit dimensions in the locality are likely to have been formed by these CO 2 ice blocks and perhaps by how much pit dimensions might grow seasonally.

Conclusion

Our experiments suggest that furrows and pits can be formed by the sublimation of CO 2 ice blocks. The differences in temperature, atmospheric density and pressure, and gravity can be accounted for in a scaling analysis by using different sized grains, but the difference in scale cannot easily be dealt with; thus, although our evidence is suggestive, it is not conclusive. The CO 2 block hypothesis 21 is the only current hypothesis that can explain present-day linear gully pit formation. We have shown that stationary sublimating CO 2 ice blocks in contact with porous, mobile material are capable of transporting sediment to form primary pits and surrounding levées. Our scaling arguments and morphological observations suggest that our primary pits may be analogous to Martian primary terminal pits in Russell, Matara and Proctor Craters. In particular, our observation of a high albedo block within a terminal pit, and of subsequent widening consistent with the ratios predicted from our laboratory data, suggests that linear gully pits may be formed and widened by CO 2 blocks. Additionally, we have observed that two ancillary pit types, collapsed pits and detached impact pits, can form by sublimating CO 2 ice blocks and subsurface jetting.
We have presented a new hypothesis for detached pit formation at linear gully termini. This hypothesis is consistent with the observed relationship between terminal pit area and the number of associated new detached pits forming seasonally in Russell and Matara Craters, and is supported by the laboratory observation that jetting activity can result in impact pits. Further work is required to appropriately address size scaling, and a detailed numerical model may give further insight. Cryoventing is the only hypothesis that has been offered for sand furrows on Mars, but prior to this study there was no physical evidence that a pressurised gas layer at a CO 2 ice and sand interface can form the complex patterns similar to those observed on Martian dunes. Our results suggest that sublimating CO 2 generates gas flows sufficiently powerful to mobilise grains on top of the incident ice and to add to the granular sheet via venting, a process which is hypothesised to form the dark fans accompanying sand furrows 12 . We have shown for the first time that a range of furrow morphologies can form by escaping pressurised gas at the interface between a granular surface and a CO 2 ice overburden, and that furrow pattern type is independent of the grain size distribution (within our grain size range) of the material into which it is eroded. The data suggest that cryoventing efficacy and furrow pattern type are limited by a pressure gradient provided by vent spacing. Additional data are required to assess the role of ice thickness and vent geometry in furrow formation. Our study has delivered, for the first time, physical evidence that (1) linear gully pits and (2) furrows, both active features observed to fade, extend and form in the contemporary Martian climate, may be forming via the action of sublimating CO 2 ice.
Researchers based millions of kilometres from Mars have unveiled new evidence for how contemporary features are formed on the Red Planet. Their innovative lab-based experiments on carbon dioxide (CO2) sublimation - the process by which a substance changes from a solid to a gas without an intermediate liquid phase - suggest the same process is responsible for altering the appearance of sand dunes on Mars. The research was led by a Trinity College Dublin team comprising PhD candidate in the School of Natural Sciences, Lauren Mc Keown, and Dr Mary Bourke, along with Professor Jim McElwaine of Durham University. Their work, which describes phenomena unlike anything seen on Earth, has just been published in Scientific Reports. Lauren Mc Keown said: "We've all heard the exciting news snippets about the evidence for water on Mars. However, the current Martian climate does not frequently support water in its liquid state—so it is important that we understand the role of other volatiles that are likely modifying Mars today." "Mars' atmosphere is composed of over 95% CO2, yet we know little about how it interacts with the surface of the planet. Mars has seasons, just like Earth, which means that in winter, a lot of the CO2 in the atmosphere changes state from a gas to a solid and is deposited onto the surface in that form. The process is then reversed in the spring, as the ice sublimates, and this seasonal interplay may be a really important geomorphic process." Dr Bourke added: "Several years ago I discovered unique markings on the surface of Martian sand dunes. I called them Sand Furrows as they were elongated shallow, networked features that formed and disappeared seasonally on Martian dunes. What was unusual about them was that they appeared to trend both up and down the dune slopes, which ruled out liquid water as the cause."

Linear gullies on a dune in Matara Crater, Mars. Red and white arrows point to pits.
Credit: NASA/JPL/University of Arizona

"At that time I proposed that they had been formed by cryo-venting—a process whereby pressurised CO2 gas beneath the seasonal ice deposit erodes complex patterns on the dune surface when the ice fractures and releases the gas in towering dust and gas geysers. I was delighted when Lauren joined the Earth and Planetary Surface Process Group in the Department of Geography to work on this phenomenon with Jim and myself. What was required was a demonstration of how sand would respond to sublimation of CO2 ice, and this published work is an important step in providing that required proof." The researchers designed and built a low humidity chamber and placed CO2 blocks on the granular surface. The experiments revealed that sublimating CO2 can form a range of furrow morphologies that are similar to those observed on Mars. Linear gullies are another example of active Martian features not found on Earth. They are long, sometimes sinuous, narrow carvings thought to form by CO2 ice blocks which fall from dune brinks and 'glide' downslope. Lauren Mc Keown said: "The difference in temperature between the sandy surface and the CO2 block will generate a vapor layer beneath the block, allowing it to levitate and maneuver downslope, in a similar manner to how pucks glide on an air-hockey table, carving a channel in its wake. At the terminus, the block will sublimate and erode a pit. It will then disappear without a trace other than the roughly circular depression beneath it." "While gullies on Earth are commonly formed by liquid water, they almost always terminate in debris aprons and not pits. The presence of pits therefore provides more support for a hypothesis whereby CO2 blocks are responsible for linear gullies."

Dendritic furrows formed by basal sublimation of a CO2 ice block in contact with a granular surface.
Credit: Lauren Mc Keown and Dr Mary Bourke, Trinity College Dublin

By sliding dry ice blocks onto the sand bed in the low humidity chamber, the group showed that stationary blocks could erode negative topography in the form of pits and deposit lateral levees. In some cases, blocks sublimated so rapidly that they burrowed beneath the subsurface and were swallowed up by the sand in under 60 seconds. Professor McElwaine said: "This process is really unlike anything seen to occur naturally on Earth - the bed appears fluidised and sand is kicked up in every direction. When we first observed this particular effect, it was a really exciting moment." By generating 3-D models of the modified bed in each case, pit dimensions could be used to predict the range of block sizes that would erode the pits seen on Mars, which vary in diameter from 1 m to up to 19 m. A pit on Russell Crater megadune on Mars was observed to grow within one Mars Year to an extent predicted by these calculations, following the observation of a block within it the previous year. The next phase of work, supported by Europlanet Horizon 2020 funding, will see the team head to the Open University Mars Chamber to assess the influence of Martian atmospheric conditions on these new geomorphic processes and test a numerical model developed by Professor McElwaine.
10.1038/s41598-017-14132-2
Earth
Scientists detect new landslides on U.S. West Coast
Yuankun Xu et al, Geologic controls of slow-moving landslides near the US West Coast, Landslides (2021). DOI: 10.1007/s10346-021-01732-3
http://dx.doi.org/10.1007/s10346-021-01732-3
https://phys.org/news/2021-09-scientists-landslides-west-coast.html
Abstract Slow-moving landslides, often with nearly imperceptible creeping motion, are an important landscape shaper and a dangerous natural hazard across the globe, yet their spatial distribution and geologic controls are still poorly known owing to a paucity of detailed, large-area observations. Here, we use interferometry of L-band satellite radar images to reveal 617 spatially large (4 × 10⁴–13 × 10⁶ m²) and presently active (2007–2019) slow-moving landslides near the populous US West Coast (only 4.6% of these slides were previously known) and provide evidence for their fundamental controls by bedrock lithology and vertical land motion. We found that slow-moving landslides are generally larger and more spatially frequent in homogeneous bedrock with low rock strength, and they are preferentially located on hillslopes with geologically recent uplift. Notably, landslide size and spatial density in the relatively weak metamorphic rocks and mélange (due to pervasive tectonically sheared discontinuities, foliation, and abundant clay minerals) were two times larger than those in sedimentary and igneous rocks, and the hillslopes with landslides were found to be uplifting approximately three times faster than the average for the whole region. These results suggest that slow-moving landslides can be effectively uncovered by satellite radar imagery and their occurrence and character may be anticipated from vertical land uplift and bedrock lithology. Hence, our study provides understanding critical for reducing landslide hazards and quantifying landslide impacts on landscape change. Introduction Landslides are a geologic process crucial for landscape evolution (Burbank et al. 1996; Kelsey and Bockheim 1994; Roering et al. 2009; Simoni et al.
2013), and as a natural hazard, landslides annually cause 3.5 billion dollars of property loss and 25–50 casualties in the USA alone (Spiker and Gori 2003). Locating presently active landslides is a critical step towards preventing their future hazards and forecasting their impact on the landscape. However, conventional landslide-identifying approaches that rely on geologic maps and citizen-reported events (Guzzetti et al. 2012; Highland and Bobrowsky 2008; Jones et al. 2019) could easily miss numerous active yet slowly moving slides that lack readily identifiable features (e.g., fresh headscarps) or occur in rarely accessed lands. Slow-moving landslides persistently damage infrastructure and imply a force imbalance of the hillslope (Highland and Bobrowsky 2008). Additional forces such as earthquake shaking, coastal and stream erosion, intense rainfall, and other natural or anthropogenic disturbance could shift their present creeping behavior into rapid movement and cause catastrophic damage (e.g., Intrieri et al. 2018; Kilburn and Petley 2003; Schulz and Wang 2014; Xu et al. 2020b). Discovering presently slow-moving landslides for future hazard prevention particularly requires approaches with high measurement accuracy and wide spatial coverage. However, few tools were available until the InSAR (Interferometric Synthetic Aperture Radar) method evolved into an effective means in the last two decades (Handwerger et al. 2019; Intrieri et al. 2018; Squarzoni et al. 2003; Xu et al. 2019, 2020b; Ye et al. 2004). InSAR utilizes interferometry of satellite-captured radar images (frequent repeated acquisitions since 1992) to achieve up to millimeter-level measurements of ground displacement along the radar line-of-sight (LOS) direction (Ferretti et al. 2007; Nishiguchi et al. 2017). Multiple studies have focused on the precipitation-driven short-timescale dynamics of presently active, slow-moving landslides (e.g., Bennett et al.
2016b; Handwerger et al. 2019; Kang et al. 2021; Mackey et al. 2009; Squarzoni et al. 2003; Xu et al. 2020b; Ye et al. 2004); however, their geologic controls remain poorly understood owing to a lack of detailed, large-scale evidence, and such knowledge is essential for deciphering their characteristics and for preventing future hazards. Spatially large, slow-moving landslides are generally deep-seated (meters to hundreds of meters) (Bonzanigo et al. 2007; Highland and Bobrowsky 2008; Larsen et al. 2010) and may have been active for hundreds to thousands of years (Bonzanigo et al. 2007; Bovis and Jones 1992; Kelsey and Bockheim 1994; Mackey et al. 2009; Varnes and Savage 1996). Hence, their occurrence could be controlled by the lithology and structure of the underlying bedrock and by geologic processes (Clarke and Burbank 2010; Cruden and Varnes 1996; Lambe and Whitman 1969; Roering et al. 2005). In addition, vertical uplift (i.e., any upward movement of the land surface) over a geologic timescale (10³–10⁵ years) could gradually alter the force balance of hillslopes and regulate the denudation process (Burbank et al. 1996; Bennett et al. 2016a; Larsen and Montgomery 2012; Roering et al. 2015), thereby potentially modulating the occurrence and kinematics of long-term creeping landslides. Here, we apply the high-accuracy InSAR method over the entire US West Coast states (~8.6 × 10⁵ km²) to discover large, presently active landslides in both the high mountains and coastal neighborhoods inhabited by 47.8 million people (2019 census; USCB 2019). Based on the large-scale observations, we tested our hypotheses that the spatial density and size of slow-moving landslides are significantly controlled by bedrock type and that their occurrence and persistent motion reflect long-term land uplift.
Materials and methods SAR interferogram generation and unwrapping We used radar interferometry of both the ALOS PALSAR (Advanced Land Observing Satellite–Phased Array type L-band Synthetic Aperture Radar) images from 2007 to 2011 and ALOS-2 PALSAR-2 images from 2015 to 2019 for identifying landslides near the US West Coast. The L-band SAR images were primarily utilized over the relatively densely vegetated US West Coast because of the L-band sensor's capability in vegetation penetration. SAR interferograms were generated by differencing the phase measurements of two SAR images. For each SAR interferogram, the interferometric phase of a SAR resolution element, φ, is composed of multiple independent components: $$\phi =W\left\{{\phi }_{\mathrm{def}}+{\phi }_{\mathrm{dem}}+{\phi }_{\mathrm{orb}}+{\phi }_{\mathrm{atm}}+{\phi }_{\mathrm{n}}\right\}$$ (1) where φ_def is the phase change due to movement of the pixel in the satellite radar line-of-sight direction; φ_dem is the DEM (digital elevation model) error sourcing from the difference between the DEM height and the elevation of average scatterers in the resolution element; φ_orb is the residual phase due to orbit errors; φ_atm is the difference in atmospheric phase delay between passes; φ_n is the phase noise due to both temporal variability in scattering and thermal noise; and W{⋅} is the wrapping operator that drops whole phase cycles (2π), as only a fractional part of a cycle can be measured with SAR interferometry. In order to obtain φ_def, the other contributing terms, φ_dem, φ_orb, φ_atm, and φ_n, must be removed or reduced.
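As a concrete illustration of the wrapping operator W{⋅} in Eq. (1), the following minimal Python sketch (not the authors' processing code; the phase components are made-up values) wraps a summed phase into a single cycle:

```python
import numpy as np

def wrap_phase(phi):
    """Wrap a phase (radians) into (-pi, pi], mimicking the operator W{.}
    in Eq. (1), which drops whole 2*pi cycles."""
    return np.angle(np.exp(1j * np.asarray(phi, dtype=float)))

# Illustrative (made-up) phase components in radians: deformation, DEM
# error, orbit error, atmospheric delay, and noise.
phi_def, phi_dem, phi_orb, phi_atm, phi_n = 7.0, 0.3, 0.1, 0.4, 0.05
measured = wrap_phase(phi_def + phi_dem + phi_orb + phi_atm + phi_n)
# Only the fractional part of a cycle survives: 7.85 rad wraps to ~1.57 rad.
```

This is why an interferogram alone gives φ only modulo 2π, and why phase unwrapping (here, the minimum cost flow approach) is needed before displacements can be read off.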
We set multi-looking factors of 3 × 7 (range by azimuth) and 2 × 4 for ALOS PALSAR and ALOS-2 PALSAR-2 images, respectively, in order to reduce the approximately Gaussian-distributed data noise φ_n (Hanssen 2001). We also minimized the DEM contributions φ_dem by using the 1-arcsec SRTM DEMs (Farr et al. 2007) and reduced the orbit-related artifacts φ_orb using quadratic fitting (Fattahi and Amelung 2014). The stratified atmospheric artifacts related to regional topography were reduced by using a linear fitting, and other large-spatial-scale phase artifacts such as tropospheric noise were largely reduced by selecting localized, stable, and highly coherent reference regions near the landslides. We unwrapped the SAR interferograms through the minimum cost flow approach (Costantini 1998) using the GAMMA software (Werner et al. 2000) and set a coherence threshold of 0.4 for both ALOS and ALOS-2 interferograms. The accuracy of the InSAR measurement can be quantified based on the Cramer-Rao bound (Rodriguez and Martin 1992): $$\sigma =\frac{\lambda }{4\uppi }\sqrt{\frac{1}{2NM}\frac{\left(1-{\gamma }^{2}\right)}{{\gamma }^{2}}}$$ (2) where σ is the uncertainty of InSAR measurements, λ the radar wavelength, N and M the window sizes for the coherence estimation, and γ the coherence. We used a 32 × 32 moving window for the coherence estimation, which corresponds to a minimum measurement uncertainty of ~1.4 mm for both ALOS and ALOS-2 data. Identification of active landslides Active landslides were identified based on the ground motion captured by SAR interferograms. All of the interferograms with good coherence (greater than 0.4) and with various temporal (maximum timespan of 2 years) and perpendicular baselines were utilized to cross-validate the identified landslides.
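Eq. (2) is straightforward to evaluate numerically. The sketch below is an illustration, not the authors' code; it assumes the ~23.6 cm L-band wavelength of ALOS PALSAR/PALSAR-2 and the paper's 32 × 32 coherence-estimation window, and the uncertainty the authors report additionally depends on processing details:

```python
import math

def cramer_rao_sigma(coherence, wavelength=0.236, n=32, m=32):
    """Cramer-Rao lower bound (Eq. 2) on the displacement uncertainty
    (metres) of an InSAR measurement with the given coherence gamma,
    radar wavelength lambda (m), and an N x M estimation window.
    wavelength=0.236 m approximates the L-band ALOS/ALOS-2 sensors
    (an assumed value used here for illustration)."""
    g = coherence
    return (wavelength / (4 * math.pi)) * math.sqrt(
        (1 - g ** 2) / (2 * n * m * g ** 2))

# At the coherence threshold of 0.4, the bound is of millimetre order,
# and it shrinks rapidly as coherence improves.
sigma_04 = cramer_rao_sigma(0.4)   # ~1e-3 m
sigma_08 = cramer_rao_sigma(0.8)
```

The bound diverges as γ → 0 and vanishes as γ → 1, which is why the coherence threshold of 0.4 directly caps the worst-case measurement noise.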
We also used the 10-m-resolution DEMs (USGS 2020b) from the US Geological Survey and the high-resolution true color image time series from Google Earth to exclude non-landsliding displacement signals dominated by processes such as vegetation regrowth after clear cut, water level change in wetlands, underground mining and oil exploitation, and urban construction. Note that rapid landslides such as rock avalanches and debris flows that alter original ground features significantly, leading to complete coherence loss, are not identifiable from SAR interferograms. The active landslide boundaries were first outlined by thresholding radar LOS deformation (greater than 5 mm), and then manually revised by integrating information from Google Earth optical images and the 10-m-resolution DEMs. Bedrock of the landslides Bedrock formations of the identified landslides were derived from the 1:50,000 to 1:1,000,000 scale State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States (ver. 1.1, August 2017) (Horton et al. 2017). We combined the results for essentially repeated geologic formations (e.g., multiple "basalts") and used adjacent hillslope material to revise the formation for eleven landslides that were supposedly in alluvium but actually appeared to have been deposited on alluvium (Supplementary Material Table S1). Landslide area-volume scaling and average slope angle A power-law relationship between landslide volume, V, and landslide surface area, A, was used to estimate landslide volumes in varied bedrock (Larsen et al. 2010): $$V\propto {A}^{\gamma }$$ (3) where γ is a scaling exponent. We used γ = 1.6 for the identified large, deep-seated landslides (Larsen et al. 2010). Surface areas of landslides were computed from the landslide boundaries (see data release in Xu et al. 2020b) outlined from SAR interferograms. The slope angle of each DEM cell element was derived from the 10-m-resolution DEMs.
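The area-volume scaling in Eq. (3) can be applied as follows. Since the paper uses only the proportionality, the prefactor below is an assumed placeholder, and only ratios of volumes are meaningful:

```python
def landslide_volume(area_m2, gamma=1.6, prefactor=1.0):
    """Power-law estimate of landslide volume, V = c * A**gamma (Eq. 3),
    with gamma = 1.6 for large, deep-seated landslides (Larsen et al.
    2010). The prefactor c is an assumed placeholder; only ratios of
    volumes are independent of it."""
    return prefactor * area_m2 ** gamma

# Relative mobilized sediment of two landslides whose areas differ by 2x:
ratio = landslide_volume(2.0e5) / landslide_volume(1.0e5)  # = 2**1.6, ~3.03
```

Because γ > 1, volume grows faster than area, so the formation-by-formation volume comparisons later in the paper are dominated by the largest slides.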
Each identified landslide spatially covers multiple DEM cell elements, and we define the average slope angle of a landslide as the arithmetic mean of all cell elements within the landslide boundary. Land uplift rate Land uplift rates over the US West Coast were obtained by evaluating published literature on geologically and historically recent vertical land movement. This literature (Table 1) includes studies emphasizing land surface surveying (Amos et al. 2014; Hammond et al. 2016; Levy 2019; Yousefi et al. 2020) during recent times, or geologic studies generally extending from recent times into the Quaternary and Neogene Periods (Amos et al. 2014; Anderson 2008; Barth and May 1992; Bennett et al. 2016a; Hellwig 2010; House 1999; Jones 1987; Kelsey and Bockheim 1994; Kobor and Roering 2004; Levy 2019; Lock et al. 2006; Machette et al. 2008; Muhs et al. 1992; Pazzaglia and Brandon 2001; Penserini et al. 2017; Reiners et al. 2002; Schweickert 2009; Spotila et al. 1998; Unruh 1991; Yousefi et al. 2020). Longer-term studies emphasized fluvial and coastal geomorphology, often with cosmogenic nuclide and/or radionuclide dating, thermochronology, and modeling. We interpolated these pointwise uplift data into a gridded raster file using inverse distance weighting, and we clipped the gridded data to within 100 km of the points, to land sloped more steeply than 5°, and by the geographical boundary of the US West Coast states. Table 1 Source literature of the uplift data for the US West Coast Results Discovery of actively slow-moving landslides We processed 6589 scenes of ascending ALOS PALSAR images acquired between 2007 and 2011, and 484 scenes of ALOS-2 PALSAR-2 images acquired between 2015 and 2019 using the InSAR method to discover large, active landslides over the entire US West Coast states (Fig. 1a).
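The inverse distance weighting used above to grid the 79 point-wise uplift rates can be sketched as follows; this is a minimal illustration with assumed planar metre coordinates and the paper's 100 km clipping radius, not the authors' code:

```python
import numpy as np

def idw_grid(sites, rates, queries, power=2.0, clip_dist=100e3):
    """Inverse-distance-weighted interpolation of point-wise uplift rates
    (mm/yr) onto query locations. Coordinates are assumed planar, in
    metres; queries farther than clip_dist from every site (100 km here,
    matching the paper's clipping) return NaN."""
    sites = np.asarray(sites, dtype=float)
    rates = np.asarray(rates, dtype=float)
    out = np.full(len(queries), np.nan)
    for i, q in enumerate(np.asarray(queries, dtype=float)):
        d = np.linalg.norm(sites - q, axis=1)
        if d.min() > clip_dist:
            continue                    # outside the 100 km clip
        if d.min() == 0.0:
            out[i] = rates[d.argmin()]  # query coincides with a site
        else:
            w = d ** -power
            out[i] = np.sum(w * rates) / np.sum(w)
    return out
```

A query midway between two sites with rates 0 and 2 mm/yr, for example, receives the symmetric value 1 mm/yr, while a query outside the clipping radius stays empty.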
Active landslides during the observation period were identified from deformation signals captured by the differential InSAR interferograms, assisted by 10-m-resolution DEMs (digital elevation models) (USGS 2020b) and high-resolution optical satellite images. Fig. 1 SAR imagery coverage and comparison of InSAR-captured landslides and the national landslide inventory. (a) Gray-shaded rectangles illustrate the spatial extent of the ascending ALOS PALSAR images used (2007–2011), and the white-shaded rectangles represent the spatial coverage of the ALOS-2 PALSAR-2 images (2015–2019). The ALOS images spatially cover the entire US West Coast, and the ALOS-2 images are primarily distributed over the western regions and cover 97.6% of the identified landslides. (b) The InSAR-captured landslides denote active landslides detected by ALOS (2007–2011) and/or ALOS-2 (2015–2019) radar images. The landslide inventory (Jones et al. 2019) was compiled from multiple sources and includes landslides recorded between 1932 and 2018, but only as point locations. We identified 617 landslides in total, of which 375 were active between 2007 and 2011, 471 were active between 2015 and 2019, and 229 were active during both the 2007–2011 and 2015–2019 periods (the exact active areas might slightly vary) (Figs. 1 and 2). Spatially, the landslides are spread out over the US West Coast states, with concentrations in the mountain ranges of western Washington, southwestern Oregon, and northwestern California (Fig. 2b). Multiple towns and roads, especially in northern Washington, northwestern California, and the vicinity of the coastline, are within 0.5–5 km of the identified landslides (Figs. 1 and 2), and could be threatened by future failure events that initiate rapid slides and flows that travel kilometers downslope/downstream (Legros 2002). Moreover, comparison with Google Earth optical images reveals numerous structures located on the identified active landslides.
In addition to the 617 landslides, we also identified 89 active rock glaciers that are predominantly distributed along the high mountain ridges in eastern California (Fig. 2). Overall, these InSAR-captured active landslides are spatially large and some are on relatively steep slopes, implying high hazard potential for the vicinity during possible future runout events. Spatial sizes of the identified landslides range from 4 × 10⁴ m² to 13 × 10⁶ m², and 88.7% are larger than 10⁵ m². The majority of the landslides (97.1%) have slope angles between 5° and 30°, and 16.8% (106 slides) are steeper than 20° (Fig. 3). Fig. 2 Active landslides detected by radar satellites. (a) Spatial distribution of the detected landslides and towns in the US West Coast. The states are annotated as WA, Washington; OR, Oregon; and CA, California. Geographical locations of towns were obtained from the US Census Bureau (2017 census; USCB 2020). (b) Hillshade map produced from the 10-m DEMs (USGS 2020b). (c) Generalized geological map produced from the SGMC geodatabase (Horton et al. 2017). Fig. 3 Surface geometry of the identified landslides. (a) Probability distribution of active areas of the 617 slow-moving landslides. The figure in the upper-right corner is an enlarged illustration of landslides larger than 4 km². (b) Probability distribution of average slope angles (derived from 10-m DEMs) of the identified landslides. Of the 617 detected landslides, only 29 (comprising 4.7%) are included in the national landslide geodatabase (Jones et al. 2019), which is a compilation of currently existing, non-systematically mapped global, national, and regional-level landslide inventories (Fig. 1b). The 89 active rock glaciers were also absent. A key reason that most of our identified landslides are missing from the geodatabase is that many of these landslide inventories source from human-reported events and geologic maps (Jones et al.
2019), yet only landslides with historical failures or obvious geomorphic signatures would typically have been noticed and reported. Consequently, long-term, slow or creeping landslide movements are less readily recognized (Highland and Bobrowsky 2008; Keefer and Johnson 1983) and so are relatively infrequently discovered. Indeed, our results show that many landslides that we discovered are nearly indistinguishable from their neighboring stable hillslopes on the high-resolution optical images, but their active slow motions (4–17 cm/year along the radar line-of-sight direction) were clearly captured and measured by the InSAR interferograms (e.g., Fig. 4). Downslope movement rates of the identified landslides range from millimeters to several meters per year depending on the exact location within a landslide, which corresponds to the categories spanning from extremely slow to slow landslides (less than 5 × 10⁻³ m/s) as defined in Cruden and Varnes (1996). Note that the free and frequently acquired SAR datasets (3–60 repeated acquisitions per year since 1992) also allow identification of the presently active section of a landslide, which is less achievable from LiDAR hillshade maps. In addition, many landslides recorded in the existing geodatabase (Jones et al. 2019) since 1932 were one-time failures such as flows and avalanches that will probably not recur (Cruden and Varnes 1996), while the InSAR-captured large, slow-moving slides are likely to remain active in the near future (Bovis and Jones 1992; Kelsey and Bockheim 1994; Mackey et al. 2009; Varnes and Savage 1996) and pose continued threats. Fig. 4 Examples of active landslides discovered by SAR interferograms. This figure illustrates ten exemplary pairs of presently active slow-moving landslides that were generally unidentifiable from submeter-resolution optical images (columns 1 and 3) but were clearly revealed by SAR interferograms (columns 2 and 4).
These ten landslides were distributed over Washington, Oregon, and California (geographical coordinates are shown in degrees beside each landslide). Red polygons outline the landslide extents, and white arrows mark the downslope directions. All the optical images were acquired in 2019 and accessed from Google Earth. All the SAR interferograms were produced from ALOS-2 SAR images acquired between May 2018 and August 2019. One fringe (a change from −π to π) on the SAR interferograms represents a line-of-sight movement of 12.1 cm. Bedrock control of slow-moving landslides Using the SGMC (State Geologic Map Compilation) geodatabase of the conterminous United States (Horton et al. 2017), we statistically analyzed the bedrock underlying the identified slow-moving landslides. Over the entire study area, 102 out of the total 398 bedrock formations contain landslides, and 16 formations contain more than 10 landslides. Statistically, these 16 formations harbor 484 of the identified 617 landslides (78.4%). We selected only these 16 formations for detailed statistical analyses and categorized them into four distinct types: metamorphic rocks, mélange, sedimentary rocks, and igneous rocks (including volcanic flows) (Fig. 2c). Note that in the analysis, we utilized adjacent hillslope material to revise the formation for 11 landslides that were supposedly in unconsolidated materials (see section "Bedrock of the landslides"). In particular, we investigated the spatial density and spatial size of the landslides with regard to various lithologies. Here, spatial density is defined as the ratio of the landslide area overlaying a specific bedrock formation to the total area of that bedrock formation. Our results demonstrate that both the spatial density and size of the identified slow-moving landslides were strongly controlled by their lithology.
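The spatial-density metric defined above reduces to a simple ratio per formation; a toy sketch with made-up numbers (not the paper's data):

```python
def spatial_density(landslide_areas_m2, formation_area_km2):
    """Landslide spatial density of a bedrock formation, in m^2 of
    active landslide per km^2 of formation, as defined in the text."""
    return sum(landslide_areas_m2) / formation_area_km2

# Made-up example: three landslides totalling 4.5e6 m^2 on a formation
# covering 300 km^2 gives 15,000 m^2/km^2, comparable in magnitude to
# the paper's value for metamorphic rocks.
density = spatial_density([1.0e6, 1.5e6, 2.0e6], 300.0)
```

Note the mixed units (m² per km²) keep the numbers readable, since active landslide area is a small fraction of any formation's extent.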
By bedrock type, the greatest spatial density was found in metamorphic rocks (15,300 m²/km²), followed by mélange (5400 m²/km²), sedimentary rocks (3200 m²/km²), and igneous rocks (1300 m²/km²) (Fig. 5). Similar trends were also found in their spatial sizes. The largest mean size was in metamorphic rocks (1.52 km²), then mélange (0.6 km²), and similar in sedimentary (0.44 km²) and igneous rocks (0.43 km²) (Fig. 6). Overall, landslides were largest and most frequent in metamorphic rocks followed by mélange, and the spatial density and mean size were 3 to 12 times greater in metamorphic rocks than in sedimentary and igneous rocks. The results also indicate that these presently active landslides present the greatest hazards on, and most extensively modify the landscapes of, metamorphic and mélange bedrock. Assuming similar area-volume scaling (Larsen et al. 2010) for landslides in each of the bedrock types, the results indicate that slow-moving landslides in mélange have mobilized 1.4, 8.6, and 10.6 times the sediment of landslides in metamorphic, igneous, and sedimentary rocks, respectively. Fig. 5 Landslide spatial density by bedrock. (a) Average landslide spatial densities by the 16 different formations on which more than ten landslides were identified. For descriptions of the bedrock formations, refer to Supplementary Material Table S1. (b) Average landslide spatial densities by the four general bedrock types. Fig. 6 Landslide size by bedrock. (a) Average landslide size by the 16 formations. For descriptions of the formations, refer to Supplementary Material Table S1.
(b) Average landslide size by the four general bedrock types. The greater size and density of slow-moving landslides in metamorphic rocks and mélange compared to igneous and sedimentary rocks may partly result from generally lower rock mass strength due to pervasive discontinuities in foliated and tectonically sheared metamorphic rocks and mélange (Cruden and Varnes 1996), as well as the relatively high abundance of clay minerals in these altered rocks (Lambe and Whitman 1969; Schmidt and Montgomery 1995). In addition, the igneous rocks in which landslides were identified were mostly andesite and basalt flows (Supplementary Material Table S1). Volcanic flows and sedimentary rocks are likely to have spatially extensive discontinuities between beds and flow units, and relatively high anisotropy of material properties because of their layered nature (Jaeger et al. 2007). Such discontinuities and anisotropy are relatively lacking from most metamorphic and mélange rock formations (Jaeger et al. 2007). Shallower and therefore smaller landslides are more likely in materials with such anisotropy (Cruden and Varnes 1996), whereas deeper and therefore larger landslides are more likely in more isotropic materials (Cruden and Varnes 1996), such as mélange and metamorphic rocks. Landsliding contributed by land uplift We investigated how vertical land motion may relate to the identified slow-moving landslides by incorporating vertical motion data for the study area from radioisotope dating, modeling, and recent GPS observations (Table 1). We expect that land uplift results in and sustains continuous landsliding because uplift creates topographic relief, resulting in stream downcutting and hillslope instability (Bennett et al. 2016a; Burbank et al. 1996; Cruden and Varnes 1996; Lambe and Whitman 1969; Roering et al. 2005; Larsen and Montgomery 2012).
Slow-moving landslides identifiable from InSAR may be very long lived (10²–10⁴ years) (Bovis and Jones 1992; Keefer and Johnson 1983; Mackey et al. 2009; Varnes and Savage 1996) and usually continue moving during dry periods or reactivate thereafter (Bennett et al. 2016b; Bovis and Jones 1992; Coe 2012; Skempton et al. 1989); thus, their occurrence and persistent long-term creeping motions have most likely been driven and/or sustained by geologically recent (10³–10⁵ years) uplift. However, although less sensitive to recent rainfall than small landslides, large landslides are strongly modulated by precipitation on a short timescale, such as seasonal movement (Bennett et al. 2016b; Coe 2012). Here, we only focus on the potential contributions from long-term land uplift; the short-timescale hydrological contributions are detailed in the "Discussion" section. Land surface uplift measurements from a total of 79 sites over the study area were converted to gridded data using inverse distance weighted interpolation in order to compare uplift rates at landslide locations to those at stable regions. We excluded the regions with slope angles less than 5° in the analysis, as our observations show that landslides rarely occur in such flat terrain (Fig. 2). Our analyses reveal that the 617 landslides and the 89 rock glaciers were geographically related to geologic uplift. Overall, the rapidly uplifting northwestern Washington, southwestern Oregon, northwestern California, coastal regions of southern California, and the Sierra Nevada of middle-east California all saw a great number of active landslides or rock glaciers, while the subsiding middle-west Oregon, middle-west California, and the southern end of the Sierra Nevada (middle-east California) contained almost no identified landslides (Fig. 7).
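The comparison that follows (mean uplift at landslide cells versus the whole region, restricted to slopes above a threshold) can be expressed compactly; the sketch below uses synthetic grids, since the gridded data themselves are not reproduced here:

```python
import numpy as np

def mean_uplift(uplift_mm_yr, slope_deg, landslide_mask=None,
                slope_threshold=5.0):
    """Mean uplift rate over grid cells steeper than slope_threshold,
    optionally restricted to landslide cells, mirroring the masking
    described in the text (synthetic-data illustration)."""
    keep = slope_deg > slope_threshold
    if landslide_mask is not None:
        keep &= landslide_mask
    return float(np.nanmean(np.where(keep, uplift_mm_yr, np.nan)))

# Synthetic 2 x 2 grids: only cells steeper than 5 degrees count, and the
# single landslide cell sits on the fastest-uplifting hillslope.
uplift = np.array([[0.2, 0.9], [0.1, 0.4]])
slope = np.array([[10.0, 20.0], [3.0, 15.0]])
slides = np.array([[False, True], [False, False]])
regional = mean_uplift(uplift, slope)           # mean of 0.2, 0.9, 0.4
at_slides = mean_uplift(uplift, slope, slides)  # 0.9
```

Varying `slope_threshold` reproduces the style of sensitivity test reported in Table 2, where the landslide-versus-region contrast persists across thresholds.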
Quantitatively, the uplift rates at the active landslides and rock glaciers average 0.83 mm/year, three times higher than the mean rate of 0.27 mm/year for the whole region (Table 2). The results are also insensitive to the excluded flat regions: thresholding slope angles at 0°, 10°, and 16° would yield mean uplift rates of 0.79 mm/year over 0.12 mm/year (landslides versus the whole region), 0.83 mm/year over 0.30 mm/year, and 0.82 mm/year over 0.32 mm/year, respectively (Table 2). All of the results provide evidence that the identified slow-moving landslides were preferentially located in areas with accelerated, geologically recent uplift. We expect that rapid and/or small landslides similarly collocate with accelerated uplift, but InSAR does not resolve rapid and/or small landslides well. Fig. 7 Vertical land motions near the US West Coast. (a) Vertical uplift (green circles) and subsidence (red diamonds) rates of the 79 sites. (b) An interpolated map produced from the point-wise measurements. Only hillslopes steeper than 5° and within 100 km of the measurement sites are shown in the figure. Table 2 Uplift rates by excluding flat regions. Regions with slopes less steep than the slope angle threshold (the first column of the table) were excluded in the corresponding analyses. Discussion Landslide identification using radar interferometry Despite its high efficiency and effectiveness, landslide mapping using InSAR faces a few challenges. First, InSAR is less sensitive to landslide motions that are oriented perpendicular to the radar look direction. Mountain ranges near the US West Coast are dominantly north–south oriented and have formed landslides that are mostly visible to the approximately west/east-looking radar sensor.
We also utilized SAR interferograms spanning as long as 2 years for landslide identification; such a long timespan allows landslides to accumulate large displacements and become more clearly identifiable on SAR interferograms. Second, coherence loss, particularly in densely forested regions, may hinder InSAR's capability to reveal active landslides because of the induced high background noise level. In general, the longer-wavelength L-band imagery we utilized in this study allows better vegetation penetration than X- and C-band data and therefore was able to produce less noisy SAR interferograms for landslide identification over the relatively densely vegetated US West Coast (e.g., Xu et al. 2021). Note that short-wavelength X- and C-band SAR images possess better sensitivity to ground deformation than L-band data and hence may perform well in urban environments. Third, small and/or catastrophic landslides are highly challenging to capture with InSAR. Small landslides occupy only a few pixels on SAR interferograms and may produce deformation signals indistinguishable from localized background noise. Catastrophic landslides such as debris flows often alter the landslide surface considerably, leading to severe coherence loss, and hence cannot be measured by InSAR. However, large catastrophic landslides may be mapped with SAR intensity or coherence images (e.g., Jung and Yun 2020; Plank et al. 2016; Xu et al. 2021). In this study, we focused on large, slow-moving landslides, and therefore the small or catastrophic landslides potentially undetected by InSAR over the eastern mountains of the study region were not specifically considered. Comparison with the US Geological Survey (USGS) landslide inventory (Fig. 1) shows that our InSAR-based mapping captured far fewer active landslides in northwestern Oregon and the Sierra Nevada in California.
The most likely reason is that the USGS inventory comprises many small and/or catastrophic landslides, which are challenging to detect with InSAR. Another potential reason is that some landslides were no longer active during our InSAR observation periods from 2007 to 2011 and from 2015 to 2019. The data gap between 2011 and 2015 results from a lack of free L-band SAR data. Additionally, our InSAR observations only mapped large, active rock glaciers in the study region; a more complete global rock glacier database is available (GLIMS and NSIDC 2018). Most rock glaciers are identifiable from optical imagery based on their distinctive geomorphological features. Geologic impacts on landslide character and kinematics We found that bedrock lithology exerts significant control on both the spatial density and size of the slow-moving landslides. Metamorphic rocks and mélange, which have relatively homogeneous composition and discontinuity distribution, high clay content, and relatively low shear strength, are most likely to harbor widespread, deep, and large slow-moving landslides. In contrast, sedimentary and igneous flow rocks, which have strength and hydrologic anisotropy and relatively high shear strength, tend to produce relatively sparse, shallow, and small slow-moving landslides. In general, bedrock weathering and fracturing also contribute to landslide occurrence by reducing rock mass strength. Our observations also provide evidence that geologic uplift is a crucial contributor to the occurrence and long-term creeping behavior of the slow-moving landslides. Both the identified active rock glaciers and slow-moving landslides are predominantly distributed over hillslopes with geologically recent (10³–10⁵ years), accelerated uplift, but barely observed in geologically subsiding terrains, implying a fundamental control from vertical land motion.
The contribution from land uplift is a gradually cumulative effect, and its signal can be overwhelmed and clouded by other short-timescale factors (particularly precipitation). Long-term land uplift creates mountains, resulting in hillslope instability, and landsliding is the process by which a hillslope restabilizes. Hence, land uplift essentially results in mountain landslides, though precipitation is often seen as the "trigger" for landslide initiation and seasonal acceleration. In addition, uneven land uplift rates may create geological structures such as faults and folds, which could also affect landslide occurrence. Rock glaciers often contain ice cores, and their current activity is potentially dominated by air temperature change; however, land uplift over a geological timescale may also have contributed to their movement, because land uplift together with stream and glacial erosion created the steepened hillslopes on which rock glaciers are more likely to move. Hydrological impacts on landslide motion On an annual scale, precipitation is widely recognized as the driver for seasonal acceleration and deceleration of slow-moving landslides (e.g., Bennett et al. 2016b; Handwerger et al. 2019; Squarzoni et al. 2003; Xu et al. 2019; Xu et al. 2020a, b; Ye et al. 2004). However, precipitation may not be the only reason a landslide initiates or a slow-moving landslide stays constantly active for hundreds of years (Bonzanigo et al. 2007; Kelsey and Bockheim 1994; Roering et al. 2015). We compared the 30-year average precipitation (1981–2010) with the observed landslide locations (Fig. 8a) and found that the precipitation amount at those locations is highly variable. Overall, 75% of the identified large, slowly moving landslides are located in mountain ranges that receive relatively abundant rainfall (≥2000 mm).
However, numerous exceptions were found in central Washington and southwestern California, where relatively dry lands (approximately 400 mm of annual rainfall) produced about 90 landslides. Moreover, the rainfall-abundant (over 2500 mm) southern Cascade Range and northern Coastal Ranges of Oregon included only 12 landslides, far fewer than northwestern California, where 1800 mm of annual rainfall produced 484 landslides (Fig. 8). In addition, we compared the identified landslides with the average excess precipitation between 2016 and 2019 (Fig. 8b). Here, excess precipitation is defined as the difference between annual precipitation and the 30-year average. The results show that numerous landslides, particularly in southern Washington and southwestern Oregon, remained active even during the historically dry years between 2016 and 2019. Consequently, precipitation alone cannot fully explain the spatial distribution of the identified slow-moving landslides. Fig. 8 Comparison of landslide distribution and precipitation near the US West Coast. (a) 30-year average precipitation from 1981 to 2010 (PRISM Climate Group, 2021). The red polygons depict active landslides captured by ALOS and ALOS-2 images, and the magenta polygons depict active rock glaciers captured by ALOS imagery. (b) Excess precipitation (average annual precipitation minus the 30-year average) from 2017 to 2019. The green polygons depict only landslides that were active between 2017 and 2019. The precipitation distribution near the US West Coast is not independent of land uplift. In fact, annual precipitation positively correlates with elevation (Daly et al. 2017) because warm air coming from the Pacific Ocean condenses to form cloud droplets while climbing the high mountains and produces precipitation. As evidenced in Fig. 8, heavy precipitation dominantly falls on the high mountains of the Coastal Ranges, Cascade Range, and Sierra Nevada.
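The excess-precipitation metric used in Fig. 8b is simple bookkeeping: annual precipitation minus the 30-year (1981–2010) normal at the same location. A minimal sketch with made-up illustrative values (not data from the study):

```python
# Excess precipitation = annual precipitation minus the 30-year normal.
# All numbers below are illustrative placeholders, not values from the study.
normal_30yr = 1800.0                                  # mm, 30-year average at a site
annual = {2017: 2100.0, 2018: 1450.0, 2019: 1700.0}   # mm of precipitation per year

excess = {yr: p - normal_30yr for yr, p in annual.items()}
dry_years = [yr for yr, e in excess.items() if e < 0]  # years drier than normal

print(excess)       # {2017: 300.0, 2018: -350.0, 2019: -100.0}
print(dry_years)    # [2018, 2019]
```

A landslide polygon that keeps deforming through years with negative excess (like 2018–2019 here) is the kind of observation the paper uses to argue that precipitation alone cannot explain sustained activity.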
Consequently, land uplift not only leads to landsliding by creating high relief but also contributes to hillslope instability by increasing precipitation over a geological timescale. In addition, the precipitation–elevation relationship indicates that the correlation of landslide locations with precipitation may in part result from their correlation with mountain topography, where relatively steep hillslopes reside (see Fig. 2b). Implications for landslide and geomorphic studies Failure events initiated from slow-moving landslides have caused considerable socioeconomic loss globally in recent decades (Froude and Petley 2018; Intrieri et al. 2018; Kilburn and Petley 2003; Schuster and Highland 2001; Xu et al. 2020b), and many damages (especially casualties) could have been avoided if the precursory slow motions had been revealed prior to the catastrophes. The routinely acquired (as frequently as every 6 days; ESA 2020) and globally covering satellite radar images could prove valuable in uncovering such presently active landslides to mitigate future hazards, especially in light of the predicted increase in landslide activity owing to global climate change and expanding anthropogenic activities (Froude and Petley 2018; Gariano and Guzzetti 2016). In addition, our finding of fundamental controls on slow-moving landslides by bedrock and vertical land motion could offer novel insights into landslide susceptibility forecasting and landform evolution studies. Globally, the geologically recent uplifts in the Himalayan mountains (Asia) (Ader et al. 2012), the Alps (Europe) (Sternai 2019), the Pacific West Coast (North America) (Muhs et al. 1992), and the Andes (South America) (Armijo et al. 2015) are expected to fuel continued landslide hazards and intensify geomorphological change. However, regional tectonic subsidence within these mountain ranges may conversely attenuate local landslide activity.
Conclusions We discovered 617 active, large, potentially dangerous landslides in the US West Coast states, 588 of which are missing from existing landslide inventories, which are sourced from non-systematically mapped and compiled geologic maps, documentation of precipitation events, and citizen reports (Jones et al. 2019). We found that high-accuracy InSAR is an effective tool for uncovering their locations, boundaries, and motions. Our study also suggests that bedrock type exerts fundamental control on landslide size and spatial density in general. The relatively weak and homogeneous metamorphic rocks and mélange are more susceptible to large, slow-moving landslides than sedimentary and igneous rocks. In addition, regions with rapid land uplift over a geological timescale are more likely to experience landslide activity. Hence, vertical land motion rates may be an effective indicator for forecasting landslide susceptibility around the globe. Precipitation strongly affects the spatial distribution of landslides over the US West Coast. However, over a large spatial scale, a single fixed precipitation threshold may not be applicable across variable regions for forecasting landslide occurrence. Regions with less rainfall may experience more landslide activity, depending partly on other factors such as bedrock formation, land uplift, and tectonic activity. Data and materials availability The ALOS PALSAR data are freely accessible from the Alaska Satellite Facility ( ), and the ALOS-2 PALSAR-2 data are obtainable from the Japan Aerospace Exploration Agency ( ). The 10-m-resolution DEMs covering the US West Coast states are freely available from the US Geological Survey ( ), and the high-resolution true color images are available from Google Earth. Shapefiles of towns and state boundaries of the US West Coast states are downloadable from the US Census Bureau ( ).
The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States is available from the US Geological Survey (USGS 2020a). The PRISM precipitation data are freely available from the PRISM Climate Group ( ). Shapefiles depicting landslides identified during this study are available from the US Geological Survey ScienceBase repository (Xu et al. 2020a). Change history 21 December 2021: A Correction to this paper has been published.
SMU geophysicists have used satellite imagery to identify more than 600 slow-moving landslides occurring near the U.S. West Coast. Fewer than 5% of these landslides in California, Oregon and Washington state had been previously identified. Geophysics professor Zhong Lu and his team at SMU (Southern Methodist University) were awarded nearly $1 million over the past 4 years from the NASA Interdisciplinary Research in Earth Science Program and the NASA Earth Surface and Interior Focus Area to study landslides on the West Coast. Most of the large landslides they found were in the mountain ranges of western Washington, southwestern Oregon and northwestern California. In some cases, the identified landslides were within 0.5 to 5 kilometers of multiple towns and roads. "These landslides are currently moving slowly. But they're already in a state of force imbalance. So some other external forces, like earthquakes or rainfall, could shift them into a disaster," said Yuankun Xu, a postdoctoral researcher who works in Lu's SMU Radar Laboratory and lead author of a study published in the journal Landslides. Co-author Lu, Shuler-Foscue Chair at SMU's Roy M. Huffington Department of Earth Sciences, said, "We don't want to give the impression that these landslides are in trouble tomorrow. No, these landslides have a life expectancy ranging from years to a thousand years." Still, the researchers urged policymakers in these western states to monitor the movement of the now-identified landslides so they can prevent a catastrophe from happening. "I would be very concerned if living, working or commuting upon or near any of the landslides," said study co-author William H. Schulz, a research geologist in the USGS' Landslide Hazards Program. "However, humans can and have successfully dealt with individual landslides and potentially unstable slopes in the past.
Detailed studies performed by professionals involving engineering geologic characterization and modeling are needed for any landslide to accurately estimate and mitigate potential future hazards." Other scientists who helped with this study were Jinwoo Kim, SAR/InSAR Research Scientist at the SMU Radar Laboratory, and Kelli Baxstrom, a research geologist in the USGS Landslide Hazards Program. Landslides kill thousands of people every year worldwide Landslides occur when masses of rock, soil or earth fall down a slope because of gravity. They cause thousands of deaths each year around the world, and in the United States alone, damage from these slides exceeds $2 billion annually. Yet, landslides can be hard to spot before they become a danger, when heavy rainfall suddenly causes the land to shift quickly. Of the 617 landslides detected in western US states, only 29 of them were already included in the national landslide database. These landslides are typically found through human-reported events and geological maps. "The landslides that we previously knew about are ones that people can easily spot from the highway or in city areas," Lu said. "Those are very rapid-moving landslides." Other landslides, however, are harder to identify due to tree cover or because there is no obvious crack in the topography, he explained. Xu, Lu and the rest of the research team used radar satellite images to uncover previously unidentified landslides from space. These images, taken from 2007 to 2011 and 2015 to 2019, came from radar instruments called Phased Array type L-band Synthetic Aperture Radar (PALSAR) mounted on the Japan Aerospace Exploration Agency's Advanced Land Observing Satellites. With this interferometric synthetic aperture radar technology (called InSAR, for short), the satellite images allow scientists to detect changes that aren't visible to the naked eye.
The satellite technology can capture ground motion with sub-inch precision or better, at a spatial resolution of a few yards over thousands of miles, say the researchers. Essentially, any movement of the ground surface toward or away from the satellite can be measured and depicted as a "picture." This picture shows how much the surface has moved or deformed during the time between images. Lu, a leading scientist in InSAR applications, used the same method to reveal in 2018 that sinkholes are expanding and forming in oilfield-dominated West Texas at a startling rate. In this current study, the geophysicist team collected a total of 7,073 images of the western US states from 2007 to 2011 and from 2015 to 2019 to see whether the land had shifted from previous images. The team focused on finding large, slow-moving landslides because these had the most potential to cause significant damage. They found that 70 percent of the landslides they identified moved at a consistent pace, sliding further down a slope from where they had been the year before. These landslides moved at rates of tens of centimeters to a few meters per year on average, Lu said. But Lu noted that climate change could accelerate how quickly these landslides become catastrophic, as "climate change is producing abnormal climate situations." For instance, it's possible that record rainfall, similar to what was seen in Europe and China this year, could make some of the landslides on the West Coast worse. Those landslides ranged in size from the equivalent of 7 to 2,400 football fields. Though InSAR has been highly effective at detecting landslides, Lu said there are likely still more unidentified slow-moving landslides on the U.S. West Coast because extremely dense forests may hinder InSAR's capability to spot them. The InSAR satellite images are also less able to reveal landslide motions that are oriented perpendicular to the radar sensor's "line of sight."
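The arithmetic behind these interferometric "pictures" is compact: phase measures the change in the two-way radar travel path, so one full fringe (2π of phase) corresponds to half a radar wavelength of line-of-sight motion. A minimal sketch of this generic InSAR relation (not code from the study; the ~23.6 cm ALOS PALSAR L-band wavelength is assumed, and the sign convention is ignored):

```python
import numpy as np

# ALOS PALSAR operates in L-band at roughly a 23.6 cm wavelength.
WAVELENGTH_L_BAND = 0.236  # metres

def los_displacement(phase_rad, wavelength=WAVELENGTH_L_BAND):
    """Convert unwrapped interferometric phase (radians) to line-of-sight
    displacement (metres). The factor 4*pi reflects the two-way path:
    one fringe (2*pi) equals half a wavelength of ground motion."""
    return phase_rad * wavelength / (4 * np.pi)

# One full fringe equals half a wavelength of line-of-sight motion.
assert np.isclose(los_displacement(2 * np.pi), WAVELENGTH_L_BAND / 2)
# A quarter-fringe already resolves centimetre-scale creep.
print(round(los_displacement(np.pi / 2) * 100, 2), "cm")  # 2.95 cm
```

This is also why the L-band data favored here trade some deformation sensitivity (longer wavelength, more motion per fringe) for better penetration of vegetation, as the paper notes.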
SMU's high performance computer was critical to this study Xu said SMU's supercomputer was essential to analyzing at high speed the mammoth amount of data inherent in using the InSAR technique. "It's the 'unsung hero,'" Lu said. "Without it, we wouldn't have been able to do this research."
10.1007/s10346-021-01732-3
Physics
Smooth, versatile on-chip light manipulation is now possible with supersymmetry
Jieun Yim et al, Broadband continuous supersymmetric transformation: a new paradigm for transformation optics, eLight (2022). DOI: 10.1186/s43593-022-00023-1
https://dx.doi.org/10.1186/s43593-022-00023-1
https://phys.org/news/2022-09-smooth-versatile-on-chip-supersymmetry.html
Abstract Transformation optics has formulated a versatile framework to mold the flow of light and tailor its spatial characteristics at will. Despite its huge success in bringing scientific fiction (such as invisibility cloaking) into reality, the coordinate transformation often yields extreme material parameters unfeasible even with metamaterials. Here, we demonstrate a new transformation paradigm based upon the invariance of the eigenspectra of the Hamiltonian of a physical system, enabled by supersymmetry. By creating a gradient-index metamaterial to control the local index variation in a family of isospectral optical potentials, we demonstrate broadband continuous supersymmetric transformation in optics, on a silicon chip, to simultaneously transform the transverse spatial characteristics of multiple optical states for arbitrary steering and switching of light flows. Through a novel synergy of symmetry physics and metamaterials, our work provides an adaptable strategy to conveniently tame the flow of light with full exploitation of its spatial degree of freedom. 1 Introduction Our attempts at bending light on demand and arbitrarily transforming its spatial characteristics are rooted in the fundamentals of electromagnetics. The form-invariance of Maxwell’s equations under coordinate transformations led to the formulation of transformation optics [ 1 , 2 ]—the correspondence between the coordinate system and material parameters. Their equivalence allows electromagnetic field in a given coordinate system to be rearranged by designing the medium with the corresponding, spatially dependent dielectric permittivity and magnetic permeability, having consequently opened avenues to a series of intriguing functionality such as invisibility cloaking [ 3 , 4 , 5 , 6 , 7 , 8 ], illusion optics [ 9 ], etc. 
Nevertheless, although the excellent design flexibility provided by metamaterials [10, 11, 12, 13, 14, 15, 16, 17] enables a wide range of inhomogeneous and anisotropic optical properties, the experimental realization of transformation optics, especially in the optical regime, has been in a stalemate for a decade because of the optical extremity and singularity often resulting from the transformation. Additionally, in the original resonant meta-atom-based implementation, transformation optics is confined to narrowband operation [4, 5]. Therefore, new schemes towards transformation optics with broadband parameter values within their achievable limits have become necessary. For example, conformal mapping [18, 19], which spatially varies the local index of refraction, has been demonstrated to perform the coordinate transformation using inhomogeneous Si nanostructures [20, 21], yielding delicate phase-front control for multicolor carpet cloaking. This approach elucidated the possibility of exploiting gradient-index (GRIN) media [22, 23] to warp space, but to achieve richer functionality than bending trajectories, a paradigmatic shift beyond traditional coordinate transformation is required. Another intrinsic principle for formulating the transformation of a physical system is observing its Hamiltonian under transformation. For example, the invariance of the Hamiltonian under a symmetry operation [24, 25, 26] endows us with insights into how a system can be transformed with a conserved quantity. In particular, supersymmetry (SUSY) [27], which originated from the description of the transformation between bosons and fermions [28], features degenerate eigenenergy spectra between two distinct Hamiltonians, which has facilitated advanced control of the spatial characteristics of light [29, 30, 31].
For example, designed by unbroken SUSY under which the unpaired ground state exists between the original and the superpartner Hamiltonians, strategic coupling between the original optical system and its dissipative superpartner has triggered intriguing applications such as high-radiance single-mode microlaser arrays [ 32 , 33 ] and mode division multiplexing [ 34 ]. These previous experimental studies are based on lattice Hamiltonians, which can be factorized via matrix operation, and hence they constructed systems composed of a number of coupled discrete elements corresponding to coupled waveguides or resonators. In contrast, the extended method of SUSY that can generate an infinite number of strictly isospectral potentials has remained experimentally unexplored, since it requires an intrinsically different approach to realize arbitrary potentials, while its mathematical framework turns out to be ideal for the continuous Hamiltonian transformation to enable a distinct scenario for transformation optics [ 35 , 36 ] other than the traditional coordinate transformation. Here, we report the first experimental demonstration of continuous SUSY transformation by designing a novel GRIN metamaterial on a Si platform. We utilize the synergy of supersymmetry and the metamaterial to design spatially varying dielectric permittivity, which constitutes a two-dimensional map where arbitrary transformations are prescribed simultaneously to multiple optical states for routing, switching, and spatial mode shaping, while strictly maintaining their original propagation constants. Our result features broadband continuous SUSY transformation in optics, illuminating a novel path to fully utilizing the spatial degrees of freedom on a chip for versatile photonic functionalities. 
2 Results and discussion Owing to the mathematical correspondence between the Schrödinger equation and the Helmholtz equation, we formulate the SUSY transformation by describing the potential of the Hamiltonian using the inhomogeneously distributed refractive index \(n(x)\) in the transverse dimension of an optical system. Likewise, the eigenvalue spectrum of the Hamiltonian is represented by the spectrum of the propagation constants calculated from \(n(x)\) (see Methods). For an original optical potential \(n_0(x)\), the SUSY transformation [28, 37, 38] leads to a family of isospectral optical potentials \(n_f(x)\) with a free parameter \(\alpha_i\):
\[
n_f^2(x;\alpha_i) = n_0^2(x) + \frac{2}{k_0^2}\,\frac{d}{dx}\!\left(\frac{1}{I_m(x)}\frac{dI_m(x)}{dx}\right) \qquad (1)
\]
where \(I_m(x) = \int_{-\infty}^{x} \psi_m^2(x')\,dx' + \alpha_i\), and \(\psi_m(x)\) is the \(m\)th eigenstate of \(n_0(x)\). By this means, one can conveniently delete the eigenstate \(\psi_m\) and reinstate a new \(\psi_m\) with the same propagation constant but different spatial characteristics of light, depending on the parameter \(\alpha_i\).
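As a rough numerical illustration of Eq. 1 (not the authors' design code), the isospectral family can be built by cumulatively integrating a bound state and differentiating the result. In the sketch below, \(\psi_m\) is a hypothetical normalized Gaussian stand-in rather than an eigenstate actually computed from \(n_0(x)\), and the paper's Gaussian-index profile is recentred at \(x=0\) for simplicity; the point is only to show the mechanics and the limit \(n_f \to n_0\) as \(|\alpha_i| \to \infty\).

```python
import numpy as np

def susy_transform(x, n0, psi_m, alpha, wavelength=1.55):
    """Isospectral family of Eq. 1: n_f(x; alpha), valid for alpha < -1 or
    alpha > 0. x and wavelength are in micrometres; psi_m must be
    normalized so that the integral of psi_m**2 over x equals 1."""
    k0 = 2 * np.pi / wavelength
    # I_m(x) = cumulative (trapezoid-rule) integral of psi_m^2, plus alpha
    I_m = np.concatenate(([0.0], np.cumsum(
        0.5 * (psi_m[1:]**2 + psi_m[:-1]**2) * np.diff(x)))) + alpha
    # n_f^2 = n_0^2 + (2 / k0^2) * d/dx ( I_m' / I_m )
    correction = (2.0 / k0**2) * np.gradient(np.gradient(I_m, x) / I_m, x)
    return np.sqrt(n0**2 + correction)

x = np.linspace(-10.0, 10.0, 2001)                          # transverse coordinate (um)
n0 = np.sqrt(2.31 + 1.4 * np.exp(-(x / (0.8 * 1.55))**2))   # Gaussian-index guide
psi = np.exp(-x**2 / 2) / np.pi**0.25                       # hypothetical normalized "mode"

nf_far = susy_transform(x, n0, psi, alpha=1e6)   # |alpha| -> infinity limit
nf_near = susy_transform(x, n0, psi, alpha=0.5)  # finite alpha reshapes the index

print(np.max(np.abs(nf_far - n0)) < 1e-4)        # True: n_f -> n_0
print(np.max(np.abs(nf_near - n0)) > 1e-3)       # True: the profile is reshaped
```

By construction every member of this family shares the guided spectrum of \(n_0(x)\); in the paper, smoothly varying \(\alpha_i\) with \(z\) is precisely what routes and reshapes the modes without disturbing their propagation constants.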
Hence, this mathematical operation can be understood as a reshaping process of \(n_0(x)\), enabling the isospectral transformation of light by continuously varying the parameter \(\alpha_i\) (and thus \(n_f(x;\alpha_i)\)) in the \(z\) direction (Fig. 1a). Since the choice of \(\psi_m\) is associated with \(\alpha_i\), Eq. 1 can be iterated \(n\) times with different \(\psi_m\) and \(\alpha_i\) to generate an \(n\)-parameter (i.e., \(\{\alpha_1, \alpha_2, \dots, \alpha_n\}\)) family of isospectral potentials, which offers sufficient design flexibility appropriate for transformation optics. Note that for a normalizable \(\psi_m\), \(n_f(x;\alpha_i)\) is guaranteed to be non-singular as long as \(\alpha_i < -1\) or \(\alpha_i > 0\). Fig. 1 Continuous SUSY transformation optics and its demonstration on a Si chip. a Two-dimensional refractive index distribution \(n(x,z)\) designed by SUSY transformation with continuously varying parameter \(\alpha_i\) in Eq. 1. \(n(x)\) is shown at five different \(z\) coordinates, \(z = 0, 0.25L, 0.5L, 0.75L\), and \(L\), where \(L = 150\) µm, and the index profile is discretized in the \(x\) direction for the design implementation. b GRIN metamaterial designed to substantiate \(n(x,z)\), where different optical states (blue, red, green) with different propagation constants can propagate with spatial characteristics and directions dictated by the SUSY transformation.
c Schematic of the on-chip subwavelength Si metamaterial for local index tuning, with physical parameters \(h = 300\) nm and \(p = 225\) nm, and its numerically simulated relationship between the effective refractive index (\(n_{eff}\)) and the size of the air gap (\(g\)) at the wavelength of 1550 nm (Supplementary Additional file 1: Fig. S2). d Scanning electron microscope images of the fabricated Si GRIN metamaterial and its prescribed gap distribution at three different \(z\) coordinates, \(z = 0.25L, 0.5L\), and \(0.75L\), corresponding to \(n(x,z)\) in a. Here, based on a Si waveguide system where the eigenstates bound in the transverse plane propagate along the \(z\) direction, we design a full 2D map of the spatially dependent refractive index \(n(x,z)\) (Fig. 1a) in which the trajectories and the transverse mode profiles of three guided eigenstates are controlled, by substituting \(n_0^2(x) = a + b\,e^{-\left(\frac{x-d}{c\lambda}\right)^2}\) with \(a = 2.31\), \(b = 1.4\), \(c = 0.8\), \(d = 6.1875\ \mu\mathrm{m}\), and the wavelength of light \(\lambda = 1550\) nm into Eq. 1. Since we consider three eigenstates well guided in the given \(n_0(x)\), the SUSY transformation is applied to \(n_0(x)\) at each \(z\) coordinate twice: first with \(\psi_m\) and parameter \(\alpha_1\), and second with \(\psi_n\) and \(\alpha_2\), where \(m\) and \(n\) denote the order numbers of the selected eigenstates (see Methods). After such a two-parameter transformation, \(n_0(x)\) is transformed to \(n_f(x)\) with respect to \(\psi_m\) and \(\psi_n\) as well as \(\alpha_1\) and \(\alpha_2\).
In practice, the index distribution at \(z = 0.5L\) is set to be \(n_0(x)\) (i.e., \(n_f(x) \to n_0(x)\) as \(|\alpha_i| \to \infty\)), where all three eigenstates are guided at the center. From \(z = 0.5L\) towards either \(z = 0\) or \(z = L\), \(|\alpha_i|\) reduces, and \(n_f(x)\) consequently features high-index regions newly emerging at both sides of the remaining high-index region in the center, such that two optical states are separately guided away from the center (Fig. 1b). In this scenario, a series of \(n_f(x)\) with continuously changing \(\alpha_1(z)\) and \(\alpha_2(z)\) are connected along the propagation direction \(z\) to form the 2D map of \(n(x,z)\) (Additional file 1: Fig. S1). If the transformation of \(\alpha_i(z)\) is adiabatic, the guided eigenstates can propagate through the variations of \(n_f(x;\alpha_i)\) in a lossless manner because of their identical spectra of propagation constants guaranteed by SUSY. Therefore, by tagging an optical state with its propagation constant, the continuous SUSY transformation facilitates arbitrary routing and switching of the optical state, even allowing it to cross the lightpaths of other states without any perturbation to their original properties, regardless of the number of optical states intersecting one another. In order to realize the spatially inhomogeneous index map of \(n(x,z)\), we devise a GRIN metamaterial made of deep-subwavelength Si nanostructures with a period of 225 nm in the \(x\) direction (Figs. 1a and b).
The transverse index profile \(n(x)\) is discretized with the same period into local indices, and the corresponding GRIN metamaterial is tailored by adjusting the filling ratio of Si (i.e., the size of the air gap \(g\)) that is related to the local effective index \(n_{eff}\) for the fixed height \(h\) and period \(p\) (Fig. 1c). Note that \(n(x,z)\) in the design implementation represents the local \(n_{eff}(x)\) of quasi-TM modes of the Si GRIN metamaterial in the \(xy\) plane at a given \(z\); nonetheless, the SUSY transformation remains valid upon replacing \(n(x)\) with \(n_{eff}(x)\) (see Methods). With physical parameters optimized to satisfy the required maximum index contrast (i.e., \(n_{max} - n_{min}\)), the metamaterial, which features air gaps locally tailored in the \(xz\) plane to emulate a GRIN medium undergoing the continuous SUSY transformation, is fabricated on a Si-on-insulator platform with a cladding layer of air (Fig. 1d and see Methods). In this case, three distinct quasi-TM modes are guided in the on-chip GRIN metamaterial, subject to the continuous SUSY transformation during propagation. Although the exact isospectral transformation only occurs at the wavelength of 1550 nm, approximately the same performance of the SUSY transformation optics is anticipated over a broad range of input wavelengths, as estimated by calculating the effective mode indices at different wavelengths for the refractive index potential transformed for the wavelength of 1550 nm (Additional file 1: Fig. S3). In the wavelength range of interest, from 1460 to 1570 nm, less than 0.6% variation of the modal indices is expected between \(z = 0\) and \(z = 0.5L\) in our design of \(n(x,z)\). Such quasi-flat dispersion originates from the small variation of the potentials after the SUSY transformation when the wavelength used for the transformation is varied within the range of 110 nm.
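The paper obtains the gap-to-index mapping \(n_{eff}(g)\) from full numerical simulation (Fig. 1c). As a hedged back-of-the-envelope companion, a zeroth-order effective-medium (Rytov-type) estimate for a subwavelength Si/air lamellar grating reproduces the qualitative trend; the value \(n_{Si} \approx 3.48\) at 1550 nm and the polarization choice below are assumptions of this sketch, not parameters from the study.

```python
import numpy as np

# Zeroth-order effective-medium estimate for a subwavelength Si/air
# lamellar grating of period p with air gap g. The paper derives
# n_eff(g) by full numerical simulation; this closed form only
# illustrates the monotonic trend. n_Si ~ 3.48 at 1550 nm is assumed.
def n_eff_parallel(g, p=225e-9, n_si=3.48, n_air=1.0):
    """E-field parallel to the lamellae: eps_eff = f*eps_si + (1-f)*eps_air,
    where f is the Si filling ratio set by the air gap g."""
    f = (p - g) / p
    return np.sqrt(f * n_si**2 + (1 - f) * n_air**2)

gaps = np.linspace(0, 150e-9, 7)
indices = n_eff_parallel(gaps)
# Larger air gaps dilute the Si and lower the local effective index,
# which is how the transverse GRIN profile n(x) is written into the chip.
assert np.all(np.diff(indices) < 0)
assert abs(n_eff_parallel(0.0) - 3.48) < 1e-9  # no gap -> bulk Si index
```

In the actual device the guided quasi-TM mode also feels the finite height \(h\) and the cladding, which is why the authors calibrate \(n_{eff}(g)\) numerically rather than with such a closed form.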
Intrinsic material dispersion and waveguide dispersion can be effectively minimized with the right material choice and optimized physical parameters of the metamaterial. The virtue of SUSY transformation in optics is the controllability of light with increased spatial degrees of freedom in association with the eigenstates (i.e., optical modes), which allows them to be arbitrarily routed, switched, and spatially evolved while preserving the propagation constants secured by SUSY. In this regard, a complicated spatial distribution of light linked to a specific eigenstate, which is difficult in reality to excite directly using an external light source, can be achieved by first exciting an eigenstate with a fundamental-mode profile and then transforming its spatial profile in a metamaterial designed by SUSY transformation. For example, all the eigenstates displayed at \(z = 0\) (Figs. 2a and c) are fundamental modes, but they are transformed along the \(z\) direction to possess the spatial characteristics of high-order modes at \(z = 0.5L\), while their propagation constants remain identical to those of the corresponding fundamental modes at \(z = 0\), respectively. At \(z = L\), the eigenstates revert back to fundamental modes, but they are directed toward different \(x\) coordinates, indicating that their trajectories can also be arbitrarily maneuvered. Such continuous transformation is validated by numerical simulations in which light propagates through the GRIN medium, realized by varying the gap distribution along the \(z\) direction (Figs. 2b and d). The simulated intensity map explicitly reveals that the SUSY transformation is a powerful toolbox for conveniently steering light and transforming its transverse spatial profile. One thing to note is that, depending on \(n(x)|_{z=0.5L}\), more complex spatial characteristics of light are attainable in this manner. Additionally, the asymmetric (Figs. 2a and b) and symmetric (Figs. 2c and d) designs exemplify the versatile reconfigurability of the SUSY transformation, enabling spatial switching of the different optical states while keeping individual propagation constants intact by SUSY. Therefore, even though the two designs differ in the region from \(z = 0.5L\) to \(z = L\), since different combinations of \(\psi_m\) and \(\psi_n\) are applied in the two SUSY transformations, the invariance of the eigenspectra of the Hamiltonian always holds: the effective indices of the three different eigenstates, which are denoted by the offsets of the eigenstates (Figs. 2a and c), remain the same throughout the entire \(z\) dimension. Furthermore, since our system is linear and the eigenstates are orthogonal, if all three states are excited at \(z = 0\), each of them will be separately and simultaneously steered to its designated \(x\) coordinate at \(z = L\), respecting the isospectrality. Fig. 2 Numerical simulation for asymmetric and symmetric configurations of SUSY transformation optics. a, c The optical potentials and the corresponding mode profiles of three eigenstates (blue, red, green) at five different \(z\) coordinates, \(z = 0, 0.25L, 0.5L, 0.75L\), and \(L\), for the asymmetric a and symmetric c designs. The offsets of the mode profiles indicate their effective mode indices from high, middle, to low: \(n_{blue} = 1.868\), \(n_{red} = 1.755\), and \(n_{green} = 1.656\). b, d Simulated electric field intensity in the GRIN metamaterials with prescribed gap distributions for the asymmetric b and symmetric d designs. From top, center, to bottom, different optical states are excited at \(z = 0\), as denoted by the colors of the arrows, corresponding to the blue, red, and green eigenstates, respectively, in a and c.
It is evident that fundamental-mode excitations from different input ports transform to the fundamental, first-order, and second-order modes from $z=0$ to $z=0.5L$, consistent with the design in a and c. Broadband light steering and spatial switching through SUSY transformations were experimentally validated by characterizing the normalized transmission spectra of the two Si GRIN metamaterials corresponding to the asymmetric and symmetric designs illustrated in Fig. 2, respectively. The transmission spectra were measured at three outputs at $z=L$ (O1, O2, O3) for three different inputs at $z=0$ (I1, I2, I3) using a tunable laser with a wavelength range from 1460 to 1570 nm (Figs. 3a and c). In the experiments, three single-mode waveguides were connected to the GRIN metamaterial at $z=0$ to launch the fundamental-mode excitations from the inputs; similarly, three single-mode waveguides were attached at $z=L$ to direct the output signals for detection. The propagation constants of the connected single-mode waveguides were designed to closely match those of the three eigenstates steered in the GRIN metamaterial, thereby minimizing mode mismatch and maximizing the coupling efficiencies. As a result, the characterized spectra present near-unity broadband transmission from inputs to their corresponding outputs over a wavelength range of 110 nm, in both asymmetric and symmetric designs. Specifically, the transmission matrix can be retrieved, where each element $M_{ji}$ denotes the normalized transmitted power from input $i$ to output $j$ (Figs. 3b and d), showing a high average transmission of 0.94 with negligible crosstalk between neighboring channels at the wavelength of 1550 nm in both asymmetric and symmetric designs. Fig. 3 Experimental characterization of broadband on-chip continuous SUSY transformation. a, c Normalized transmittance spectra of the asymmetric (a) and symmetric (c) Si GRIN metamaterials. 
Top, middle, and bottom panels denote the three single-mode inputs from top to bottom: I1, I2, and I3, respectively. The transmission spectra are measured at three outputs: O1, O2, and O3. Note that in both asymmetric (a) and symmetric (c) cases, the three output intensities are normalized to their sum. The colors of the line plots and of the arrows in the insets (blue, red, green) coincide with the colors of the eigenstates in Fig. 2, according to their corresponding optical modes. b, d Measured transmission ($M$) matrices, where $M_{ji}$ stands for the power transmission from input $i$ to output $j$ at the wavelength of 1550 nm, retrieved from a and c, respectively. Broadband transformation of the spatial characteristics of light enabled by the SUSY transformation was verified by imaging the intensity profile of guided light at the midpoint along $z$ of the GRIN metamaterial ($z=0.5L$) over the entire range of the tunable laser (Fig. 4a). Light coupled into the metamaterial as a fundamental mode with a specific propagation constant is spatially transformed into the optical state with an identical propagation constant but a different field profile. In this scenario, excitations at the three different input ports take different transformation routes, such that their spatial characteristics evolve differently (Figs. 4b–d) while their propagation constants remain unchanged during the evolution by SUSY. All three spatial modes are eigenstates of the GRIN metamaterial at $z=0.5L$, so they are simultaneously supported without crosstalk because of their inherent orthogonality. The clear nodes observed in the images agree well with the calculated modal field profiles at $z=0.5L$ shown in Fig. 2, definitively confirming the efficacy of the isospectral SUSY transformation despite the inevitable perturbation caused by imperfect fabrication, which leads to slight asymmetry of the mode profiles. 
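The reduction of the measured output spectra to the transmission matrix $M$ and its figures of merit can be illustrated with a short numerical sketch. The matrix values below are hypothetical placeholders, not the measured data; the sketch only shows the per-input normalization described in the caption and the extraction of average diagonal transmission and worst-case crosstalk.

```python
import numpy as np

# Hypothetical raw detected powers P[j, i]: output j for input i
# (arbitrary units; NOT the measured data from the paper).
P = np.array([
    [0.95, 0.02, 0.01],
    [0.03, 0.93, 0.02],
    [0.02, 0.05, 0.97],
])

# Normalize each input's three output powers to their sum, as in Fig. 3a, c.
M = P / P.sum(axis=0, keepdims=True)

avg_transmission = np.mean(np.diag(M))            # analogue of the reported 0.94
worst_crosstalk = np.max(M - np.diag(np.diag(M)))  # largest off-diagonal element

print(f"average diagonal transmission: {avg_transmission:.3f}")
print(f"worst off-diagonal crosstalk:  {worst_crosstalk:.3f}")
```

With real data each column of `P` would be a set of simultaneously detected output powers at one wavelength, so the same two lines recover the matrices of Figs. 3b and d.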
Moreover, the far-field diffraction patterns of the optical states confirm that the lobes present in each state carry the same relative phase information as theoretically predicted (Additional file 1: Fig. S5). Fig. 4 Experimental characterization of the transverse spatial characteristics of light achieved by broadband SUSY transformation. a Schematic of the imaging system used to characterize the transverse field intensity at the midpoint along $z$ in the GRIN metamaterial (i.e., $z=0.5L$). A $4f$ imaging configuration is used for appropriate magnification. b, c, d Measured intensity profiles in the $xy$ plane when light was launched at different inputs, from top to bottom: I1, I2, and I3, over a broad wavelength range from 1460 nm (left column) and 1515 nm (middle column) to 1570 nm (right column). The results experimentally confirm that fundamental-mode excitations from different input ports (I1, I2, and I3) transform to the fundamental, first-order, and second-order modes as they propagate from $z=0$ to $z=0.5L$, respectively, consistent with the theoretical design in Fig. 2. 3 Conclusion Our work opens up possibilities for bringing the symmetry of the Hamiltonian into the realm of optics by constructing a metamaterial that can emulate arbitrary potentials to achieve advanced control of light through transforming the optical media. In particular, the interplay of supersymmetry and a metamaterial demonstrated in this study can contribute to increasing the spatial degrees of freedom in integrated photonics by facilitating arbitrary light steering, switching, and sophisticated spatial mode shaping of light without introducing any perturbation to the propagation constants of the eigenstates. 
Our continuous SUSY transformation approach is scalable to a higher number of eigenstates and free parameters, and is also applicable to more complicated index distributions, thereby creating an ideal platform for on-chip space-division multiplexing in information technologies. Additionally, further extending the SUSY transformation into higher dimensions [ 33 ] may provide a design strategy to exploit the full potential of metamaterials in three-dimensional space. 4 Methods 4.1 Isospectral supersymmetric transformation [ 28 , 39 ] For a 1D system, the Schrödinger equation can be written as

$$\left(-\frac{d^2}{dx^2}+V_-(x)\right)\psi_m(x)=E_m\,\psi_m(x) \qquad (2)$$

where $\psi_m$ is the $m$-th eigenfunction with eigenenergy $E_m$ for the given potential $V_-$. The Hamiltonian $H=-\frac{d^2}{dx^2}+V_-(x)$ can be factorized as $H=A^{\dagger}A$, where $A=\frac{d}{dx}+W$ and $A^{\dagger}=-\frac{d}{dx}+W$. Here, $W$ is the superpotential defined as $W=-\frac{\psi_0'}{\psi_0}$, and $\psi_0$ is the ground-state eigenfunction. Consequently, the original potential ($V_-$) and the supersymmetric potential ($V_+$) are

$$V_{\pm}=W^2\pm W'. \qquad (3)$$

This is the case of first-order unbroken SUSY, under which the original Hamiltonian ($H=A^{\dagger}A$) and the supersymmetric Hamiltonian ($H^{\dagger}=AA^{\dagger}$) share identical eigenvalue spectra except for the ground state. 
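The factorization above can be checked numerically. The sketch below is an illustrative finite-difference example, not the paper's optical potential: for the harmonic-oscillator-like potential $V=x^2$ (in units where $-\psi''+V\psi=E\psi$), the ground state is $\psi_0\propto e^{-x^2/2}$, so $W=-\psi_0'/\psi_0=x$ and $V_\pm=W^2\pm W'=x^2\pm 1$. Diagonalizing both partners confirms that their spectra coincide except for the ground state of $V_-$.

```python
import numpy as np

# Finite-difference check of SUSY partner isospectrality.
# Units: -psi'' + V psi = E psi, Dirichlet boundaries at x = ±L.
N, L = 1201, 6.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

def spectrum(V, k=6):
    """Lowest k eigenvalues of -d^2/dx^2 + V on the grid."""
    H = (np.diag(2.0 / dx**2 + V)
         + np.diag(np.full(N - 1, -1.0 / dx**2), 1)
         + np.diag(np.full(N - 1, -1.0 / dx**2), -1))
    return np.sort(np.linalg.eigvalsh(H))[:k]

W = x                             # superpotential for psi0 = exp(-x^2/2)
E_minus = spectrum(W**2 - 1.0)    # V- = x^2 - 1: analytically 0, 2, 4, ...
E_plus = spectrum(W**2 + 1.0)     # V+ = x^2 + 1: analytically 2, 4, 6, ...

# Unbroken SUSY: spectra coincide except for the ground state of V-.
print(np.round(E_minus, 3))
print(np.round(E_plus, 3))
```

The same check works for any potential whose ground state is known numerically; only the construction of `W` changes.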
To extend the SUSY transformation beyond the unbroken case, we seek a generalized superpotential $\widehat{W}$ which satisfies

$$V_+=\widehat{W}^2+\widehat{W}'=W^2+W' \qquad (4)$$

Solving this differential equation for $\widehat{W}$ gives

$$\widehat{W}(x;\alpha_i)=W(x)+\frac{d}{dx}\ln\!\big(I_m(x)\big) \qquad (5)$$

where $I_m(x)=\int_{-\infty}^{x}\psi_m^2(x')\,dx'+\alpha_i$, and $\alpha_i$ is a free parameter that can be continuously varied. Using this generalized superpotential, we define $\widehat{V}_f=\widehat{W}^2-\widehat{W}'$, and by combining Eqs. 3, 4, and 5, we obtain

$$\widehat{V}_f=V_- -2\frac{d}{dx}\left(\frac{1}{I_m(x)}\frac{dI_m(x)}{dx}\right) \qquad (6)$$

where $\widehat{V}_f$ represents a family of isospectral potentials; the original potential $V_-$ belongs to this isospectral family in the limit $\alpha_i\to\infty$. In our work, we apply the effective index method to realize a 1D map of the transverse refractive index $n(x)$ by extracting the indices of the quasi-TM mode in 2D geometries in the $xy$ plane. 
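The one-parameter family of Eq. 6 can be generated and verified numerically. The sketch below (an illustrative example, again using $V_-=x^2-1$ with its analytically known ground state rather than the paper's optical potential) builds $I_m(x)$ for several values of $\alpha_i>0$, forms $\widehat{V}_f$, and checks that its spectrum matches that of $V_-$ for every $\alpha_i$.

```python
import numpy as np

# Numerical sketch of the isospectral family of Eq. 6.
N, L = 1201, 6.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

psi0 = np.exp(-x**2 / 2)
psi0 /= np.sqrt(np.sum(psi0**2) * dx)   # normalized ground state of V-
V_minus = x**2 - 1.0                    # eigenvalues 0, 2, 4, ...

def spectrum(V, k=5):
    H = (np.diag(2.0 / dx**2 + V)
         + np.diag(np.full(N - 1, -1.0 / dx**2), 1)
         + np.diag(np.full(N - 1, -1.0 / dx**2), -1))
    return np.sort(np.linalg.eigvalsh(H))[:k]

E_ref = spectrum(V_minus)
for alpha in (0.5, 2.0, 10.0):            # alpha > 0 avoids singularities
    I = np.cumsum(psi0**2) * dx + alpha   # I_m(x) of Eq. 5
    V_hat = V_minus - 2.0 * np.gradient(psi0**2 / I, dx)   # Eq. 6
    print(alpha, np.max(np.abs(spectrum(V_hat) - E_ref)))
```

As $\alpha$ grows, `V_hat` approaches `V_minus`, while small positive $\alpha$ produces a visibly deformed yet strictly isospectral potential, which is exactly the design knob varied along $z$ in the metamaterial.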
The electric field of the quasi-TM mode propagating along the $z$ direction in a dielectric waveguide is described as $\vec{E}=(\vec{E}_y+\vec{E}_z)e^{ik_z z}$, where the dominant $y$ component can be written with separable variables:

$$\vec{E}_y=\hat{y}\,\psi(x)\,\Phi(y;x)\,e^{ik_z z} \qquad (7)$$

Here, $\psi(x)$ and $\Phi(y;x)$ are the two amplitudes associated with the field confinement in the $x$ and $y$ directions, and $k_z$ is real and thus equal to the propagation constant. We assume that $\Phi(y;x)$ is a slowly varying function along $x$ ($\frac{\partial\Phi}{\partial x}\simeq 0$). By solving the Helmholtz equation, we deduce:

$$\frac{d^2\Phi(y;x)}{dy^2}+k_0^2\,n^2(x,y)\,\Phi(y;x)=k_0^2\,n_{eff}^2(x)\,\Phi(y;x) \qquad (8)$$

$$\frac{d^2\psi(x)}{dx^2}+k_0^2\,n_{eff}^2(x)\,\psi(x)=k_z^2\,\psi(x) \qquad (9)$$

where $k_0=\frac{2\pi}{\lambda_0}$ with the free-space wavelength $\lambda_0$. Therefore, once we have the distribution of the local effective index $n_{eff}(x)$ from Eq. 8, we can solve Eq. 9 to obtain the mode index $n_{mode}=\frac{k_z}{k_0}$ of the entire 2D geometry. 
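Eq. 9 is a standard 1D eigenproblem once $n_{eff}(x)$ is known, and solving it is a one-liner with finite differences. The sketch below uses a hypothetical Gaussian-graded effective-index profile (the numbers are illustrative, not the fabricated GRIN design) and returns the mode indices $n_{mode}=k_z/k_0$; guided modes are those exceeding the cladding index.

```python
import numpy as np

# Finite-difference solver for Eq. 9: psi'' + k0^2 n_eff^2 psi = kz^2 psi.
lam0 = 1.55e-6                       # free-space wavelength (m)
k0 = 2 * np.pi / lam0
N = 1001
x = np.linspace(-5e-6, 5e-6, N)
dx = x[1] - x[0]

# Hypothetical graded effective-index profile n_eff(x) (illustrative only).
n_clad = 1.444
n_eff = n_clad + 0.4 * np.exp(-(x / 1.0e-6)**2)

# Helmholtz operator; eigenvalues are kz^2, largest = most strongly bound.
H = (np.diag(-2.0 / dx**2 + k0**2 * n_eff**2)
     + np.diag(np.full(N - 1, 1.0 / dx**2), 1)
     + np.diag(np.full(N - 1, 1.0 / dx**2), -1))
kz2 = np.sort(np.linalg.eigvalsh(H))[::-1]

n_mode = np.sqrt(kz2[:3]) / k0       # mode indices of the first few modes
guided = n_mode[n_mode > n_clad]     # guided modes lie above the cladding index
print(guided)
```

Repeating this for each transverse slice gives the local $n_{eff}(x)$ map and mode indices used throughout the design, which is why the SUSY transformation can be carried out entirely on the 1D problem of Eq. 9.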
More importantly, a comparison between Eq. 2 and Eq. 9 clearly exhibits the mathematical correspondence: $V_-(x)$ in quantum mechanics can be described by $k_0^2\,n_{eff}^2(x)$, and the eigenenergy $E_m$ can be replaced by $k_z^2$. Therefore, we can bring the mathematical framework of SUSY into the realm of optics, which leads to Eq. 1 in the main text. From there, we can calculate numerous potentials $n_{eff}^2(x)$ that yield the same eigenvalue spectra of $k_z^2$, with respect to the continuously varying free parameter $\alpha_i$. 4.2 Generation of the refractive index distribution by SUSY transformation The free parameter $\alpha_i$ is an integration constant that arises when the generalized superpotential $\widehat{W}$ is obtained from the differential equation Eq. 4. As a result, a different value of $\alpha_i$ generates a different superpotential $\widehat{W}$, which eventually leads to a different optical potential, while the isospectrality still remains, following the mathematical framework of supersymmetry. In this regard, owing to the mathematical relation between $n_f(x;\alpha_i)$ and $\alpha_i$ in Eq. 1, $n_f(x;\alpha_i)$ is nearly the same as $n_0(x)$ when $|\alpha_i|$ is large. In contrast, as the value of $\alpha_i$ approaches $-1$ from $-\infty$, or $0$ from $\infty$, $n_f(x;\alpha_i)$ becomes more distinguishable from $n_0(x)$. 
This transformation occurs such that the eigenstate $\psi_m(x)$ selected for the transformation moves farther away from the original potential. In the limit $\alpha_i\to -1^-$ or $\alpha_i\to 0^+$, a singularity of the transformed potential and of $\psi_m(x)$ occurs at an infinite value of $x$ (i.e., $x=-\infty$ or $\infty$), and hence $\psi_m(x)$ is no longer square integrable. In this case, the transformation forces the bound state $\psi_m(x)$ to vanish from $n_0(x)$ while the other bound states remain bound, respecting the isospectrality [ 40 ]. Therefore, the values of $\alpha_1$ and $\alpha_2$ in our study dictate the degree of transformation of $n_0(x)$ in relation to the two eigenstates $\psi_m$ and $\psi_n$, with the condition $\alpha_i<-1$ or $\alpha_i>0$ imposed to avoid any singularity in $n(x)$. In our design process for the index distribution, the SUSY transformation is applied at each $z$ coordinate to the original index distribution $n_0(x)$ (as defined in the main text) twice, by substituting into Eq. 1 the values of $\alpha_1$ and $\alpha_2$, respectively, which are continuously varied along the $z$ direction (Additional file 1: Fig. S1a and S1b). $\alpha_1$ and $\alpha_2$ must be individually controlled to ensure an adiabatic change of the refractive index distribution $n(x)$ along the $z$ direction. 
At $z=0.5L$, $n(x)$ is close to $n_0(x)$ owing to the large values of $|\alpha_1|$ and $|\alpha_2|$. From $z=0.5L$ to $z=0$, $|\alpha_1|$ and $|\alpha_2|$ are gradually reduced to adiabatically change $n(x)$ in association with $\psi_3$ and $\psi_1$, respectively. Here, $\alpha_1$ is decreased to near $0$ and $\alpha_2$ is increased to near $-1$; the signs of $\alpha_1$ and $\alpha_2$ are opposite because their signs are related to the $+x$ or $-x$ direction of the spatial change of $n(x)$. Similarly, from $z=0.5L$ to $z=L$, $\alpha_1$ is decreased to near $0$ and $\alpha_2$ is increased to near $-1$, with $\psi_2$ and $\psi_3$ being used in the transformation; their variations differ from those in the region from $z=0$ to $0.5L$ because of the different choices of eigenstates. The resulting asymmetric $n(x,z)$ (Additional file 1: Fig. S1c) features a smooth transformation of the refractive index along the $z$ direction, in which the three eigenstates are accordingly altered during propagation with unchanged mode indices, as presented in the asymmetric design illustrated in Figs. 2 and 3 of the main text. 4.3 Sample fabrication The sample was fabricated by electron beam lithography and reactive ion etching (Additional file 1: Fig. S4). The first step was to spin-coat the Si-on-insulator wafer with 6% hydrogen silsesquioxane (HSQ) negative resist, a 2:3 mixture of FOX15 and methyl isobutyl ketone (MIBK). 
After electron beam patterning, the resist was developed by immersion in MF CD-26 at 60 °C for 40 s and then rinsed in DI water for 30 s. The resist in the exposed areas remained on the sample and acted as a hard mask for reactive ion etching. Subsequently, the Si layer was etched via reactive ion etching in SF 6 /C 4 F 8 plasma. Finally, the sample was cleaved after coating a protective layer of PMMA to create a sharp facet for the output waveguides; the PMMA layer was fully removed after cleaving. The Si GRIN metamaterial used for the characterization of the transverse mode profile at $z=0.5L$ (as shown in Fig. 4) shares the same design as Fig. 1b from $z=0$ up to $z=0.5L$ but has an extended region beyond $z=0.5L$, where the gap distribution no longer changes along the $z$ direction. This strategy guarantees that the light intensity profile can be observed effectively at $z=0.5L$ after cleaving the chip at $z>0.5L$. Availability of data and materials Supporting data and materials are given in the supplementary information. Additional data are available from the corresponding author upon reasonable request.
Transformation optics has formulated a versatile framework to mold the flow of light and tailor its spatial characteristics at will. However, the coordinate transformation often yields extreme material parameters unfeasible even with metamaterials. In a new paper published in eLight, a team of scientists led by Professor Liang Feng from the University of Pennsylvania has developed a new chip that can transform different optical states to switch light flows. Their paper, titled "Broadband continuous supersymmetric transformation: a new paradigm for transformation optics," seeks to provide an adaptable strategy to tame the flow of light. Attempts at bending light on demand and arbitrarily transforming its spatial characteristics are rooted in the fundamentals of electromagnetics. The form-invariance of Maxwell's equations under coordinate transformations led to the formulation of transformation optics. This equivalence allows for the rearrangement of electromagnetic fields in a given coordinate system and has opened avenues to a series of intriguing functionalities such as invisibility cloaking and illusion optics. Although metamaterials have excellent design flexibility and enable a wide range of optical properties, the experimental realization of transformation optics has been at a stalemate for a decade because of the optical extremity and singularity often resulting from the transformation. Therefore, new schemes for transformation optics with broadband parameter values within achievable limits are essential. For example, conformal mapping with a spatially varying local index of refraction has been demonstrated. This technique can perform the coordinate transformation using inhomogeneous Si nanostructures and can yield delicate phase-front control for multicolor carpet cloaking. This approach elucidated the possibility of exploiting gradient-index (GRIN) media to warp space. 
However, a paradigmatic shift beyond traditional coordinate transformation is further required to achieve richer functionality other than bending the trajectories. Here, the research team takes a different approach from conventional transformation optics: observing the Hamiltonian of the system under transformation. The invariance of the Hamiltonian under symmetry operation endows us with insights into how a system can be transformed with a conserved quantity. In particular, Supersymmetry (SUSY) features the degenerate eigenenergy spectra between two distinct Hamiltonians, which has facilitated advanced control of the spatial characteristics of light. Strategic coupling between the original optical system and its dissipative superpartner has triggered intriguing applications such as high-radiance single-mode microlaser arrays and mode division multiplexing. These previous experimental studies are based on lattice Hamiltonians, which can be factorized via matrix operation. Hence, they constructed systems composed of many coupled discrete elements corresponding to coupled waveguides or resonators. In contrast, the extended method of SUSY that can generate an infinite number of strictly isospectral potentials has remained experimentally unexplored since it requires an intrinsically different approach to realize arbitrary potentials. At the same time, its mathematical framework is ideal for the continuous Hamiltonian transformation to enable a distinct scenario for transformation optics. The research team reported the first experimental demonstration of continuous SUSY transformation by designing a novel GRIN metamaterial on a Si platform. The idea is to construct a metamaterial that can emulate arbitrary potentials to achieve advanced light control through transforming the optical media under supersymmetry. They utilized the synergy of supersymmetry and the metamaterial to design spatially varying dielectric permittivity. 
It constituted a two-dimensional map where arbitrary transformations are prescribed simultaneously to multiple optical states for routing, switching, and spatial mode shaping, while strictly maintaining their original propagation constants. Their result featured broadband continuous SUSY transformation optics. The interplay of supersymmetry and a metamaterial demonstrated in this study illuminated a novel path to fully utilizing a chip's spatial degrees of freedom for versatile photonic functionalities. The team's continuous SUSY transformation approach is scalable to a higher number of eigenstates and free parameters. It also applies to more complicated index distributions, creating an ideal platform for on-chip space-division multiplexing in information technologies. Additionally, further extending the SUSY transformation into higher dimensions may provide a design strategy to exploit the full potential of metamaterials in three-dimensional space.
10.1186/s43593-022-00023-1
Medicine
Scientists isolate protective proteins that influence outcomes for type 2 diabetes
Eirini Giannoudaki et al, Interleukin-36 cytokines alter the intestinal microbiome and can protect against obesity and metabolic dysfunction, Nature Communications (2019). DOI: 10.1038/s41467-019-11944-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-11944-w
https://medicalxpress.com/news/2019-09-scientists-isolate-proteins-outcomes-diabetes.html
Abstract Members of the interleukin-1 (IL-1) family are important mediators of obesity and metabolic disease and have been described to often play opposing roles. Here we report that the interleukin-36 (IL-36) subfamily can play a protective role against the development of disease. Elevated IL-36 cytokine expression is found in the serum of obese patients and negatively correlates with blood glucose levels among those presenting with type 2 diabetes. Mice lacking IL-36Ra, an IL-36 family signalling antagonist, develop less diet-induced weight gain, hyperglycemia and insulin resistance. These protective effects correlate with increased abundance of the metabolically protective bacteria Akkermansia muciniphila in the intestinal microbiome. IL-36 cytokines promote its outgrowth as well as increased colonic mucus secretion. These findings identify a protective role for IL-36 cytokines in obesity and metabolic disease, adding to the current understanding of the role the broader IL-1 family plays in regulating disease pathogenesis. Introduction Over the past 40 years, global levels of obesity have more than doubled. As obesity predisposes to the metabolic syndrome and has been definitively linked to coronary heart disease, stroke, type 2 diabetes and certain forms of cancer, this growing epidemic represents one of the most significant current global health challenges. In tandem with the emergence of this problem has been an increase in understanding the pathological mechanisms which link the obese state to the development of disease. Central to these mechanisms is the heightened state of systemic inflammation as a result of obesity. 
Since the seminal observations made by Hotamisligil et al., demonstrating the role of adipose tissue-derived TNF as an inhibitor of insulin signalling and first linking obesity-driven inflammation to type 2 diabetes 1 , it is now recognised that the nature of adipose tissue inflammation is a major contributor towards obesity-related diseases 2 , 3 , 4 . Further complexity is added through the instructive role of the gastrointestinal immune system in influencing the development of inflamed adipose tissue. It is now evident that changes in the homoeostatic barrier function of the gut can play a major role in the development of systemic inflammation in obesity 5 . In this context, it is perhaps unsurprising that the constituents of the intestinal microbiome are also emerging as an important factor in this regard. In healthy individuals, the gastrointestinal immune system plays an immunosurveillant role, in constant interaction with the intestinal microbiome which profoundly influences its activity. The role of the microbiota in the development of obesity-driven metabolic disease first emerged from studies on germ-free mice, which demonstrated reduced levels of experimental weight gain and improved glucose tolerance 6 , 7 . Moreover, the transfer of an ‘obese’ microbiome to germ-free mice can lead to significant increases in adiposity confirming the important instructive role in contributing to disease 8 . Similarly, changes in the composition of the intestinal microbiome have been associated with obesity and the development of metabolic disease in humans 9 , 10 . As well as contributing to obesity and disease, it is also becoming apparent that certain constituents of the microbiota can act opposingly to restrict adiposity. In recent times, it has been demonstrated that the presence of the bacteria Akkermansia muciniphila in the intestinal microbiome can act directly to suppress weight gain and impaired glucose tolerance in mice 11 . 
Many of these protective effects have now been recapitulated using a purified outer membrane protein, Amuc_1100, derived from A. muciniphila 12 . It is also noteworthy that several dietary interventions targeting obesity and glucose intolerance, and the glucose-lowering drug metformin, increase A. muciniphila abundance 11 , 13 , 14 , 15 . Significantly, follow-up studies have also demonstrated a clear association between the relative abundance of this bacterium in the microbiome and the metabolic health of obese patients 16 . Such observations suggest that the composition of the intestinal microbiome can be altered towards a more ‘healthy’ lean state, highlighting its potential as a therapeutic strategy, and placing the identification of the mechanistic pathways which can alter the composition of the microbiota as a particular area of interest. In this study, we identify the IL-36 family of cytokines as one such pathway. The IL-36 family of cytokines are a recently described subset of the larger IL-1 family which are emerging as important mediators of obesity-related metabolic disease 17 . The family consists of three separate agonistic ligands, designated IL-36α, IL-36β and IL-36γ, and a specific IL-36 receptor antagonist (IL-36Ra), all of which act through a specific IL-36 receptor 18 . Similar to the more extensively characterised ‘classical’ IL-1 cytokines, IL-1α and IL-1β, IL-36 cytokines are thought to act as important mediators of homoeostasis and inflammation, but in a more tissue-restricted manner. In this role, IL-36 cytokines are known to play a central role in orchestrating psoriatic inflammation in the skin and can also act as mediators of gastrointestinal inflammation and homoeostasis 19 , 20 , 21 , 22 , 23 . In contrast to these established roles of the IL-36 family, little information is available concerning how, and if, this cytokine family may influence obesity-induced systemic inflammation leading to metabolic syndrome. 
Related IL-1 family members have previously been studied in this context, revealing opposing functions 17 , 24 . For example, IL-1β has long been implicated in the pathogenesis of both type 2 diabetes and atherosclerosis, and Canakinumab (an anti-IL-1β-neutralising monoclonal antibody) is currently the focus of extensive clinical investigation for these indications 17 . In direct contrast, IL-18 and IL-33 have been demonstrated to play a protective role in animal models of obesity-driven metabolic syndrome, although the precise mechanisms through which this occurs have not been identified 17 , 24 . Moreover, there are currently no data to indicate that the microbiome may play a role in these protective effects. In this study, we demonstrate that IL-36γ expression is increased in the serum of clinically obese patients and these elevated expression levels are negatively correlated with both haemoglobin A1c (HbA1c) and fasting blood glucose (FBG) levels among patients with type 2 diabetes, indicating a protective role for these cytokines in countering metabolic dysfunction. These protective effects are mirrored in mice with a deficiency in the IL-36 receptor antagonist ( Il36rn−/− ), which exhibit reduced weight gain and metabolic dysfunction. Mechanistically, the protective effects occur in association with the relative outgrowth of the commensal bacterial strain A. muciniphila in the intestinal microbiome, which is facilitated through IL-36-mediated enhanced mucus secretion in the colon. Results Elevated IL-36 cytokine expression among obese patients In an effort to determine whether the IL-36 family of cytokines may play a role in the development of obesity and metabolic disease, we investigated serum expression levels of the IL-36 family among a cohort of adult patients presenting with clinical obesity (BMI > 30 kg/m²), with or without signs of diabetes, as determined by serum HbA1c levels, according to American Diabetes Association guidelines 25 . 
Expression of all IL-36 family ligands was detected among a small number of lean patients. Interestingly, levels of IL-36γ alone were found to be specifically and significantly elevated among obese patients when compared with healthy lean control subjects (Fig. 1c ). These elevated levels were observed among obese patients irrespective of whether they exhibited evidence of type 2 diabetes (i.e. HbA1c ≥ 48 mmol/mol), indicating that elevated IL-36γ expression is directly associated with the obese phenotype. In contrast, no such differences were observed in the expression levels of the other agonistic ligands, IL-36α and β (Fig. 1a, b ). Expression of the IL-36 receptor antagonist was low or not detected among all subjects examined (Fig. 1d ). Fig. 1 IL-36 cytokine expression levels in the serum of obese patients. a – d IL-36α, β, γ and IL-36Ra levels in the serum of lean individuals ( n = 37), obese non-diabetic patients (HbA1c < 48) ( n = 51) and obese with diabetes (HbA1c ≥ 48) ( n = 16), as measured by ELISA. e – h Correlation analysis of IL-36γ levels with HbA1c and FBG in the serum of diabetic obese patients ( e , f ) and non-diabetic obese patients ( g , h ). Spearman correlation coefficient ( r ) and the corresponding p value shown. For ( a – d ) data are shown as means ± SEM, statistical analysis by two-tailed Mann–Whitney test, ** p < 0.01, **** p < 0.0001. DM: diabetes mellitus. Source data are provided as a Source Data file Full size image To further determine what impact, if any, increased IL-36γ might play in the pathogenesis of obesity and metabolic disease, we examined whether elevated expression levels were correlated with established clinical and biochemical parameters of disease recorded for each patient subgroup. These parameters included sex, age, BMI, weight in kg, HbA1c and FBG levels, as well as serum lipid levels including cholesterol, triglycerides, LDL and HDL (Supplementary Table 1 ). 
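The correlation analysis reported in Fig. 1e–h uses the Spearman rank correlation. A minimal sketch of this analysis on synthetic values is shown below; the simulated IL-36γ and HbA1c numbers are placeholders with a built-in negative trend, not the patient data from the study, and serve only to illustrate how the coefficient $r$ and its $p$ value are obtained.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic illustration only -- NOT the study's patient data.
# Simulate serum IL-36gamma (pg/ml) and HbA1c (mmol/mol) for a small
# cohort of obese patients with diabetes (HbA1c >= 48 mmol/mol).
n = 16
hba1c = rng.uniform(48, 90, n)
il36g = 400 - 3 * hba1c + rng.normal(0, 20, n)   # negative trend by construction

rho, p = spearmanr(il36g, hba1c)                 # rank-based, robust to outliers
print(f"Spearman r = {rho:.2f}, p = {p:.4f}")
```

Being rank-based, Spearman's coefficient does not assume normally distributed cytokine levels, which is why it is the usual choice for skewed serum measurements such as these.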
Interestingly, this analysis revealed a strong negative correlation between blood glucose levels as determined by both HbA1c ( r = −0.74, p = 0.0015) and FBG ( r = −0.54, p = 0.03), and serum levels of IL-36γ, specifically among obese patients with type 2 diabetes (HbA1c ≥ 48 mmol/mol) (Fig. 1e, f ). No such correlation was found among obese patients who did not have diabetes, with HbA1c levels below the 48 mmol/mol cut-off (Fig. 1g, h ). These data demonstrate that elevated IL-36γ levels are associated with lower blood glucose levels among obese patients with diabetes and indicate that IL-36 cytokines may act to promote metabolic health in obesity. Reduced obesity and metabolic disease in Il36rn−/− mice To gain a greater understanding of how the IL-36 family of cytokines might impact the pathogenesis of obesity and metabolic disease, we examined mice deficient in the Il36rn gene, which encodes the IL-36 receptor antagonist 19 . Interestingly, under normal chow-feeding conditions, these mice exhibited progressively less weight gain compared with wild-type control mice (wt), which was evident by 6 months of age (Fig. 2a ), despite similar levels of food consumption (Fig. 2b ). This decreased weight gain was associated with a lower mass of both epididymal (EAT) and subcutaneous (SAT) white adipose tissue depots (Fig. 2c ). Furthermore, Il36rn−/− mice were found to have significantly improved glucose tolerance and lower insulin resistance at 10 months of age compared with their wt counterparts (Fig. 2d–g ), demonstrating that loss of the IL-36 receptor antagonist reduces both normal weight gain and the spontaneous glucose intolerance that develops in aged mice. Moreover, younger Il36rn−/− mice (2 months old), with comparable starting weights to their wild-type counterparts, exhibited significantly less weight gain and adipose tissue accumulation when placed on a high-fat diet (HFD) (60 kcal% fat) for 8–10 weeks (Fig. 3a, c ).
Again, this occurred in the absence of any alteration in daily food intake (Fig. 3b ). These mice also displayed significant protection from HFD-induced metabolic dysfunction, as determined by their fasting serum insulin levels, which were significantly lower in the Il36rn−/− mice (Fig. 3h ), and by glucose and insulin tolerance tests (Fig. 3d–g ). In addition, Il36rn−/− mice exhibited increased Akt phosphorylation in the liver, after insulin challenge, indicating enhanced insulin sensitivity in this tissue compared with wt mice (Fig. 3i, j ). Phosphorylation of the insulin receptor β (IR) also appeared to be slightly higher in the liver of the Il36rn−/− mice, albeit not to a significant extent (Fig. 3i, j ). No differences in insulin sensitivity were detected in either muscle or EAT under these conditions (Fig. 3i, k, l ). These data indicate that unrestricted IL-36 cytokine signalling, through deficiency of the IL-36 receptor antagonist, suppresses weight gain and the development of metabolic dysfunction in a diet-induced obesity mouse model. Fig. 2 Il36rn−/− mice are protected from spontaneous weight gain and metabolic dysfunction. a – g Male Il36rn−/− and wt control mice of the same age were kept on a normal chow diet up to 12 months of age ( n = 5 per group). Weights ( a ) and daily food intake per mouse ( b ) were monitored over time from 2 months of age. c EAT and SAT depot mass was measured at 12 months of age. d , e Intraperitoneal glucose tolerance test (GTT) performed at 9 months of age; glucose measurements over time after a glucose bolus injected i.p. ( d ) and area under curve (AUC) ( e ) shown. f , g Intraperitoneal insulin tolerance test (ITT) performed at 9 months of age; glucose measurements over time after an insulin bolus injected i.p. ( f ) and AUC ( g ) shown. Data represent means ± SEM.
Statistical analysis using two-tailed unpaired Student’s t-test, or RM two-way ANOVA with Bonferroni’s correction for multiple comparisons for ( a ), ( d ) and ( f ). ns p > 0.05, * p < 0.05, ** p < 0.01, *** p < 0.001. Source data are provided as a Source Data file Fig. 3 Il36rn−/− mice are protected from diet-induced obesity and insulin resistance. a – h Male Il36rn−/− and wt mice of the same age were fed a HFD (60 kcal% fat) for 8–10 weeks starting from 8 weeks of age. Weights ( a ) and daily food intake per mouse ( b ) were monitored and data shown are pooled from three independent experiments ( wt n = 15, Il36rn−/− n = 22). c EAT and SAT depot mass was measured after 10 weeks on HFD. d , e Intraperitoneal GTT on wt and Il36rn−/− mice after 8 weeks on HFD; glucose over time after i.p. glucose injection ( d ) and AUC ( e ) shown. f , g Intraperitoneal ITT on wt and Il36rn−/− mice after 9 weeks on HFD; glucose over time after an insulin bolus injected i.p. ( f ) and AUC ( g ) shown. h Fasting blood insulin levels for wt and Il36rn−/− mice after 10 weeks on HFD. i Relative protein expression levels of phospho-Akt (P-Akt)/total Akt (T-Akt) and phospho-insulin receptor β (P-IR)/total insulin receptor β (T-IR) in the liver, skeletal muscle and EAT of wt ( n = 7) and Il36rn−/− ( n = 4) mice after insulin injection, as indicated by densitometry analysis of western blot data (in Source Data file). j – l Protein expression of P-IR, T-IR, P-Akt and T-Akt in the liver ( j ), skeletal muscle ( k ) and EAT ( l ) of representative wt ( n = 3) and Il36rn−/− ( n = 3) mice after insulin injection. Data represent means ± SEM. Statistical analysis using two-tailed unpaired Student’s t-test for ( c ), ( g ) and ( i ), two-tailed Mann–Whitney test for ( b ), ( h ) and ( e ) and RM two-way ANOVA for ( a ), ( d ) and ( f ). ns p > 0.05, * p < 0.05, ** p < 0.01, *** p < 0.001.
Source data are provided as a Source Data file In contrast, mice with a global deficiency in Il1rl2 , the gene encoding the IL-36 receptor, exhibited normal weight gain and comparable levels of glucose and insulin tolerance to those observed in wt mice after 8–10 weeks’ exposure to HFD (Supplementary Fig. 1 ). Together, these data suggest that while constitutive expression of the IL-36 receptor does not play a role in HFD-induced adiposity in mice, hyperactive IL-36 family activity can protect against obesity and metabolic disease. The intestinal microbiome is altered in Il36rn−/− mice As we observed that IL-36 receptor antagonist deficiency improves glucose tolerance in mice, we next examined whether the adipose tissue inflammation induced upon exposure to HFD was altered in this setting (Supplementary Fig. 2 ). Although the absolute numbers of adipose tissue-infiltrating CD45 + cells were reduced in Il36rn−/− mice after 10 weeks’ exposure to HFD (Supplementary Fig. 2a ), as were the numbers of F4/80 + CD11b + total macrophages (Supplementary Fig. 2b ) and of the F4/80 + CD11b + CD11c + CD301 − M1 macrophage subset (Supplementary Fig. 2c ), these differences were not evident when analysed per gram of adipose tissue, indicating that Il36rn deficiency did not quantitatively affect the degree of CD45 + cell infiltration observed (Supplementary Fig. 2e ). In addition, no differences in the relative numbers of infiltrating pathogenic or homoeostatic macrophage subsets were observed (Supplementary Fig. 2f–h ), indicating that Il36rn deficiency does not significantly alter the inflammatory profile of adipose tissue induced by exposure to HFD. The IL-36 cytokine family has recently been shown to act at the intestinal mucosa, where it can play opposing roles, promoting acute inflammation as well as tissue homoeostasis and resolution 9 , 20 , 22 , 26 .
Interestingly, we found that gene expression levels of Il36a and Il36b were constitutively elevated in the colons of Il36rn−/− mice in the steady state. In addition, activation of the MAPK (p42/p44) signalling pathway was found to be elevated in the colon, indicating enhanced IL-36 receptor signalling in the colonic tissue microenvironment in Il36rn−/− mice (Fig. 4a, b ). In contrast, phosphorylation of the p65 subunit of NF-κB was found to be similarly high in both wt and Il36rn−/− colon tissues, suggesting constitutive activity of this pathway in the steady-state colon (Fig. 4b ). The colonic gene expression levels of several other inflammatory mediators implicated in regulating intestinal inflammation and/or obesity and metabolic disease, including Il1b , Tnfa , Il6 , Il10 and Ifng , were similar between wt and Il36rn−/− mice (Supplementary Fig. 3a ), indicating that loss of IL-36Ra expression did not lead to a broad non-specific state of inflammation in the gut. Furthermore, IL-6 and TNFα were also found to be expressed at similar levels in the serum of wt and Il36rn−/− mice after HFD, indicating that loss of IL-36Ra expression did not significantly alter systemic inflammation and contribute to the effects on weight gain and metabolism described (Supplementary Fig. 3b, c ). Fig. 4 Altered composition of intestinal microbiome in Il36rn−/− mice dependent on IL-36 activity. a Relative gene expression of IL-36 cytokines and IL-36 receptor in the colon of wt and Il36rn−/− mice ( n = 6 per group). b Protein expression of phospho-p44/p42 (Thr202/Tyr204, P-p44/P-p42) with total p44/p42, and phospho-NF-κB p65 (Ser536, P-NF-κB p65) with total NF-κB p65, in the colons of wt ( n = 3) and Il36rn−/− ( n = 4) mice. c Weighted ordination by principal coordinate analysis (PCoA) of beta-diversity of intestinal microbiome composition in wt and Il36rn−/− mice ( n = 5 per group).
d Proportional relative abundance of the most abundant phyla in the microbiome of wt and Il36rn−/− mice. e Percent relative abundance means ± SD of the eight most abundant phyla, Kruskal–Wallis rank sum test to determine significance. f Percent relative abundance of Akkermansia muciniphila relative to total bacteria in the microbiome of 8–10-week-old wt and Il36rn−/− mice, as determined by qPCR using specific primers, before (day −1) and after (days 6, 9) three i.p. injections (days 1, 2, 3) of rIL-36Ra ( n = 9 and 5) or PBS control ( n = 9). Data show means ± SEM unless otherwise described. Statistical analysis using two-tailed Mann–Whitney test for ( a ) and ( f ), or as described. ns p > 0.05, * p < 0.05, ** p < 0.01. Source data are provided as a Source Data file Prompted by these observations and the emerging role of the intestinal microbiome in influencing the development of obesity 5 , 9 , we investigated whether Il36rn deficiency resulted in any alterations in the constituents of the intestinal microbiome through 16S V4 rRNA gene sequencing. This analysis revealed significant beta-diversity differences in the overall composition of the intestinal microbiome between wt and Il36rn−/− mice (PERMANOVA, p = 0.009), with weighted ordination analysis based on Bray–Curtis dissimilarity values showing clear separation according to genotype (Fig. 4c ). At the phylum level, the most striking difference observed was a significant increase in the abundance of Verrucomicrobia ( p = 0.01, Kruskal–Wallis rank sum test), which was found to be >1600-fold more abundant in Il36rn−/− mice compared with wt (15.2 ± 4.65% vs. 0.0091 ± 0.0114% relative abundance, respectively) (Fig. 4d, e ). Other phyla with significant, albeit minor, differences in abundance were Cyanobacteria (enriched in knockout), Tenericutes and Deferribacteres (enriched in wt) (Fig. 4d, e ).
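The >1600-fold enrichment of Verrucomicrobia quoted above follows directly from the reported mean relative abundances; as a quick arithmetic check:

```python
# Mean percent relative abundance of Verrucomicrobia reported in the text
ko_abundance = 15.2     # Il36rn-/- mice
wt_abundance = 0.0091   # wild-type mice

fold_enrichment = ko_abundance / wt_abundance
print(round(fold_enrichment))  # ~1670-fold, consistent with ">1600-fold"
```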
A deeper analysis of differentially detected operational taxonomic units (OTUs) at the genus/strain level identified Akkermansia muciniphila , a member of the Verrucomicrobia phylum, as the most significantly enriched microbial strain among those identified in the intestinal microbiome of Il36rn−/− mice ( p = 1.08 × 10−58 ) (Table 1 ). Recent evidence indicates that A. muciniphila can play an important protective role against obesity and metabolic dysfunction in mice and may also play a similar role in humans 11 , 12 , 16 . The relative outgrowth of this strain was confirmed through qPCR of stool samples from an additional cohort of mice, using specific primers for A. muciniphila and universal bacterial primers. Importantly, this relative outgrowth was found to be specifically dependent on Il36rn deficiency and was lost upon treatment of Il36rn−/− mice with recombinant IL-36Ra (Fig. 4f ). Three intraperitoneal injections of IL-36Ra on days 1, 2 and 3 led to gradually decreasing abundance levels of A. muciniphila , similar to the levels observed in wt mice, as detected by qPCR on days 6 and 9 (Fig. 4f ). These data demonstrate that loss of expression of the IL-36 receptor antagonist results in enhanced IL-36 cytokine gene expression in the colon and promotes the outgrowth of A. muciniphila in the intestinal microbiome. Table 1 OTUs identified at strain level in the microbiome of wt and Il36rn−/− mice Increased mucus expression in the colons of Il36rn−/− mice We next sought to investigate what changes may be evident in the colons of Il36rn−/− mice, which might influence the composition of the intestinal microbiome, and in particular the relative outgrowth of A. muciniphila . A. muciniphila is a Gram-negative anaerobe which colonises the mucus layer of the gastrointestinal tract and degrades mucins as its predominant energy source 27 .
Histological examination of the colon of Il36rn−/− mice through haematoxylin and eosin (H&E) and Alcian Blue/Periodic acid–Schiff (AB/PAS) staining revealed a significant increase in the number of mucus-secreting goblet cells in the colon (Fig. 5a, c ). This increase was also accompanied by a thickening of the outer mucus layer and significantly elevated expression of the Muc2 gene (Fig. 5b, d, e ), demonstrating that Il36rn deficiency leads to increased production of mucus in the colon, providing an abundant source of nutrients to support the relative outgrowth of A. muciniphila . Consistent with these changes, we also observed improved intestinal barrier function in Il36rn−/− mice after HFD, as demonstrated by decreased gut permeability to FITC-dextran (Fig. 5f ). Fig. 5 Increased colon mucus levels and reduced intestinal permeability in Il36rn−/− mice. a Representative micrographs of distal colon sections from wt and Il36rn−/− mice stained with H&E and AB/PAS. Scale bar = 1 μm. b Representative micrographs of colon sections stained with AB/PAS with mucus layer (ML) indicated by red arrow. Scale bar = 20 μm. c , d Quantitation of average goblet cells per crypt ( c ) and mucus layer thickness ( d ) in distal colon sections ( n = 6 per group, 10 crypts measured per sample). Data shown representative of three independent experiments. e Relative expression of the Muc2 gene in the distal colon of wt and Il36rn−/− mice ( n = 6 per group). f Fluorescein isothiocyanate (FITC)-dextran concentration in the serum of wt and Il36rn−/− mice on HFD for 12 weeks, 4 h after oral gavage ( n = 4 and 6 per group). Data show means ± SEM. Statistical analysis using two-tailed unpaired Student’s t-test. * p < 0.05, ** p < 0.01.
Source data are provided as a Source Data file It has recently been reported that IL-36 can act to drive the expression of IL-9 by CD4 + T cells in the colons of mice 21 , and IL-9 has previously been reported to play a central role in promoting goblet cell hyperplasia in the gut 28 . Interestingly, levels of IL-9 secretion were found to be significantly increased in colonic tissue from Il36rn−/− mice compared with wild-type controls (Supplementary Fig. 4a ). Secretion levels of IL-13 and IL-22, which have also been implicated in driving goblet cell differentiation 29 , 30 , were not found to be significantly altered (Supplementary Fig. 4b, c ). To explore the mechanism through which enhanced IL-36 activity leads to increased mucus secretion in the colon, we investigated whether administration of rIL-36Ra, which inhibited the relative outgrowth of A. muciniphila (Fig. 4f ), could also reverse these changes. Interestingly, Il36rn−/− mice treated with rIL-36Ra had decreased goblet cell numbers that were comparable with those of the wild-type mice (Fig. 6a, b ). In contrast, treatment with an anti-IL-9-neutralising antibody did not alter goblet cell numbers in Il36rn−/− mice, suggesting that elevated IL-9 expression does not play a significant role (Supplementary Fig. 4d, e ). Furthermore, ex vivo stimulation of wild-type colon explants with IL-36α for 4 h led to a significant increase in Muc2 gene expression, suggesting that IL-36 cytokines can act directly to stimulate increased mucus production (Fig. 6c ). Together, these data confirm that IL-36 cytokines can alter the colonic tissue microenvironment to promote the relative outgrowth of A. muciniphila . Fig. 6 IL-36 signalling increases goblet cell numbers and Muc2 gene expression.
a , b Representative micrographs of AB/PAS-stained sections ( a ) and average goblet cells per crypt quantitation ( b ) in the colon of wt and Il36rn−/− mice after (day 6) three consecutive injections (days 1, 2, 3) of rIL-36Ra or PBS control ( n = 4 and 5 per group). Scale bar = 1 μm. c Relative Muc2 gene expression in wt colon explants either untreated or stimulated for 4 h with recombinant IL-36α (200 ng/ml). Data show means ± SEM. Statistical analysis using two-tailed unpaired Student’s t-test. * p < 0.05. Source data are provided as a Source Data file The microbiome influences reduced obesity in Il36rn−/− mice In an effort to examine the influence of the altered intestinal microbiome described above on the reduced weight gain and metabolic dysfunction observed in the setting of Il36rn deficiency, we sought to equilibrate the gut microbiome through prolonged (8 weeks) cohousing of Il36rn−/− and wt mice in the same cage, to investigate whether this altered the response to HFD exposure. As previously reported in similar studies, cohousing of mice resulted in a relative equalisation of the constituents of the intestinal microbiome 31 , 32 , as determined by analysis of beta-diversity when compared with separately housed mice (PERMANOVA, p = 0.001). Weighted ordination analysis demonstrated similar positional clustering for all cohoused mice, in contrast to separately housed mice, which clustered according to genotype (Fig. 7a ). Critically, this equalisation coincided with a complete loss of the relative outgrowth of the phylum Verrucomicrobia (0.149 ± 0.0367% relative abundance for Il36rn−/− , 0.183 ± 0.133% for wt), and consequently A. muciniphila , as demonstrated through relative OTU analysis and confirmed by qPCR for A. muciniphila detection (Fig. 7b–d ).
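The beta-diversity comparisons above rest on the Bray–Curtis dissimilarity between per-sample abundance profiles; a minimal sketch using toy OTU counts (not the study's data):

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance profiles.

    Profiles are sequences of counts over the same ordered set of taxa:
    0 means identical composition, 1 means no shared taxa.
    """
    if len(a) != len(b):
        raise ValueError("profiles must cover the same taxa")
    shared = sum(min(x, y) for x, y in zip(a, b))  # abundance common to both
    total = sum(a) + sum(b)
    return 1.0 - 2.0 * shared / total
```

Computing this for every pair of samples yields the dissimilarity matrix that feeds the PCoA ordination and the PERMANOVA test described in the Methods.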
Cohoused Il36rn−/− mice exhibited a significant enrichment of Bacteroidetes , and decreased abundance of Actinobacteria when compared with their cohoused wt counterparts, but none of the previously differentially abundant phyla were found to be altered in cohoused mice (Fig. 7b, c ). Fig. 7 Microbiome analysis and obesity phenotype of cohoused wt and Il36rn−/− mice. a Weighted ordination analysis of beta-diversity of intestinal microbiome composition in wt and Il36rn−/− mice cohoused for 8 weeks or housed separately ( n = 5 per group). b Proportional relative abundance of the most abundant phyla in the microbiome of cohoused wt and Il36rn−/− mice. c Percent relative abundance means ± SD of the eight most abundant phyla (including unclassified phylum), Kruskal–Wallis rank sum test to determine significance. d Percent relative abundance of A. muciniphila relative to total bacteria before and after cohousing wt and Il36rn−/− mice for 4 and 8 weeks, as determined by qPCR ( n = 4 per group). e – h Age-matched male wt and Il36rn−/− mice were cohoused from weaning in the same cage for 8 weeks ( n = 4 per group). They were then given HFD for 8–10 weeks, and weights were monitored weekly ( e ). f EAT and SAT depot mass after 10 weeks on HFD. g , h GTT on cohoused mice after 8 weeks on HFD, glucose over time ( g ) and AUC ( h ) shown. i , j ITT on cohoused mice after 9 weeks on HFD, glucose over time ( i ) and inverted AUC ( j ) shown. Data show means ± SEM unless otherwise described. Statistical analysis using two-tailed Mann–Whitney test for d , two-tailed unpaired Student’s t-test for ( f ), ( h ) and ( j ), RM two-way ANOVA for ( e ), ( g ) and ( i ) or as described. ns p > 0.05, * p < 0.05, ** p < 0.01.
Source data are provided as a Source Data file Strikingly, cohoused Il36rn−/− mice now exhibited similar levels of weight gain, adipose tissue mass and glucose and insulin intolerance when compared with their wild-type counterparts upon exposure to HFD for 8–10 weeks (Fig. 7e–j ). These data demonstrate a clear association between the altered intestinal microbiome observed in Il36rn−/− mice and their protection from obesity and metabolic disease. Discussion Cytokines of the interleukin-1 family are emerging as among the most important mediators of obesity and metabolic health, with diverse and often opposing roles in either driving inflammatory mechanisms associated with disease pathogenesis, e.g. IL-1β, or inhibiting inflammation and/or caloric intake to promote the maintenance of metabolic health, e.g. IL-33, IL-18 and IL-37 17 , 24 . Although the precise mechanisms through which members of the IL-1 family exert these dichotomous effects are incompletely understood, to date all members except the IL-36 subfamily have been investigated for their roles in disease. Similar to related IL-1 family members, expression of IL-36 cytokines, specifically IL-36γ, is elevated in the serum of obese patients. Importantly, these elevated levels are negatively correlated with primary indicators of metabolic dysfunction, i.e. HbA1c and FBG levels, among obese patients presenting with diabetes, indicating that elevated IL-36 cytokines may play a protective role in reducing blood sugar levels among obese patients who have developed consequent metabolic disease. In an effort to explore mechanistically how the IL-36 family might regulate obesity and metabolic disease, we investigated what role, if any, these cytokines might play in disease pathogenesis in mice with a deficiency in the IL-36 receptor antagonist ( Il36rn−/− ) and the IL-36 receptor ( Il1rl2−/− ).
This analysis revealed that constitutively hyperactive IL-36 cytokine activity ( Il36rn−/− ) led to a significant reduction in weight gain and adiposity, with improved glucose and insulin tolerance, without altering food intake. These data demonstrate that elevated IL-36 family signalling plays a broadly similar protective role to related IL-1 family members IL-18, IL-33 and IL-37 33 , 34 , 35 , 36 . In contrast to what has been reported under settings of IL-18R or IL-33R deficiency 33 , 37 , we observed that mice with a deficiency of the IL-36 receptor ( Il1rl2−/− ) did not display any changes in adiposity development after HFD, demonstrating that constitutive IL-36R expression does not influence disease progression in this model and suggesting that protection from disease is only evident under conditions of elevated IL-36 family activity. In this regard, it is interesting to note that elevated levels of IL-36α, β and γ are found in the serum of a number of lean as well as obese individuals (Fig. 1 ), and it would be of interest to investigate whether these patients are less susceptible to the development of obesity and metabolic dysfunction. Related IL-1 family members have previously been shown to influence the development of insulin resistance, at least in part, through altering the nature of adipose tissue inflammation 17 . However, we did not observe any significant alterations in immune cell infiltration in the adipose tissues of Il36rn−/− mice, suggesting that this cytokine family may regulate obesity development at a different tissue site. IL-36 cytokines are well established as mediators of inflammation in psoriatic skin, and more recently, as important regulators of homoeostasis at the gut mucosa 18 . 
Prompted by these discoveries, and the emerging role of the intestinal microbiome in regulating metabolic health, we investigated whether IL-36 cytokines can alter the composition of the intestinal microbiome, and identified a significant outgrowth of the mucin-degrading bacterial strain Akkermansia muciniphila in Il36rn−/− mice. A. muciniphila has recently emerged as an important commensal which can protect against obesity and metabolic disease in mice and has been associated with improved health among obese patients 11 , 12 , 16 . Although the precise host mechanisms through which the bacterium exerts these effects are yet to be fully identified, our mouse data demonstrate that its abundance is regulated by IL-36 family cytokines and indicate that it plays a central role in mediating protection from disease in Il36rn−/− mice. While alterations in the intestinal microbiome have previously been described in studies using mice deficient in related IL-1 family members, the significance of these findings in relation to obesity and metabolic health has not been reported. Indeed, two recent studies have found that IL-33 and IL-18, which suppress obesity and metabolic dysfunction, can in fact act to limit A. muciniphila abundance in mice under certain settings 32 , 38 . This suggests that IL-36 cytokines act in a mechanistically distinct fashion to protect from disease. In line with this observation, while IL-18 has been reported to act to suppress food intake in mice 33 , no such changes were observed in Il36rn−/− mice in this study. It is also intriguing to note that IL-18 has been reported to suppress the differentiation of mucus-secreting goblet cells 31 , while Il36rn deficiency leads to increased goblet cell numbers as well as increased mucus expression and likely creates an environmental niche to support A. muciniphila outgrowth 17 , 27 , 39 . 
Similarly, numerous studies have uncovered prominent roles for both IL-18 and IL-33 in regulating the homoeostasis of adipose tissue-resident immune cells, while no such changes were found in Il36rn−/− mice in this study 17 . While demonstrating novel protective effects against disease, this study does not rule out further, as yet unidentified, mechanistic roles for IL-36 cytokines in mediating these effects. In this regard, our observation that higher IL-36γ levels are inversely associated with blood glucose levels among diabetic patients is of particular significance, and it would be of interest to determine whether IL-36 cytokines can act specifically to regulate glucose secretion/uptake among obese patients. Similarly, it has previously been reported that IL-36 cytokines can act to suppress peroxisome proliferator-activated receptor γ and perhaps restrict adipocyte differentiation in vitro 40 . A further important consideration is the fact that several of the obese diabetic patients included in our study were under treatment with metformin, which has previously been found to alter the composition of the intestinal microbiome in diabetic patients and specifically to increase the abundance of A. muciniphila in the intestines of mice 14 , 15 . Therefore, a deeper investigation of the effects of IL-36 activity in obese diabetic patients is warranted. IL-36 cytokines have been described as playing dichotomous pro-inflammatory and pro-resolving roles across several tissue sites, including the gut 18 . In this regard, it is possible that a constitutive basal level of inflammation in Il36rn−/− mice may play a confounding role in the phenotype described. However, it is noteworthy that we were unable to detect any significant changes in expression of inflammatory mediators implicated in obesity-dependent metabolic disease, either in colonic tissue in the steady state, or systemically in the serum after HFD exposure (Supplementary Fig. 3 ).
In summary, our data provide novel insight into how IL-36 cytokines can act to regulate the pathogenesis of obesity and metabolic disease and demonstrate for the first time their ability to direct the composition of the intestinal microbiome towards a more metabolically healthy state. Furthermore, these data provide important new advances in understanding the dichotomous nature of the broader IL-1 family as critical regulators of inflammatory disease and identify the role of the intestinal microbiome as an important regulator of these effects. Methods Human subjects Serum samples were obtained from 104 adult donors, of whom 67 were classified as obese having a body mass index of ≥30 kg/m 2 . A clinical assessment of all subjects was undertaken upon enrolment, and parameters including sex, age, BMI, weight in kg, HbA1c, FBG, cholesterol, triglycerides, LDL and HDL, as well as current medication use was recorded (Supplementary Table 1 ). Levels of human IL-36 alpha, IL-36 beta, IL-36 gamma and IL-36Ra were measured in the sera of patients using DuoSet ELISA kits (R&D Systems, Minneapolis, MN) according to the manufacturer’s instructions, in a blinded fashion. For further analysis, obese subjects were classified as type 2 diabetic using HbA1c ≥48 mmol/mol as per WHO and ADA guidelines 25 or non-diabetic with HbA1c <48 mmol/mol. Patients with controlled diabetes, i.e. HbA1c <48 mmol/mol, undergoing current antidiabetic therapy were excluded from the study. Ethical approval for the human study was obtained from the Ethics Committees at St. Vincent’s University Hospital Dublin, and all patients and control subjects gave written, informed consent. Mice Wild-type (wt) and Il36rn−/− mice on C57BL/6 background were previously described 19 and bred in-house. Il36rn−/− mice were rederived from heterozygotes in our facility. Il1rl2−/− mice on C57BL/6 background obtained from Amgen under Material Transfer Agreement were described previously 22 . 
Male mice were fed an irradiated normal chow diet and weights were measured monthly from 8 weeks of age; food and water intake was measured weekly. For diet-induced obesity experiments, three independent cohorts of male mice were fed a HFD (60 kcal% fat, D12492i, Research Diets Inc.) for 8–12 weeks starting from 8 to 10 weeks of age, and weights and food intake were measured weekly. For cohousing experiments, wt and Il36rn−/− mice were housed in the same cage at a 1:1 ratio from weaning for 8 weeks, before being given HFD for 8–10 weeks, with their weights measured weekly. To determine specificity of IL-36Ra effects on the microbiome and goblet cells, wt and Il36rn−/− mice were injected i.p. with recombinant IL-36Ra (R&D Systems) (1.5 μg/mouse) on 3 consecutive days (days 1, 2, 3), and colon tissue was collected in formalin on day 6 for histology, or faecal pellets were collected for DNA extraction and bacterial detection using qPCR on days −1, 6 and 9. For IL-9 neutralisation experiments, wt and Il36rn−/− mice ( n = 6–8 per group) were administered either 100 μg/mouse of anti-mouse IL-9 monoclonal antibody (InVivoMAb, clone 9C1, BioXCell) or isotype control (InVivoMab mouse IgG2a isotype control, clone C1.18.4, BioXCell) by i.p. injection every other day for 5 days, and colons were subsequently harvested for goblet cell enumeration. All animal experiments were performed with ethical approval by the TCD Animal Research Ethics Committee and under licence from the Irish Health Products Regulatory Authority (project authorization no: AE19136/PO77). Glucose and insulin tolerance tests For the glucose tolerance test, mice were injected i.p. with 2 g/kg glucose (Sigma) after overnight fasting. For the insulin tolerance test, they were injected i.p. with 1 U/kg insulin (recombinant human, Sigma) after overnight fasting.
Blood glucose was measured by submandibular bleeding at defined time intervals, using a handheld blood glucose monitor (On Call Vivid, ACON Laboratories or FreeStyle Lite, Abbott Laboratories). RNA extraction and real-time quantitative RT-PCR Colon samples from wt and Il36rn−/− mice were harvested and stored in RNAlater (Sigma) at −20 °C. Total RNA was extracted using the Isolate II RNA Mini Kit (Bioline, London, UK) after tissue disruption using a BeadBug homogenizer and 1.5 mm zirconium beads (Benchmark Scientific, Inc.). Reverse transcription was performed using the High-Capacity cDNA Kit (Applied Biosystems, Foster City, CA) following the manufacturer’s instructions, with 100 ng of total RNA per sample. Real-time PCR for the indicated transcripts was performed in triplicate using specific TaqMan Gene Expression Assays (Supplementary Table 2 ) and TaqMan Fast Universal PCR Master Mix in a 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA). 18S ribosomal RNA was used for normalisation, and relative expression levels were calculated using the ΔΔCt method. 16S V4 rRNA gene sequencing for microbiome profiling Intestinal microbiome profiling analysis was performed by Second Genome Inc. (CA, USA). DNA isolation from 10-week-old mouse faecal material was performed with the MoBio PowerMag® Microbiome kit (Carlsbad, CA) according to the manufacturer’s guidelines and optimised for high-throughput processing. All samples were quantified via the Qubit® Quant-iT dsDNA High Sensitivity Kit (Invitrogen, Life Technologies, Grand Island, NY) to ensure that they met the minimum concentration and mass of DNA. To enrich the samples for the bacterial 16S V4 rDNA region, DNA was amplified utilising fusion primers designed against the surrounding conserved regions, which are tailed with sequences to incorporate Illumina (San Diego, CA) adapters and indexing barcodes. Each sample was PCR amplified with two differently barcoded V4 fusion primers.
Samples that met the post-PCR quantification minimum were advanced for pooling and sequencing. For each sample, amplified products were concentrated using a solid-phase reversible immobilisation method for the purification of PCR products and quantified by qPCR. A pool containing the 16S V4-enriched, amplified, barcoded samples was loaded into a MiSeq® reagent cartridge, and then onto the instrument along with the flow cell. After cluster formation on the MiSeq instrument, the amplicons were sequenced for 250 cycles with custom primers designed for paired-end sequencing. Second Genome's analysis software package was used for statistical analysis. Sequenced paired-end reads were merged using USEARCH and the resulting sequences were compared with an in-house strain database using USEARCH (usearch_global). All sequences hitting a unique strain with an identity ≥99% were assigned a strain Operational Taxonomic Unit (OTU). To ensure specificity of the strain hits, a difference of ≥0.25% between the identity of the best hit and the second-best hit was required (e.g. 99.75% vs. 99.5%). For each strain OTU, one of the matching reads was selected as representative and all sequences were mapped by USEARCH (usearch_global) against the strain OTU representatives to calculate strain abundances. The remaining non-strain sequences were quality filtered and dereplicated with USEARCH. The resulting unique sequences were then clustered at 97% by UPARSE (de novo OTU clustering) and a representative consensus sequence per de novo OTU was determined. The UPARSE clustering algorithm includes chimera filtering and discards likely chimeric OTUs. All non-strain sequences that passed the quality filtering were mapped to the representative consensus sequences to generate an abundance table for de novo OTUs.
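The strain-assignment rule described above (best hit ≥99% identity, beating the runner-up by ≥0.25 percentage points) can be expressed compactly. This is a minimal sketch of the decision logic only, not Second Genome's actual pipeline code:

```python
def assign_strain(hits):
    """Assign a strain OTU from alignment hits, or None if ambiguous.

    hits: list of (strain_name, percent_identity) tuples.
    Rules from the text: best hit must be >=99% identical, and must
    exceed the second-best hit by >=0.25 points (e.g. 99.75 vs 99.5).
    """
    hits = sorted(hits, key=lambda h: h[1], reverse=True)
    best_name, best_id = hits[0]
    if best_id < 99.0:
        return None                        # not close enough to any strain
    if len(hits) > 1 and best_id - hits[1][1] < 0.25:
        return None                        # ambiguous between two strains
    return best_name

print(assign_strain([("A. muciniphila", 99.75), ("other strain", 99.5)]))
```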
Representative OTU sequences were assigned taxonomic classification via mothur's Bayesian classifier, trained against the Greengenes reference database of 16S rRNA gene sequences clustered at 99%. For beta-diversity analysis, abundance-weighted pairwise sample differences were calculated using the Bray–Curtis dissimilarity. Principal coordinate analysis (PCoA) was used to plot a two-dimensional ordination. PCoA uses the sample-to-sample dissimilarity values (beta-diversity) to position the points relative to each other by maximising the linear correlation between the dissimilarity values and the plot distances. Permutational analysis of variance (PERMANOVA) was used for whole-microbiome significance testing of beta-diversity differences. For taxon significance testing, univariate differential abundance of OTUs was tested using a negative binomial noise model for the overdispersion and Poisson process intrinsic to these data, as implemented in the DESeq2 package 41 and described for microbiome applications 42 ; this model accounts for both technical and biological variability between experimental conditions. DESeq2 was run under default settings and q-values were calculated with the Benjamini–Hochberg procedure to correct p-values, controlling the false discovery rate.

Bacterial DNA extraction and quantitative PCR

DNA was extracted from mouse faecal material after homogenisation with 1.5 mm zirconium beads using a BeadBug homogenizer (Benchmark Scientific, Inc.). DNA isolation was performed using the QIAamp Fast DNA Stool Mini Kit (QIAGEN) following the manufacturer's protocol. An amount of 10–20 ng of DNA was then used in qPCR reactions for A. muciniphila with specific primers (AM1: CAGCACGTGAAGGTGGGGAC, AM2: CCTTGCGGTTGGCTTCAGAT), and for total bacteria (UniF340: ACTCCTACGGGAGGCAGCAGT, UniR514: ATTACCGCGGCTGCTGGC) for normalisation. qPCR was performed in a 7900HT Fast Real-Time PCR system using Fast SYBR Green Master Mix (Applied Biosystems, Foster City, CA).
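The Benjamini–Hochberg correction used above for the DESeq2 p-values is a short, standard procedure: sort p-values, scale each by m/rank, then enforce monotonicity from the largest rank downward. A self-contained sketch (in practice DESeq2 does this internally):

```python
def benjamini_hochberg(pvals):
    """Benjamini–Hochberg adjusted p-values (q-values), original order preserved.

    q_i = min over ranks >= rank(i) of (p * m / rank), capped at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    q = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # largest rank first
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.5]))
```

Discoveries are then the OTUs whose q-value falls below the chosen false-discovery-rate threshold.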
Histology and staining

Distal colon samples were fixed in 10% neutral buffered formalin (Medical Supply, Mulhuddart, Dublin), or in Carnoy's solution for mucus layer analysis, and embedded in paraffin. Blocks were sectioned on a microtome and 5-μm-thick sections were mounted on Superfrost Plus adhesion slides (Thermo Scientific, Braunschweig, Germany). Slides were stained with H&E and AB/PAS for histological assessment and to enumerate goblet cells per crypt and measure mucus layer thickness in the tissue.

Cytokines and insulin ELISA assays

Colon explants were cultured for 24 h (37 °C, 5% CO2) to measure secretion of the cytokines IL-9, IL-13 and IL-22 in the supernatants using mouse uncoated ELISA kits (Invitrogen, Ready-SET-Go! ELISA kits, eBioscience), or for 4 h in the presence of 200 ng/ml IL-36α to measure Muc2 mRNA expression. IL-6 and TNF-α were measured in the serum of mice after HFD using mouse uncoated ELISA kits (Invitrogen, Ready-SET-Go! ELISA kits, eBioscience). Levels of fasting plasma insulin were measured using the Ultra Sensitive Mouse Insulin ELISA Kit (Crystal Chem) according to the manufacturer's instructions.

Flow cytometry

Adipose tissue was digested with 1 mg/ml collagenase D and passed through a cell strainer, and red blood cells were lysed. Stromal vascular cells were then stained for flow cytometric analysis using fluorophore-conjugated antibodies specific for mouse CD45 (30-F11), F4/80 (BM8), CD11b (M1/70), CD11c (N418) (eBioscience, Invitrogen) and CD301 (LOM-14) (BioLegend), and Aqua Live/Dead stain (Invitrogen) to identify the live-cell population. An LSR Fortessa instrument (BD Biosciences) was used and data were analysed using FlowJo software (TreeStar). Cell enumeration was performed using CountBright Absolute Counting Beads for flow cytometry (Invitrogen).
Western blotting

For insulin sensitivity assays, protein lysates were prepared from liver, leg skeletal muscle and epididymal white adipose tissue harvested from mice fed a HFD for 8 weeks, 10 min after intraperitoneal challenge with 1.5 U/kg insulin (recombinant human, Sigma), using the RIPA Lysis Buffer System (Santa Cruz). For analysis of signalling pathways downstream of IL-36 in the colon, tissues were harvested from wt and Il36rn−/− mice and lysed in RIPA buffer. Protein concentration was determined with the Bicinchoninic Acid Kit for Protein Determination (Sigma); lysates were diluted with NuPAGE LDS Sample Buffer and Reducing Agent (ThermoFisher Scientific) and heated for 10 min at 70 °C. Samples were loaded on precast NuPAGE Novex 4–12% Bis-Tris gradient gels (Invitrogen) or RunBlue Bis-Tris 4–12% gels (Expedeon) and SDS-PAGE was performed in an XCell SureLock Mini-Cell Electrophoresis system (ThermoFisher Scientific) for 120 min at 100 V. Separated proteins were transferred to Invitrolon polyvinylidene fluoride (PVDF) membranes (ThermoFisher Scientific) for 1 h at 30 V, followed by blocking with 5% BSA or skimmed milk and incubation with the appropriate antibodies. Primary antibodies used were Phospho-NF-κB p65 (Ser536) (93H1) Rabbit mAb (1:1000, Cell Signaling), NF-κB p65 (C-20) Rabbit polyclonal (1:1000, Santa Cruz), Phospho-AKT (S473) (D9E) XP Rabbit mAb (1:1000, Cell Signaling), Akt (pan) (C67E7) Rabbit mAb (1:1000, Cell Signaling), Phospho-p44/42 MAPK (Thr202/Tyr204) (D13.14.4E) XP Rabbit mAb (1:1000, Cell Signaling), p44/42 MAPK (Erk1/2) (137F5) Rabbit mAb (1:1000, Cell Signaling), Phospho-IGF-I Receptor β (Tyr1135/1136)/Insulin Receptor β (Tyr1150/1151) (19H7) Rabbit mAb (1:1000, Cell Signaling) and Insulin Receptor β (4B8) Rabbit mAb (1:1000, Cell Signaling). The secondary antibody was horseradish peroxidase-conjugated Goat anti-Rabbit IgG (1:5000, Sigma).
Blots were developed in a FUSION-FX chemiluminescence system (Vilber) using SuperSignal™ West Pico PLUS Chemiluminescent Substrate (ThermoFisher Scientific). Densitometry analysis of immunoblots was performed using ImageJ software. Uncropped versions of blots can be found in the Source Data file.

FITC-dextran for intestinal permeability

Mice on a HFD for 12 weeks were administered 20 mg/mouse FITC-dextran (Sigma) by oral gavage after overnight fasting. After 4 h, blood was collected and serum was prepared. Serum was diluted 1:2 in PBS and fluorescence at 490/525 nm was measured; FITC-dextran concentrations in serum were determined using a FITC-dextran standard curve, with serum from a control (non-gavaged) animal as blank.

Statistical analysis

Sample sizes for patient serum analysis and mouse studies were chosen based on our published studies 35, 43. Analysis of cytokines in patient serum was carried out in a blinded fashion and subsequently segregated based upon clinical parameters. Analysis of data from mouse studies, i.e. goblet cell enumeration, was carried out in a blinded fashion. Statistical analysis of individual data sets was performed by first testing data sets for normal distribution (Shapiro–Wilk and Kolmogorov–Smirnov tests) and equality of variance (F test), followed by an unpaired two-tailed Student's t-test, or a Mann–Whitney test for nonparametric data, as appropriate. For repeated-measures analysis, RM two-way ANOVA with Bonferroni post hoc test was used instead. Correlation analyses were performed using Spearman's correlation test. All analyses and graphs were produced using GraphPad Prism 6 software. Data are shown as means ± SEM unless specified otherwise. All qPCR data were analysed using the ΔΔCt method. Statistical details for each figure can be found in the figure legends.

Data availability

All relevant data are available from the authors. Source data for all relevant figures are provided as a Source Data file.
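The FITC-dextran readout above reduces to blank subtraction, a linear standard curve, and a dilution correction. A minimal sketch, assuming a linear curve through the origin; the slope and fluorescence values are hypothetical:

```python
def fitc_dextran_ug_ml(fluorescence, blank, slope, dilution_factor=2):
    """Convert serum fluorescence to FITC-dextran concentration.

    fluorescence: 490/525 nm signal of the diluted serum
    blank: signal of serum from a non-gavaged control animal
    slope: standard-curve slope in fluorescence units per (ug/ml)
    dilution_factor: serum was diluted 1:2 in PBS, so multiply back by 2
    """
    return (fluorescence - blank) / slope * dilution_factor

print(fitc_dextran_ug_ml(1050.0, 50.0, 10.0))  # 200.0 ug/ml
```

A higher serum concentration indicates greater translocation of the gavaged tracer across the intestinal barrier.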
16S sequence data are available through GenBank, Accession Numbers MN211558–MN212548.
Scientists from the School of Medicine, Trinity College Dublin, have, for the first time, discovered a family of proteins that is associated with lower blood sugar levels among obese patients with type 2 diabetes. Their research is published today in the international journal Nature Communications. The study showed that patients with type 2 diabetes who had high levels of these proteins, the IL-36 cytokines, also had lower blood sugar levels, implying that the proteins are associated with better control of the patients' blood sugar and their disease. IL-36 cytokines are members of a larger family of proteins known as the interleukin-1 family, which have emerged as central players in the development of obesity-related disease. The researchers have linked the protective effects of these proteins to their ability to alter the make-up of the intestinal microbiome. Obesity causes increased levels of fatty acids and inflammation, leading to insulin resistance. When the body is resistant to the insulin it produces, glucose builds up in the blood, ultimately leading to type 2 diabetes. Obesity is now recognised as a global pandemic and has been definitively linked to a wide range of diseases, including metabolic disorders such as diabetes, stroke and many types of cancer. The World Health Organisation states that global levels of obesity have more than doubled since 1980. In Ireland, according to the Healthy Ireland survey, 854,165 adults over 40 in the Republic of Ireland are at increased risk of developing, or already have, type 2 diabetes. The economic burden of diabetes on the Irish health care system is becoming a major challenge for the government. Given the scale and global reach of the problem, current approaches aimed at reversing the tide of obesity-driven disease are insufficient. The Trinity research team believe there is an urgent need to achieve a greater understanding of the mechanisms underlying obesity-related diseases.
Lead scientist Dr. Patrick Walsh from the School of Medicine, Trinity, said: "This study has added to a substantial body of work which has revealed the important function of the broader interleukin-1 family as mediators of metabolic health and disease. Our findings have opened the door to a deeper investigation of how IL-36 cytokines impact on the development of such diseases in humans and whether this can be exploited for the better treatment of patients."
10.1038/s41467-019-11944-w
Computer
A new solution to cool electronic devices and prevent them from overheating
Tarek Gebrael et al, High-efficiency cooling via the monolithic integration of copper on electronic devices, Nature Electronics (2022). DOI: 10.1038/s41928-022-00748-4 Journal information: Nature Electronics
https://dx.doi.org/10.1038/s41928-022-00748-4
https://techxplore.com/news/2022-05-solution-cool-electronic-devices-overheating.html
Abstract

Electrification is critical to decarbonizing society, but managing increasing power densification in electrical systems will require the development of new thermal management technologies. One approach is to use monolithic-metal-based heat spreaders that reduce thermal resistance and temperature fluctuation in electronic devices. However, their electrical conductivity makes them challenging to implement. Here we report co-designed electronic systems that monolithically integrate copper directly on electronic devices for heat spreading and temperature stabilization. The approach first coats the devices with an electrical insulating layer of poly(2-chloro-p-xylylene) (parylene C) and then a conformal coating of copper. This allows the copper to be in close proximity to the heat-generating elements, eliminating the need for thermal interface materials and providing improved cooling performance compared with existing technologies. We test the approach with gallium nitride power transistors, and show that it can be used in systems operating at up to 600 V and provides a low junction-to-ambient specific thermal resistance of 2.3 cm2 K W–1 in quiescent air and 0.7 cm2 K W–1 in quiescent water.

Main

Thermal management infrastructure plays a key role in decreasing the energy consumption of electronic systems in, for example, data centres 1, 2 and electric vehicles 3, 4, 5. Efficient thermal management techniques using indirect liquid cooling 6, 7, 8 and immersion cooling 9, 10, 11 can enable heat removal with low energy consumption. Thermal management is also important for keeping device temperatures below their reliable operating limits, leading to increased reliability and higher system power densities 12.
Emergent wide-bandgap devices such as gallium nitride (GaN) and silicon carbide (SiC) transistors can tolerate junction temperatures up to 150 °C, but other components located nearby are often rated for lower operating temperatures (<100 °C), thus requiring efficient cooling techniques at the chip, board and system levels 13. Conventionally, heat spreading is accomplished by adding a high-thermal-conductivity component to the heat-generating device 14, thus reducing the overall junction-to-coolant thermal resistance 15, 16, 17. Laptops, for example, use heat-spreading graphite, which is pressed against the electronics to facilitate heat dissipation 18. One drawback of conventional heat spreaders is their inability to reach shadowed regions underneath devices. The absence of contact between the heat source (the junction) and the heat-spreading medium leads to less efficient cooling. Although heat-spreading methods based on diamond 19 and graphene 20 address this problem, they are not scalable. In this Article, we report a board-level heat-spreading technology in which the heat-spreading material can reach confined regions underneath devices on circuit boards and systems. The on-board devices are first coated with a high-dielectric-strength poly(2-chloro-p-xylylene) (parylene C) for electrical insulation 11, 21. Then, using successive depositions with thermal evaporation, electroless plating and electroplating, a conformal copper coating is monolithically grown on parylene C. The conformal copper reaches underneath the devices, creates contact with heat-generating regions and provides thermal dissipation routes from the top, bottom and sides of the package. The monolithic integration of copper also eliminates the need for a thermal interface material (TIM) 22, 23, typically required to fill air crevices between two mating solid surfaces even with efficient heat-spreading methods using heat pipes 24 and vapour chambers 25.
The application of TIMs requires compression to improve contact and reduce thermal impedance, which can compromise the reliability of chip-scale packages. To illustrate the capabilities of our approach, we integrate copper heat spreaders directly on GaN devices, and then characterize their electrothermal performance during steady-state and transient operation. Cooling in quiescent air or water highlights the potential for ultraefficient passive (non-pumped) cooling. Our approach offers improved performance compared with established copper heat sinks and copper-plane heat spreaders; further, by removing the need for large heat sinks, it could potentially be used to create compact and power-dense electronics.

Heat spreader fabrication

To demonstrate the integration of our coating (Fig. 1) with devices using different soldering methods, we designed printed circuit boards (PCBs) with two different GaN power transistors: a top-cooled GS66508T surface-mount device (SMD) and a top-cooled EPC2034 ball grid array (BGA) device. We start by depositing an electrically insulating layer of parylene C to prevent electrical shorting of the devices by the Cu coating (Fig. 1a). Since parylene is deposited by chemical vapour deposition (CVD), it conformally covers all the exposed circuits, ensuring the electrical insulation of the PCBs. Next, we proceed with growing Cu on top of the parylene. We first deposit a nanometric seed metal layer via physical vapour deposition (PVD) such as thermal evaporation 26. Since PVD methods cannot reach the shadowed regions underneath devices (Fig. 1b), proceeding directly to electroplating after thermal evaporation leaves a discontinuous Cu film that fails to drive the electrical current to the top of the devices, resulting in uncoated devices. To overcome this challenge, we added an electroless deposition step that bridges the PVD film and creates a continuous coating (Fig. 1c).
We attempted to use electroless deposition without the PVD step but failed to achieve high-quality coatings. After the electroless deposition step, we increase the thickness of the Cu coating to the desired level by electroplating Cu, resulting in a monolithically integrated Cu-coated heat spreader (Fig. 1d).

Fig. 1: Cu-coated heat spreader fabrication. a, Schematic showing the coating of GaN devices and PCBs with a layer of parylene C for electrical insulation. Parylene C is deposited via CVD, conformally covering the boards and reaching underneath the devices. b, Schematic of the deposition of a 20-nm-thick Cr layer followed by a 50-nm-thick Cu layer via PVD. The PVD Cu layer acts as a seed layer for the micrometre-thick electroless Cu deposition. c, Schematic of the electroless deposition of Cu to cover the shadowed regions underneath the devices and create a continuous Cu film that can drive electrical current from FR-4 to the top of the device. d, Schematic showing further growth of Cu using d.c. Cu electroplating. The schematic is not to scale. e, Calculated thermal resistance RP of parylene C and specific thermal resistance based on the measured thickness tP of parylene C. f, Maximum voltage drop applied across the parylene C layer for different devices tested. The solid green bars and shaded red bars correspond to layers that passed (leakage current <1 µA) and failed (leakage current >1 µA) the voltage test, respectively. The device number corresponds to an experiment with a specific GaN device and parylene C and Cu plating duration (Supplementary Table 2). g, Cu coating thickness as a function of electroplating time. Linear regression shows a Cu plating rate (PR) of 10.58 ± 0.32 µm h–1.

Parylene C and Cu coating characterization

The deposited parylene C coatings were 8.49 ± 0.51 and 9.83 ± 0.27 µm thick.
The approximate specific thermal resistance R″P of these layers is 1.01 and 1.17 cm2 K W–1, respectively (Fig. 1e), obtained by dividing the thickness of the parylene layer by its thermal conductivity kP = 0.084 ± 0.002 W m−1 K−1 (refs. 27, 28). The thermal resistance RP of parylene is device specific and is obtained by dividing R″P by the total area of the device across which heat flows from the device to the Cu coating (Supplementary Section 2 provides the calculation details). We measured the leakage current through the parylene C layers to characterize their current-blocking performance under voltages up to 600 V (Methods). Figure 1f shows the maximum voltage applied across the parylene layer of 14 GaN devices and identifies failure (red) or success (solid green) in blocking the leakage current. Coatings resulting in leakage currents below 1 µA were considered successful in electrically insulating the devices. Each device number corresponds to an experiment with a specific GaN device, parylene thickness and Cu plating duration (Supplementary Table 2). Here 600 V corresponds to a maximum electric field of 70.7 V µm–1, which is less than the dielectric strength of parylene C (~220 V µm–1) 27, 29. The true maximum electric field that the parylene film experiences is higher than 70.7 V µm–1, especially in locations with electric-field concentration. Three of the four failed parylene layers (devices 3, 10 and 11) were electroplated with Cu for 47 h, the longest electroplating duration used in this study, suggesting that thicker Cu coatings lead to a lower parylene voltage rating. The electrical failure mechanism for these four devices remains unclear and presents an avenue for future investigation.
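The specific thermal resistances quoted above follow directly from R″ = t/k for a planar layer; the only subtlety is the unit conversion from m² K W⁻¹ to cm² K W⁻¹. A quick check against the paper's numbers:

```python
def specific_thermal_resistance_cm2KW(thickness_um, k_W_mK):
    """R'' = t / k for a planar layer, returned in cm^2 K/W (1 m^2 = 1e4 cm^2)."""
    return (thickness_um * 1e-6 / k_W_mK) * 1e4

# The two measured parylene C thicknesses with k = 0.084 W/mK:
print(specific_thermal_resistance_cm2KW(8.49, 0.084))  # ~1.01 cm^2 K/W
print(specific_thermal_resistance_cm2KW(9.83, 0.084))  # ~1.17 cm^2 K/W
```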
The Cu thickness is dominated by the electroplating step; we therefore correlated the measured total Cu thickness with the electroplating duration and obtained a nearly linear relationship with a deposition rate of 10.58 ± 0.32 µm h−1 (Fig. 1g). The deposited Cu coatings are free of voids and defects larger than the micrometre range (Supplementary Fig. 2b). The current density of 74 A m–2 used here produced dense Cu films and ensured void-free deposition 30. Experimental four-point probe measurements gave a deposited Cu thermal conductivity of 127 ± 14 W m−1 K−1 (Methods and Supplementary Section 2).

Steady-state cooling performance

We tested the cooling performance of the fabricated coated PCBs in both quiescent air and quiescent water at ambient temperature (Tamb ≈ 22 ± 1 °C). Cooling in these two media is passive: heat is dissipated from the PCB into natural-convection streams of the fluid. We compared the thermal performance of the Cu coating (Fig. 2b,e) with a commercial 70-µm-thick solder-coated Cu plane fabricated with the PCB (Fig. 2a) as well as commercial 1.4 × 1.4 × 1.4 cm3 heat sinks (Fig. 2c,f).

Fig. 2: Photographs of the tested configurations. a–c, Photographs of the 4.8 × 2.5 cm2 70-µm-thick solder-coated Cu-plane heat spreader (a), Cu-coated heat spreader (b) and pair of 1.4 × 1.4 × 1.4 cm3 Cu heat sinks (c). The insets show the schematic of the cross-sectional material stackup of the solder-coated Cu plane (top left) and Cu heat sinks (top right). d, For the experiments, we designed and fabricated custom PCBs having two GaN power transistors: a top-cooled SMD from GaN Systems (GS66508T) and a top-cooled BGA device from Efficient Power Conversion (EPC2034). e, Top-view photograph of the 5.4 × 2.5 cm2 Cu-coated heat spreader. f, To ensure good thermal contact between the GaN devices and Cu heat sinks, we added layers of gap filler followed by a thermal paste. All scale bars correspond to 1 cm.
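The plating rate above (10.58 µm h⁻¹, Fig. 1g) is the slope of a linear fit of coating thickness against electroplating time. A minimal through-origin least-squares sketch; the three data points are invented for illustration (the real durations and thicknesses are in the paper's Supplementary Table 1), chosen so the fitted rate lands near the reported value:

```python
def fit_rate(times_h, thickness_um):
    """Through-origin least-squares slope: rate = sum(t*y) / sum(t*t).

    Forcing the fit through the origin encodes the physical constraint
    that zero plating time deposits zero thickness.
    """
    return sum(t * y for t, y in zip(times_h, thickness_um)) / \
           sum(t * t for t in times_h)

# Hypothetical (time in h, thickness in um) measurements:
rate = fit_rate([5.0, 16.0, 47.0], [53.0, 170.0, 497.0])
print(rate)  # ~10.58 um/h
```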
We measured the temperature of the outer device surface (Ts) that contacts the cooling fluid: the Cu surface for Cu coatings, the parylene surface for bare devices in water, and the top surface of the device in the other cases. We measured the temperature difference (ΔT = Ts – Tamb) versus the Joule heating power in the device and the heat flux based on the device footprint area (Methods and Supplementary Fig. 3). The slopes of the linear regression curves of these data correspond to the surface-to-ambient thermal resistance. The measured thermal resistances are summarized for EPC2034 (Fig. 3a,b), where we add the junction-to-case thermal resistance RJC and the calculated thermal resistance of the 8.49-µm-thick parylene layer RP to determine the total junction-to-ambient thermal resistance. The thermal resistances of both GaN devices are summarized in Supplementary Fig. 4. Water has a lower thermal resistance due to its higher heat transfer coefficient (h) compared with air. For the two devices in both air and water, the spreading capability of Cu increases with thicker coatings. The thermal resistance decreases with larger coating thickness ts until it reaches a plateau, beyond which spreading is not considerably affected by the thickness (Fig. 3a,b). The asymptotic behaviour is the same for both GS66508T and EPC2034 devices whether they are operated in air (~15 K W–1) or in water (~2 K W–1). As the coating becomes thicker, it spreads the heat so efficiently that the device footprint area no longer contributes considerably to the spreading process compared with spreading within the coating. Moreover, the Cu-plane heat spreader, which has almost the same area as the Cu coatings, does not provide a substantial reduction in thermal resistance over that of the bare device (device with no spreader), demonstrating a higher thermal resistance compared with thinner Cu coatings.
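Extracting the surface-to-ambient resistance as the slope of ΔT versus power, as described above, is a one-line ordinary least-squares fit. A sketch with three invented (power, ΔT) points lying near the ~15 K W⁻¹ asymptotic air value; a real analysis would fit the full measurement series:

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x:
    (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Hypothetical data: device power in W, measured temperature rise in K.
R_surface_to_ambient = slope([1.0, 2.0, 3.0], [15.2, 30.1, 44.8])
print(R_surface_to_ambient)  # K/W
```

Adding the device's RJC and the parylene layer's RP to this slope gives the total junction-to-ambient resistance, as in Fig. 3a,b.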
The poor heat transfer performance of the Cu plane is due to the gap that separates the Cu plane from the devices to allow space for the circuitry (Fig. 2a). The commercial Cu heat sink provided a junction-to-ambient thermal resistance that was comparable to that of the thickest Cu coating in air. However, the Cu coating offers substantial volume saving compared with a commercial Cu heat sink.

Fig. 3: Thermal performance of EPC2034 monolithically integrated with copper. a, b, Thermal resistance (R, left axis) and specific thermal resistance (R″, right axis) as a function of Cu coating thickness (ts) for the air-cooled EPC2034 GaN device (a) and water-immersion-cooled EPC2034 GaN device (b). The thermal-resistance error bars obtained from linear regression are smaller than the symbol size and are not shown for clarity. The thickness error bars correspond to the standard deviation from the Cu thickness measurements (Supplementary Table 1). TC in the inset of a corresponds to the place of the thermocouple in the thermal resistor network. TJ, Tamb and RJC in the inset of a correspond to the junction temperature of the device, the ambient temperature of the fluid and the junction-to-case thermal resistance of the device, respectively. c, d, Time constant (τ) as a function of Cu coating thickness (ts) for the air-cooled EPC2034 GaN device (c) and water-immersion-cooled EPC2034 GaN device (d). The time-constant error bars were obtained from the curve-fitting analysis. The thickness error bars correspond to the standard deviation from the Cu thickness measurements (Supplementary Table 1). e, f, Temperature-swing (ΔT = Ts – Tamb) response as a function of time for the air-cooled EPC2034 GaN device with the Cu heat sink, 55-µm-thick Cu coating, 172-µm-thick Cu coating and 476-µm-thick Cu coating at a pulsed heat load of 1.00 Hz (e) and 0.33 Hz (f). The energy per pulse was set to 6.09 ± 1.46 J with a duty cycle of 50%.
The error bars in the measured ΔT are ±1.1 °C and are not shown for clarity.

Transient cooling performance

We measured the characteristic time constants of the Cu-coated PCBs, the Cu-plane PCB and the Cu-heat-sink PCB to quantify the device-temperature stability and thermal mass. First, we turn on the device and wait until its temperature reaches a steady value. Then, we turn the device off and record the temperature as it decays with time (Methods). Since the maximum temperature (Tmax) is different for every experiment, we non-dimensionalize the temperature and fit the results to a double-phase exponential decay model Θ(t) = (T(t) – Tamb)/(Tmax – Tamb) = A1 exp(–t/τd) + A2 exp(–t/τs) + Θ0 (Supplementary Tables 3–6 list the fitting parameters). A double-phase exponential model was found to describe our results better than a single-phase model, especially for thinner Cu coatings, because of the different timescales involved. The first exponential term, with time constant τd < τs, describes the device-temperature decay. The temperature gradient at the device level is less pronounced for thicker Cu coatings (for example, 439 µm thick) owing to their better heat-spreading capability; hence, a single exponential model is suitable for describing the temperature decay of thicker Cu coatings. The second exponential term mainly corresponds to the heat spreader, and we therefore denote its time constant by τs. Higher-order exponentials that represent the PCB-temperature decay are embedded in the constant Θ0. Figure 3c,d shows the heat-spreader time constant τs as a function of Cu-coating thickness ts for EPC2034. Supplementary Fig. 6 summarizes the time-constant values for both GaN devices.
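The decay model above is easy to write as a plain function; fitting it to a measured decay is then a standard nonlinear least-squares problem (e.g. with scipy.optimize.curve_fit). The parameter values below are invented for illustration; note that Θ(0) = A1 + A2 + Θ0 should be ≈1 by construction of the normalisation:

```python
import math

def theta(t, A1, tau_d, A2, tau_s, theta0):
    """Double-phase exponential decay:
    Theta(t) = A1*exp(-t/tau_d) + A2*exp(-t/tau_s) + theta0,
    with tau_d the fast device time constant and tau_s the slower
    heat-spreader time constant (tau_d < tau_s)."""
    return A1 * math.exp(-t / tau_d) + A2 * math.exp(-t / tau_s) + theta0

# Hypothetical fit parameters: normalised temperature starts at 1 and
# decays toward theta0, the slow PCB contribution.
params = (0.30, 2.0, 0.65, 40.0, 0.05)
print(theta(0.0, *params))  # 1.0
```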
We observe that the time constant keeps increasing owing to the continuous increase in the coating thermal mass msCs, where ms is the heat-spreader mass and Cs is the heat-spreader specific heat capacity. The time constant for the Cu plane is comparable to that of the bare device, whereas the Cu-heat-sink PCB in air had the highest time constants (owing to its higher thermal mass). Moreover, although the Cu-coating time constant could exceed 80 s in air for both EPC2034 and GS66508T GaN devices with thick coatings, it remained below 8 s in water, where the relative increase in the coating time constant over that of the bare device is smaller than in air. The time constant τs ≈ msCs/(hAs), i.e. the thermal mass divided by the product of the heat transfer coefficient (h) and the spreading area (As). The high heat transfer coefficient in water compared with that in air is why the time constants are relatively low in water. Therefore, although the Cu coatings are effective in stabilizing the temperature in air, this is not true in water. However, even though a fast thermal response is undesirable for suppressing sudden temperature surges, the high heat transfer coefficient keeps water (or other dielectric fluids) a viable solution for temperature stabilization because it keeps temperature-fluctuation amplitudes low. We also tested the temperature response to a pulsed heat load, obtained by exposing the GaN devices to an ON/OFF square-voltage signal. Figure 3e,f shows the device temperature-swing (ΔT = Ts – Tamb) response as a function of time for the air-cooled EPC2034 GaN device at pulsed heat loads of 1.00 Hz and 0.33 Hz, respectively. In the experiments, we set the energy per cycle to 6.09 ± 1.46 J with a duty cycle of 50%. Thicker copper coatings result in smaller temperature fluctuations, irrespective of the device type, coolant fluid or heat-load frequency.
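The lumped scaling τs ≈ msCs/(hAs) is easy to evaluate. In the sketch below the copper specific heat is standard, but the spreader mass, area and the two heat transfer coefficients are assumed values for illustration, not the paper's measurements:

```python
C_CU = 385.0  # J/(kg*K), specific heat of copper

def time_constant_s(mass_kg, h_W_m2K, area_m2, c=C_CU):
    """Lumped spreader time constant: tau_s ~ m*c / (h*A),
    i.e. thermal mass over convective conductance."""
    return mass_kg * c / (h_W_m2K * area_m2)

m, A = 5e-3, 1.35e-3  # assumed: 5 g of Cu spread over ~13.5 cm^2
tau_air = time_constant_s(m, 10.0, A)       # h ~ 10 W/m^2K, still air
tau_water = time_constant_s(m, 1000.0, A)   # h ~ 1000 W/m^2K, still water
print(tau_air, tau_water)
```

With these assumed coefficients the water time constant is 100 times shorter than the air one, consistent with the observation above that coatings which take tens of seconds to relax in air stay below a few seconds in water.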
The maximum temperatures observed at larger pulsing frequencies are higher because, at a fixed energy per pulse, the average power dissipation is larger. In air, the temperature increases over successive cycles, because the temperature rise during each ON phase exceeds the fall during the OFF phase; the temperature keeps increasing until it reaches a steady oscillation. This steady behaviour is confirmed in our experiments in water, where the temperature maximum does not grow within the 20 s experiment time (Supplementary Section 4 provides additional details of the pulsed heat experiments).

Spreading depth of monolithically integrated Cu coatings

The thermal performance of Cu coatings depends on their surface area relative to their heat-spreading capability. To understand this relation, we describe the spreading capability by the distance from the device at which the temperature is sufficiently close to ambient that negligible spreading, and hence negligible heat dissipation, occurs beyond it. Numerically, we define the 10% spreading depth δs by (T(δs) – Tamb)/(Tmax – Tamb) = 0.1. The spreading depth can be obtained numerically for radial heat transfer in a thin Cu coating (Supplementary Section 5). To understand the effect of the different coating parameters on δs, we show that δs ≈ δs*, where δs* = (ks·ts/h)^0.5 represents the characteristic spreading depth and ks is the planar thermal conductivity of the coating. Hence, δs increases with higher coating thickness and thermal conductivity and decreases with a higher cooling heat transfer coefficient. Supplementary Section 6 provides a full derivation of δs*.
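Evaluating δs* = (ks·ts/h)^0.5 with the measured coating conductivity makes the air/water contrast concrete. The heat transfer coefficients below are typical natural-convection values assumed for illustration, not the paper's measured ones:

```python
import math

def spreading_depth_m(k_s, t_s, h):
    """Characteristic spreading depth: delta_s* = sqrt(k_s * t_s / h).

    k_s: planar thermal conductivity of the coating (W/mK)
    t_s: coating thickness (m)
    h:   heat transfer coefficient of the coolant (W/m^2K)
    """
    return math.sqrt(k_s * t_s / h)

K_CU_FILM = 127.0  # W/mK, measured conductivity of the electroplated Cu
d_air = spreading_depth_m(K_CU_FILM, 439e-6, 10.0)      # assumed h for air
d_water = spreading_depth_m(K_CU_FILM, 439e-6, 1000.0)  # assumed h for water
print(d_air, d_water)  # metres; the water depth is 10x smaller
```

Because δs* scales as h^(−0.5), the 100-fold higher coefficient assumed for water shrinks the spreading depth tenfold, which is why spreading saturates over much shorter distances in water.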
Figure 4a shows that δ s is reached within the lateral bounds of the 51-µm-thick Cu coating, where δ s is smaller than lateral distance L s from the device to the boundary of the heat spreader. Here Δ T = T s – T amb drops by 72% going from the device to the right-side edge of the heat spreader. In this heat-spreading mode, the spreader surface-to-ambient thermal resistance R s decreases with thicker Cu coatings as \(R_{{{\mathrm{s}}}}\approx r_{{{\mathrm{d}}}}^{ - 1}\left( {hk_{{{\mathrm{s}}}}t_{{{\mathrm{s}}}}} \right)^{ - 0.5}\) , where r d is the effective device radius (Supplementary Section 5 ). When the heat-spreader lateral dimensions become small compared with the spreading depth, the heat spreader achieves an almost uniform temperature that is close to the maximum spreader temperature. Fig. 4a illustrates this behaviour with the 439-µm-thick Cu coating, where Δ T = T s – T amb drops by only 21% going from the device to the right-side edge of the heat spreader. In this case, thermal resistance R s ≈ 1/( hA s ); hence, it is independent of the coating thickness or coating thermal conductivity, which agrees with the observation that the thermal-resistance curves reach a plateau and cease to decrease with thicker Cu coatings in air (Fig. 3a ). The convergence that we observe with water (Fig. 3b ) occurs through a different mechanism: the higher heat transfer coefficient associated with water makes \(R_{{{\mathrm{s}}}}\approx r_{{{\mathrm{d}}}}^{ - 1}\left( {hk_{{{\mathrm{s}}}}t_{{{\mathrm{s}}}}} \right)^{ - 0.5}\) converge faster. Therefore, the spreading depth and thermal resistance converge even though the spreading depth is smaller than the lateral distance L s (Fig. 4b,c ). Conversely, even if the spreader dimensions are small compared with δ s , increasing the coating thickness further adds thermal mass that increases the time constant; therefore, no plateau is seen in the curves of Fig. 3c,d . Fig.
4: Heat-spreading analysis. a , Infrared imaging of the planar temperature distribution for the 51-µm- and 439-µm-thick Cu-coated heat spreaders and their decay with time. At t = 0 s, the device with the 51-µm-thick Cu coating operates at 1.22 W, whereas that with the 439-µm-thick coating operates at 2 W, both at the steady state. We turned off the power and captured the infrared temperature distribution at 30, 60 and 90 s. b , Calculated 10% spreading depth δ s as a function of Cu thickness ranging from 20 to 1,000 µm. The calculations are done for devices operating at 1, 2 and 5 W, in air or water. c , Spreader thermal resistance R s as a function of 10% spreading depth δ s . Three operating regimes were observed: the uniform maximum temperature regime when the spreader dimension L s ≪ δ s , the semi-infinite spreading regime when L s > δ s and the finite spreading regime between the two. The calculations show that the thermal resistance reaches a plateau in the semi-infinite regime, where \(R_{{{\mathrm{s}}}}\approx r_{{{\mathrm{d}}}}^{ - 1}\left( {hk_{{{\mathrm{s}}}}t_{{{\mathrm{s}}}}} \right)^{ - 0.5}\) declines fast due to the high heat transfer coefficient h . Here R s reaches a resistance limit in the uniform maximum temperature regime as well, where the spreader is at a constant temperature and R s ≈ 1/( hA s ).
Design of Cu-coated heat spreaders
The spreading depth is an important figure of merit for designing monolithically integrated Cu coatings as an efficient cooling mechanism. The resulting Cu-coated heat spreader operates under three different regimes distinguished by δ s . The first regime is the uniform maximum temperature regime where the dimensions of the heat spreader are much smaller than the spreading depth ( L s ≪ δ s ). Here thermal resistance R s is independent of coating thickness t s and thermal conductivity k s .
The thermal resistance depends only on the heat transfer coefficient of the fluid medium h and heat-spreader coverage area A s . This case can be observed with the 439-µm-thick spreader at t = 0 s (Fig. 4a ). The second regime is the semi-infinite spreader regime where the dimensions of the heat spreader are greater than the spreading depth ( L s > δ s ). Here increasing A s does not result in a remarkable reduction in the thermal resistance R s because the spreading depth has been reached inside the spreader. In this case, increasing δ s , and hence reducing R s , is achieved by increasing either k s or t s . This case can be observed in the horizontal direction of the 51-µm-thick spreader at t = 0 s (Fig. 4a ). The third regime is the finite spreader regime where the dimensions of the heat spreader are comparable to the spreading depth ( L s ≈ δ s ). Here R s depends on all the three parameters, namely, t s , k s and A s . This case can be observed in the vertical direction of the 51-µm-thick spreader at t = 0 s (Fig. 4a ). Knowledge of the spreading depth is also essential to predict the effect of hotspots on neighbouring active or passive components (inductors and capacitors, for example). Some devices cannot tolerate the high temperatures at which wide-bandgap semiconductor power transistors are rated to operate. Therefore, if heat is spread from hotspots to those low-temperature devices, it can compromise their operation. Furthermore, the temperatures of multiple active devices that are thermally interacting are higher than individual standalone devices operating under the same coating and cooling-medium conditions. Thermal coupling is considerable when the devices are located within a distance from each other that is smaller than δ s . Designing the heat spreader and cooling medium such that the distance separating the devices is greater than δ s can alleviate the adverse effects of thermal interactions. 
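The three regimes can be summarized in an illustrative classifier. The thresholds (0.5× and 1× the characteristic depth) and the omission of order-unity prefactors in the limiting resistance expressions are assumptions made here for illustration; the paper gives only the two limiting expressions:

```python
import math

def classify_spreader(t_s, k_s, h, L_s, A_s, r_d):
    """Classify the operating regime of a coated heat spreader and return
    the corresponding limiting thermal-resistance estimate (K/W).
    Regime thresholds are illustrative; prefactors of order unity are
    omitted from the limiting expressions."""
    delta = math.sqrt(k_s * t_s / h)      # characteristic spreading depth
    if L_s < 0.5 * delta:
        # uniform maximum temperature regime: R_s ~ 1/(h*A_s),
        # independent of t_s and k_s
        return "uniform", 1.0 / (h * A_s)
    if L_s > delta:
        # semi-infinite regime: R_s ~ 1/(r_d*sqrt(h*k_s*t_s))
        return "semi-infinite", 1.0 / (r_d * math.sqrt(h * k_s * t_s))
    return "finite", None  # depends on t_s, k_s and A_s together

A_S = 5.4e-2 * 2.5e-2   # m^2, coating coverage area from the Methods
R_D = 2.0e-3            # m, assumed effective device radius
# 439-um coating in air, device-to-edge distance ~1.25 cm (vertical)
print(classify_spreader(439e-6, 127.0, 15.0, 1.25e-2, A_S, R_D))
# 51-um coating in air, device-to-edge distance ~2.7 cm (horizontal)
print(classify_spreader(51e-6, 127.0, 15.0, 2.7e-2, A_S, R_D))
```

With these assumed distances, the 439-µm coating in air classifies as uniform and the 51-µm coating as semi-infinite along its long axis, mirroring the behaviour described for Fig. 4a.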
The spreading depth also determines the dependence of R s on the device location within the spreader. Our computational fluid dynamics (CFD) simulations show that this dependence is the strongest when the dimensions of the spreader are close to δ s (Supplementary Section 8 ). In this case, the thermal resistance increases as the device is moved from the centre of the spreader to its edges. The more the dimensions of the spreader deviate from δ s (larger or smaller), the weaker the dependence of R s on the device location. This indicates that quantifying this dependence when L s ≈ δ s is an important step when designing coated heat spreaders. Another aspect studied through finite element analysis (FEA) simulations is the effect that our coatings have on the thermomechanical reliability of electronics (Fig. 5a–c ). The FEA results reveal that our coatings can potentially extend the life of solder joints by coupling the substrate/chip deformations and reducing plastic strain energy density (Fig. 5d ), and hence, crack formation, propagation and failure are delayed 31 . Although the high Young’s modulus of Cu increases the stress on the chip (Fig. 5e ), the added stress is far below the Si fracture strength of ~4 GPa 32 . Also, the relatively low stiffness of parylene helps reduce the chip stresses compared with those obtained with stiffer insulators such as silicon dioxide (SiO 2 ) (Fig. 5e ). Supplementary Section 9 provides the complete FEA thermomechanical analysis. An effect not captured by our FEA simulations is the internal stress of the Cu coating, which increases with coating thickness 33 . A high enough internal stress may result in Cu piercing the flexible parylene, leading to potential electrical short circuits.
Fig. 5: Coating effect on thermomechanical reliability. a , Schematic of the model used in the FEA simulations.
The model consists of a silicon (Si) chip (grey colour) connected to an FR-4 PCB (green colour) through 25 solder balls (brown colour). All the exposed surfaces are conformally coated with a 10-µm-thick electrical insulation film (parylene C or SiO 2 ) followed by a 150-µm-thick Cu coating. We applied a three-cycle thermal load alternating between −55 and 85 °C, where each cycle consists of a 10 min dwell at −55 °C, a 7 min ramp up, a 10 min dwell at 85 °C and a 7 min ramp down. b , Contour plots of the equivalent plastic strain ε p in the solder ball farthest away from the Si centre, obtained from the FEA simulation of the model with parylene C and Cu coatings at −55 °C during the first cycle. c , Contour plots of the von Mises stress σ v in the Si and solder balls obtained from the simulation of the model with parylene C and Cu coatings at −55 °C during the first cycle. d , Bar plot of the plastic strain energy density accumulated during the second cycle in the solder-joint region of the solder ball farthest away from the Si centre for the no coating, SiO 2 and Cu, and parylene C and Cu cases. The plastic strain energy density was averaged across the volume of the solder-joint portion 10 µm below the Si. e , Bar plot of the maximum von Mises stress in Si that occurred during the entire thermal loading for the no coating, SiO 2 and Cu, and parylene C and Cu cases. Methods and Supplementary Section 9 provide additional details of the FEA simulation.
The performance and reliability of the coatings in boiling immersion cooling systems need investigation. The addition of nanostructured CuO can offer attractive boiling performance when used in immersion cooling by increasing the critical heat flux of water 34 , 35 . In addition to CuO, the developed Cu coating provides a platform for the creation of alternative Cu-based methods for heat transfer enhancements directly on electronic devices.
These include the cathodic deposition of Cu 36 and inverse opal deposition 37 . Replacement materials for Cu and parylene (like diamond and ceramics, respectively) can be investigated in the future. The coating materials should be compatible with the electronic application. For example, the effects of integrating parylene/Cu coatings into devices involving radio-frequency signals and electromagnetic-interference shielding need more investigation. Also, implementation of these coatings requires consideration of the serviceability of electronics. Replacing a malfunctioning device should not require re-coating of the entire PCB. Electrothermal co-design is needed to fabricate electrical modules that can mechanically separate from each other and can be easily detached from the system, fixed and re-coated without interfering with other PCB parts.
Conclusions
The low junction-to-ambient thermal resistances measured in quiescent air and quiescent water in our work indicate that coated heat spreaders can achieve efficient and inexpensive passive cooling in electrical systems, saving cooling energy consumption and enhancing product performance. Dielectric fluids, such as mineral oils and hydrofluoroethers, have been extensively used as immersion coolants due to their dielectric properties. Water offers even higher heat transfer coefficients and heat fluxes 11 , but its cooling performance is compromised by its electrically conductive property. Our copper-coating approach enables immersion cooling in water due to the insulating parylene layer. Our approach can also be designed to operate in the single-phase cooling regime, cooling electronics and eliminating boiling mechanical stresses that are common when using dielectric coolants. Since the copper coatings are planar, they would not interfere with the stacking of server modules. Therefore, integration can lead to higher system power densities compared with standard cooling methods (Supplementary Section 10 ).
For example, our experiments show that although a heat sink and the 223-µm-thick Cu coating have similar thermal resistances, the power per unit volume of the copper coating is 740% higher than that of the heat sink. This increase in power density is due to an 89% decrease in the volume occupied by the coatings relative to that of the heat sink. We benchmarked our coatings with existing heat sinks and cold plates. The 562-µm-thick copper coatings have volumetric thermal resistances ( r ″) of 10.1 and 2.7 cm 3 K W –1 in air and water, respectively, require zero cooling power and should outperform existing air-forced convection and indirect single-phase liquid-forced convection cooling. The 562-µm-thick copper coating in water achieved the lowest r ″ value among all the cooling methods compared in the benchmarking analysis (Supplementary Section 11 ). Our monolithic integration method can also achieve better specific thermal resistance (cm 2 K W –1 ) than vapour chambers attached with TIMs, as well as removing the need to apply pressure to the electronic device (Supplementary Section 11 ). Recent developments in additive manufacturing have led to the removal of TIMs by directly depositing metal on semiconductor devices 38 , 39 . These additive manufacturing techniques can produce thermally optimized solutions, but they cannot coat several devices at once because of electrical short-circuit risks. This challenge prevents additive manufacturing from achieving the volume compactness that coated heat spreaders are capable of, unless an electrically insulating layer is applied.
Methods
Cu coating deposition recipe
The following steps were used to achieve monolithic integration of Cu coating on the devices and PCB, and we used Kapton tape masks where coatings are not desired.
Adhesion promoter and parylene C deposition
Here A-174 silane (also known as γ-MPS (3-(trimethoxysilyl) propylmethacrylate)) was used as an adhesion promoter for parylene C on the PCB.
It was deposited through the following steps recommended by the Marvell Nanofabrication Laboratory. We started by preparing a promotion solution containing isopropyl alcohol, deionized (DI) water and A-174 in a 100:100:1 volume ratio. We stirred the solution for 30 s and allowed it to stand for 2 h. Then, we immersed the PCBs in the promotion solution for 30 min, removed them from the solution and allowed them to dry for 30 min. Finally, the PCBs were rinsed in isopropyl alcohol for 30 s and dried with nitrogen. The parts were coated with parylene C within 30 h of depositing the adhesion promoter. The parylene C layer was then deposited on the PCBs through a CVD process using the SCS Labcoter 2 parylene deposition system (Specialty Coating Systems).
Chromium/Copper thermal evaporation
A 20-nm-thick layer of chromium (Cr) was deposited onto the PCBs followed by a 50-nm-thick layer of Cu. Both layers were deposited through a PVD thermal evaporation process. Chromium acts as an adhesion promoter for Cu. In this step, it is essential to add Kapton tape on all the parts that are not covered with parylene C because Cu can penetrate through the solder mask layer of the PCB and short-circuit the underlying Cu traces. The Denton DV-502A vacuum evaporator (Denton Vacuum) was used for this step. Coatings were performed at less than 4 × 10 −6 torr, with Cr application at ~90 A current and 1.9–2.8 Å s –1 deposition rates and Cu application at ~80 A and 15 Å s –1 deposition rate. We attempted to use electroless deposition without this PVD step but failed to achieve high-quality coatings. Although a plasma surface treatment of parylene C can facilitate electroless copper deposition, the PVD step we incorporate in this recipe is beneficial for pattern coating. Masking the PVD copper particles would result in a copper pattern on the surface and prevent copper deposition in the masked regions during the subsequent steps of the recipe.
Cu electroless deposition
An electroless Cu kit (Caswell Inc.) was used for this process. We followed the steps provided by Caswell Inc. with a shorter waiting time for sensitization and activation to minimize the formation of Cu oxides. The PCBs were not allowed to dry between the steps. First, the PCBs were immersed into the sensitizer solution (acidic stannous chloride solution) at room temperature for 50 s with no agitation and rinsed in DI water. Next, the PCBs were immersed into the activator solution (acidic palladium chloride solution) at room temperature for 50 s and rinsed in DI water. Since the sensitizer and activator solutions were separately and successively applied, the catalysis is based on ionic solutions, contrary to mixed PdCl 2 /SnCl 2 solutions where colloidal particles form and contribute to the catalysis 40 . Finally, the PCBs were immersed in the electroless Cu solution for 3 min, rinsed in DI water and dried with nitrogen. The electroless Cu solution was obtained by mixing electroless Cu A and electroless Cu B in equal amounts. Cu A solution contains water (>75% concentration), triethanolamine (5–10%), ethylenediaminetetraacetic acid tetrasodium salt (1–5%), diethanolamine (1–5%), copper sulfate pentahydrate (1–5%), and sodium hydroxide liquid (concentration not specified). Cu B solution contains water (71% concentration), methanol (25%), and formaldehyde (4%). Sensitizer solution contains water (90–98% concentration), hydrochloric acid (1–5%), and stannous chloride (1–5%). Activator solution contains water (90–98% concentration), hydrochloric acid (1–5%), and palladium chloride (1–5%). The compositions and concentrations are obtained from the manufacturer's safety data sheets.
Cu electroplating
Standard electroplating was performed as the last step where Cu was transferred from a Cu electrode to the PCBs. The electrolyte solution contains 0.2 M Cu( ii ) sulfate (CuSO 4 ) and 1 M sulfuric acid (H 2 SO 4 ) (Sigma-Aldrich).
A current density of 74 A m –2 (based on the spreader area) was used. An HP 6033A d.c. power supply was used to provide the d.c. current. After electroplating, the boards were rinsed in DI water and dried with nitrogen gas. Following this recipe, we fabricated ten Cu coatings on 8.490 ± 0.513-µm-thick parylene C layers. The only difference between these coatings is the electroplating duration: two coatings were fabricated each with 2, 11, 17, 22 and 47 h of electroplating. We chose a large gap (between 22 and 47 h) because the plating times in this region result in coatings with the same thermal resistance and plating thickness behaviour: the thermal resistance does not remarkably decrease with higher thickness and the plating thickness is linear with plating time. We masked the PCBs such that we always obtained a 5.4 × 2.5 cm 2 coating area. We fabricated ten PCBs with Cu thicknesses ranging from 14.24 ± 10.15 to 562.10 ± 71.13 µm (Supplementary Table 1 ).
Parylene C and Cu coating characterization
A KEYENCE VK-X1000 three-dimensional laser scanning confocal microscope was used to measure the thickness of both parylene C and Cu layers. For parylene C, we used the microscope in the film mode, where the intensity of the laser light reflected from samples with a transparent film is measured to determine the film thickness (Supplementary Fig. 2a ). The distance between the two light peaks reflected from the surface of parylene C and the surface of buried Kapton tape determines the thickness of parylene C. We chose to measure the thickness of the parylene C film on top of the Kapton tape because the Kapton tape provides a smooth underlying surface compared with FR-4. The refractive index of parylene C of 1.64 41 was used to correct the measurements. A ×50 objective lens was used for these measurements. The microscope was also used in the manual mode with a ×5 objective lens to measure the thickness of the Cu coatings.
The upper and lower limits of the lens elevation were set so that the scanning encompasses the levels of both parylene C and Cu top surface. The microscope scans the heat spreaders along the centreline of the spreader (parallel to the 2.5 cm dimension) and the thickness is determined by subtracting the elevation of parylene C from that of the Cu top surface. We calculated the average thickness and its standard deviation for each heat spreader (Supplementary Table 1 ). The data in Fig. 1g refer to the average thickness and standard deviation of heat spreaders with the respective electroplating duration. To investigate the electrical insulation performance of parylene C, we conducted a leakage current test using boards coated with parylene C and Cu (7 boards/14 devices). We shorted the drain, source and gate of the GaN devices and we connected them in series with an N5752A d.c. system power supply (Keysight Technologies), a 100 kΩ resistor, a 34461A digital multimeter (Agilent) and then back to the Cu layer (Supplementary Fig. 2c ). We increased the voltage from 0 to 600 V or until the leakage current increased from the microampere to the milliampere range, indicating that current blocking by parylene C has failed. We chose this maximum voltage because it compares well with the drain-to-source voltages V DS that typical GaN devices are rated for (650 V for GS66508T and 200 V for EPC2034). The devices labelled with a solid green bar in Fig. 1f were able to handle a voltage of 600 V for 5 min with leakage current below 1 µA; the shaded red bars show the devices that failed to meet this criterion with their corresponding maximum voltages. 
The thermal conductivity of the copper layer is derived from its electrical conductivity, which is determined by the four-point probe technique (Jandel four-point probe, model RM3-AR), using the Wiedemann–Franz law, namely, k / σ = LT , where k is the material’s electronic contribution to the thermal conductivity, σ is the material’s bulk electrical conductivity, T ≈ 295 K is the temperature and L is the Lorenz number generally given as L = 2.44 × 10 −8 W Ω K –2 . Supplementary Section 2 provides details regarding the measurement of σ . The measured thermal conductivity, k Cu = 127 ± 14 W m –1 K –1 , is lower than that of bulk multipurpose copper (~400 W m –1 K –1 ) 42 , but comparable to the value of sputtered copper films (~150–200 W m –1 K –1 ) 43 . The reduction in thermal conductivity is due to the introduction of impurities and defects during the electroplating process 44 .
Steady-state and transient experiments
We designed and fabricated 10 × 7 cm 2 , 1.6-mm-thick FR-4 PCBs with 70-µm-thick Cu layers for this study. Each board contains two top-cooled GaN transistors, an SMD GS66508T and a BGA EPC2034 (Fig. 2d ). Each device is connected to a five-pin terminal that provides access to the gate, source, drain, kelvin source and kelvin drain connections of the device. The two kelvin connections were added to accurately measure the voltage drop as close as possible to the device and eliminate any added voltage drop due to the Cu traces and electric cables during operation. All the Cu coatings were grown on those PCBs. The Cu-plane heat spreader board (Fig. 2a ) follows the same PCB design with an added Cu plane that is isolated from the circuit and acts as a heat spreader. For the heat-sink experiments, we start by depositing a layer of TGF 4000 gap filler (BERGQUIST) that surrounds the devices (Fig. 2f ).
Then, the thermocouple is added on top of the device and a layer of NT-H2 pro-grade thermal paste (Noctua) covers the device, thermocouple and gap filler. A 1.4 × 1.4 × 14 cm 3 14-plate pin Cu heat sink (Alphacool) is attached on top of the thermal paste and a thermally insulated vice is added to apply pressure and ensure good thermal contact between the heat sink and the device. The following procedure was implemented to measure the surface temperature and device power in the thermal characterization experiments. A 40 American wire gauge (AWG) (80 µm diameter) K-type perfluoroalkoxy (PFA)-insulated thermocouple (Omega) made from special limits of error wire (±1.1 °C) was attached to the surface to measure its temperature. The thermocouples were calibrated against temperatures of a 0.95 emissivity black tape measured with a A655sc camera (FLIR Systems). We then covered the thermocouple head with a tiny bead of NT-H2 pro-grade thermal paste (Noctua), and we used a 1.5 × 0.25 cm 2 aluminium tape on top of the paste to attach the thermocouple to the surface (Fig. 1d ). We used a PXIe-1073 chassis and TB-4300 module (National Instruments) to acquire the thermocouple measurements in LabVIEW 2019 SP1 software, where we processed and recorded the data. The 10 kHz low-pass filter available in the TB-4300 module was used to eliminate the sensor’s high-frequency noise. We connected the GaN transistors to the HP 6033A d.c. power supply in the diode mode to increase the power dissipation while keeping the terminal currents low, where the gate is shorted to the drain and the drain-to-source voltage drop V DS is positive. The current flowing in the circuit is measured with the 6033A power supply (±0.2500% accuracy) and the voltage drop V DS is measured across the kelvin connections using the 34461A digital multimeter (Agilent) (±0.0055% accuracy). The Joule heating power dissipated by the device is equal to the product of the measured voltage drop and current. 
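Assuming the current and voltage errors are independent, the quoted instrument accuracies propagate into the power P = V DS × I in quadrature; this combination rule is a standard assumption and is not stated in the paper:

```python
import math

# Relative uncertainty of a product P = V * I for independent errors:
# (dP/P)^2 = (dV/V)^2 + (dI/I)^2.
rel_I = 0.2500e-2   # 6033A supply current accuracy (0.2500%)
rel_V = 0.0055e-2   # 34461A multimeter voltage accuracy (0.0055%)
rel_P = math.sqrt(rel_I**2 + rel_V**2)
print(f"relative power uncertainty ~ {rel_P*100:.4f}%")
```

The result is dominated almost entirely by the current measurement, so the power uncertainty is essentially the 0.25% current accuracy.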
In the steady-state experiments, we fixed the device voltage and waited until the average temperature settled. We then recorded one temperature data point every 10 ms for 1 s and we calculated their average and standard deviation. The average corresponds to the surface temperature at this power dissipation and the standard deviation divided by 10 (the square root of the 100 recorded samples) corresponds to its random error, which is small compared with the thermocouple accuracy. We repeated the same process for the other power levels. Finally, the ambient temperature was measured and subtracted from the surface temperatures and the Δ T versus power and heat flux ( q ″) plots were created (Supplementary Fig. 3 ). The heat flux was calculated by dividing the power level by the device footprint area. A linear regression analysis of these curves results in the thermal resistance along with its standard error. We added the junction-to-case thermal resistance (0.50 K W –1 for GS66508T and 0.45 K W –1 for EPC2034) and the calculated thermal resistance of parylene C to the surface-to-ambient thermal resistance to obtain the total thermal resistance. In the transient experiments, we turned the device power on and we waited until its temperature reached the steady state. Depending on the device and cooling medium, the maximum temperature was between 55 and 80 °C. Since we compared dimensionless temperatures, this variation did not affect the later comparison. Then, we started recording the temperature versus time and we abruptly turned the device power off. We saved one temperature data point every 100 ms for the experiments done in air and every 10 ms for experiments done in water, where the time constant is smaller. The time constant and its standard error were obtained from the double-phase exponential decay fitting discussed earlier. We investigated the transient response of select Cu-coated heat spreaders under pulsed heat loading in air and water.
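The linear-regression step that extracts the thermal resistance can be sketched with synthetic data (an assumed 20 K W–1 resistance plus Gaussian noise; not measurements from this work):

```python
import random

random.seed(0)
R_TRUE = 20.0  # K/W, assumed resistance for this synthetic example
powers = [0.5, 1.0, 1.5, 2.0, 2.5]                            # W
dT = [R_TRUE * p + random.gauss(0.0, 0.2) for p in powers]    # K

# Ordinary least-squares slope of dT vs power = thermal resistance.
n = len(powers)
mx, my = sum(powers) / n, sum(dT) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(powers, dT))
         / sum((x - mx) ** 2 for x in powers))
print(f"fitted thermal resistance ~ {slope:.2f} K/W")
```

The fitted slope recovers the assumed resistance to within the noise level; in practice the regression also yields a standard error for the slope, as used in the paper.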
The surface temperature was measured as previously described but using an NI 9923 (National Instruments) terminal block instead of the PXIe-1073 chassis and TB-4300 module. This substitution allowed for higher precision in temperature measurements so that fluctuations during transient operations are more easily detected. The temperature was recorded at 1 kHz frequency. We used a triple-output d.c. power supply (OWON ODP6033) to apply an ON/OFF square-voltage signal at frequencies equal to 1.00 and 0.33 Hz in air, and 1.00 and 0.20 Hz in water, such that the total energy per cycle remained equal to 6.09 ± 1.46 J. To measure the corresponding voltage and current outputs, we used a digital oscilloscope (Keysight Technologies, DSOX3054T) connected to the voltage and current probes (Keysight Technologies, N2843A and 1147B, respectively).
Thermomechanical analysis
We performed FEA simulations to study the effect of parylene C/Cu coatings on the thermomechanical reliability of electronics. We used Ansys (2019 R3) Static Structural analysis to simulate all the cases of this study. The model we built consists of a 4.6 × 2.6 × 0.51 mm 3 Si chip connected to a 1.5 × 1.5 × 0.16 cm 3 FR-4 PCB through 25 solder balls (Fig. 5a ). The solder balls are made of SAC305, and they follow the truncated sphere model 45 with a height of 170 µm and sphere radius of 194.55 µm. We explored three coating cases in our simulations: SiO 2 with Cu, parylene C with Cu, and no coating at all. Both SiO 2 and parylene C act as the electrical insulation layer that isolates the electronics from Cu coatings. The electrical insulation and Cu coatings conformally cover all the exposed surfaces and have thicknesses of 10 and 150 µm, respectively. Since the model is symmetric, it was cut by two planes along the centrelines to reduce the computation time (Fig. 5a ). A fixed support was added on the lowest point of the line where the two symmetry planes intersect to prevent body motion.
We refined the mesh in the regions of interest (coatings, solder joint and Si). The effect of refining the mesh further on the results is negligible (≤10%). Next, we applied a three-cycle thermal load alternating between −55 and 85 °C, where each cycle consists of a 10 min dwell at −55 °C, a 7 min ramp up, a 10 min dwell at 85 °C and a 7 min ramp down. At each instant of this thermal loading, the temperature of all the bodies of the model was equal. We measured the equivalent plastic strain in the solder balls and Cu coating, the plastic strain energy density in the solder joints, and the von Mises stress in the Si, solder balls, FR-4, Cu and insulation coatings. Supplementary Section 9 provides the details of the thermomechanical analysis.
Data availability
Data supporting the findings of this study are available at . All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
Code availability
MATLAB, Ansys Static Structural and Ansys Icepak input files generated for this work are available at . All other files that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
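The three-cycle thermal load used in the FEA study can be expressed as a simple periodic profile (linear ramps assumed):

```python
def thermal_profile(t_min):
    """Temperature (degC) at t_min minutes into the FEA thermal loading:
    each 34 min cycle is a 10 min dwell at -55 C, a 7 min ramp up,
    a 10 min dwell at 85 C and a 7 min ramp down."""
    t = t_min % 34.0
    if t < 10.0:                                   # cold dwell
        return -55.0
    if t < 17.0:                                   # ramp up: 140 C over 7 min
        return -55.0 + (t - 10.0) * 140.0 / 7.0
    if t < 27.0:                                   # hot dwell
        return 85.0
    return 85.0 - (t - 27.0) * 140.0 / 7.0         # ramp down

for t in (0.0, 13.5, 20.0, 30.5, 34.0):
    print(f"t = {t:5.1f} min -> {thermal_profile(t):6.1f} C")
```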
Electronic devices, including smartphones and tablets, are becoming increasingly advanced and compact. As their performance increases and their size decreases, these devices generate more heat, which can compromise their safety and cause them to fail. In recent years, engineers have thus been trying to develop strategies that could prevent electronics from overheating. One proposed solution entails the use of heat spreaders, layers that promote the spread and dissipation of heat inside devices. Researchers at the University of Illinois at Urbana-Champaign and the University of California, Berkeley (UC Berkeley) have recently devised an alternative strategy that could cool electronics more efficiently than other existing solutions. Their strategy, introduced in a paper published in Nature Electronics, is based on the use of heat spreaders composed of an electrically insulating layer of poly(2-chloro-p-xylylene) (parylene C) and a coating of copper. "Our recent paper was the culmination of our efforts to produce coating heat spreaders for high-efficiency electronics cooling," Tarek Gebrael, one of the researchers who carried out the study, told TechXplore. "The motivation was to enable effective heat dissipation from power-dense electronics." Heat spreaders are cooling systems composed of materials with a high thermal conductivity, such as copper and aluminum. These systems can spread the heat generated by the devices across a larger surface area, making it easier for them to dissipate heat into the surrounding environment. "The advantage of using our conformal coating heat spreaders is that they cover the electronic device entirely, including the top, bottom, and sides of the device," Gebrael explained. "This is impossible with standard heat spreaders which are usually added on top of the device or with standard PCB copper planes.
By achieving those conformal coatings, we were able to provide more routes for the heat to leave the electronic device, which translates into a better cooling performance." Other teams have previously developed similar techniques that prevent overheating by opening more "routes" for heat to leave electronic devices. These earlier solutions, however, rely on very expensive materials, such as diamond, which makes them difficult to develop and implement on a large scale. Gebrael and his colleagues evaluated their copper-coated heat spreaders in a series of tests and found that they performed extremely well. Specifically, their solution achieved up to a 740% increase in power per unit volume compared to the standard air-cooled copper heat sinks used today. "This remarkable result derives from our spreaders' effectiveness in dissipating the heat, as well as the compact volume they occupy when applied on printed circuit boards," Gebrael said. "This feature enables fitting more electronics in a smaller space without overheating issues, which is essential to create the platforms of future technologies (AI, augmented reality, etc.)." In the future, the heat spreaders developed by this team of researchers could be used to cool down electronic devices more efficiently, without requiring expensive materials. Notably, the coating recipe they proposed combines processes that are already in use in the electronics industry, which could further facilitate its application in real-world settings and its commercialization. "We are now investigating the reliability and durability of our coatings in specific environments (boiling water, boiling dielectric fluids, thermal cycling, and high-voltage environments) for long periods of time," Gebrael added. "We want to make sure that our coatings retain their superior cooling performance. We are also implementing the coatings with full-scale power modules and GPU cards, whereas we used only simple test boards in the initial work."
10.1038/s41928-022-00748-4
Biology
The mystery of the flexible shell
Johannes Ihli et al, Mechanical adaptation of brachiopod shells via hydration-induced structural changes, Nature Communications (2021). DOI: 10.1038/s41467-021-25613-4 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-25613-4
https://phys.org/news/2021-09-mystery-flexible-shell.html
Abstract The function-optimized properties of biominerals arise from the hierarchical organization of primary building blocks. Alteration of properties in response to environmental stresses generally involves time-intensive processes of resorption and reprecipitation of mineral in the underlying organic scaffold. Here, we report that the load-bearing shells of the brachiopod Discinisca tenuis are an exception to this process. These shells can dynamically modulate their mechanical properties in response to a change in environment, switching from hard and stiff when dry to malleable when hydrated within minutes. Using ptychographic X-ray tomography, electron microscopy and spectroscopy, we describe their hierarchical structure and composition as a function of hydration to understand the structural motifs that generate this adaptability. Key is a complementary set of structural modifications, starting with the swelling of an organic matrix on the micron level via nanocrystal reorganization and ending in an intercalation process on the molecular level in response to hydration. Introduction For hundreds of millions of years, nature has evolved a large assortment of organic–inorganic hybrid materials such as bone, teeth, and shells. Each of these biominerals exhibits material properties that have been optimized to aid a particular function, such as navigation, protection, or mechanical support 1 , 2 . These properties arise from a three-dimensional multi-scale organization of the biomineral’s primary building blocks, e.g., inorganic nanocrystals, specialized proteins, and polysaccharides, from the molecular to the millimeter scale 3 , 4 , 5 . Biominerals with load-bearing functions are optimized, in particular, with respect to their mechanical properties, so as to provide sufficient stiffness to support the typical mechanical loads in the biomineral’s environment and enough toughness to resist crack propagation 3 . 
This optimization is achieved, first, by incorporating organic biopolymers within the inorganic phase, which increases the toughness of the inherently brittle mineral 6 , and second, by organizing the basic building blocks of the tissue into higher-order structures 7 . This hierarchical organization creates a large number of internal interfaces that help to avoid crack propagation and significantly increase fracture toughness. A further advantage of a hierarchical structure is that it endows the organism with an additional level of constructional control, where the basic building blocks can be assembled into different structural motifs with different mechanical properties 8 . Altering the material properties of a biomineral in response to environmental stresses generally requires active restructuring by remodeling by the organism: a time- and energy-consuming process that involves the resorption of the existing biomineral, followed by the precipitation of new tissue with a different structure and composition 9 , 10 . In this paper, we report that the load-bearing shells of the brachiopod Discinisca tenuis 11 are able to dynamically modulate their mechanical properties in response to a change in the environment without the need for remodeling via resorption and regeneration of the tissue, i.e., they switch from hard and stiff when dry to malleable when hydrated within minutes. Importantly, when hydrated, the shell can bend freely to the point that it can be folded in two without fracturing. The effects that water and the degree of hydration of the organic matrix have on the mechanical properties of biominerals are well recognized 12 . Water, as a component of most biominerals, is known to increase the flexibility of materials such as bone, teeth, and shells. 
Modulation of hardness and elastic modulus/flexibility 13 by passive control of the water content of the tissue has been suggested to occur in non-mineralized insect cuticle 14 and in mineralized crustacean cuticles, both of which contain organic matrices composed of chitin and proteins 15 , 16 . In these cases, the changes in mechanical properties are due to the plasticizing role of water and do not involve major changes in the structure of the tissue 12 , 16 . However, none of these aforementioned mineralized tissues, in its natural hydrated state, exhibits flexibility comparable to that of the mineralized D. tenuis shell. We hypothesized that the extreme flexibility of the hydrated D. tenuis shell cannot be accounted for solely by the plasticizing effect of water, as in these other examples. Rather, such reversibility between stiff and flexible as a function of hydration must have its origins in the structure of the D. tenuis shell, with water promoting structural changes at different hierarchical levels. The mechanisms that underpin these changes in mechanical properties as a function of hydration are unknown. Chemically controllable material properties and the causal structural motifs are of significant interest in the design of stimuli-responsive synthetic materials 17 . As such, there is an imperative to determine how hydration alters the structure of the D. tenuis shell, and how these changes facilitate the modulation of its mechanical properties. Using a combination of ptychographic X-ray tomography, electron microscopy, small- and wide-angle X-ray scattering, solid-state nuclear magnetic resonance spectroscopy, and mechanical testing, we characterized the shell's hierarchical structure and composition as a function of hydration, covering the micro- and nanoscales, and provide insight into the molecular changes. 
We demonstrate that water absorption by the shell induces a complementary set of structural modifications, starting with the swelling of an organic matrix on the micron level, via nanocrystal reorganization and restructuring, and ending in the intercalation of water between the organic framework and the mineral on the molecular level. In combination, we propose that these changes endow the shell with its mechanical adaptability. We envisage that these observations will aid and inspire the design of novel synthetic materials with properties that can be modulated in real time. Results Global compositional analysis The shells of D. tenuis (Fig. 1a ) are an organic-inorganic composite material, in which the mineral phase constitutes about 68 wt% of the dry shell 11 . The mineral phase is composed predominantly of carbonate-substituted fluorapatite crystals in the form of francolite 18 (Supplementary Fig. 1 ), with minor contributions of amorphous calcium phosphate, octacalcium phosphate, and tricalcium phosphate 19 . The remaining ~32 wt% of the shell consists of various organic fractions, of which chitin, glycosaminoglycans, and proteins make up the dominant portion 11 , 19 , 20 , 21 . While these shells do not exhibit discernible structural motifs at the micron scale (Fig. 1b ), high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) has shown that they are hierarchically structured, consisting of a laminated, brick-work-like structure. This structure is arranged normal to the shell height, with francolite crystals (Fig. 1c , bright objects) enwrapped by a network of chitin and proteins (Fig. 1c , dark regions). Fig. 1: Hierarchical structure of a brachiopod Discinisca tenuis shell. a Top-down optical micrograph of a dry brachiopod Discinisca tenuis shell. Scale bar is 2 mm. b Cross-sectional scanning electron microscopy images of the dry shell at increasing magnification. 
Left: low magnification image showing the cross-section across the z -axis. Right: high magnification of the area marked by the dotted square. Scale bars are 20 µm (left), 2 µm (right). c High-angle annular dark-field scanning-transmission electron microscopy (HAADF-STEM) images of a thin section from a dry shell. Scale bars are 200 and 20 nm. White arrows point to the organic matrix component (dark areas) surrounding the mineral (bright areas). d Cross-sectional electron micrograph acquired with backscattered electrons (BSE) of a fully hydrated shell thin section folded in two. Scale bar 50 µm. Source data for this figure are available at the University of Edinburgh DataShare, data identifier 67 . Full size image Importantly, these shells which are hard and brittle when dry, significantly increase in flexibility upon hydration to the point where they can be folded in half without fracturing as seen in Fig. 1d and supplementary Movie 1 . This process is reversible so that shells can be cycled multiple times through hard/brittle and soft/flexible by dehydrating/rehydrating. Thermogravimetric analysis (TGA) was used to determine the water content of an atmospherically dry shell stored in the air as compared to a fully hydrated shell after immersion in H 2 O for 24 h (Supplementary Fig. 2 ). The dry shell displayed a gradual water weight loss of 5% from ambient temperature to 200 °C. The hydrated shell exhibits a water weight loss of 24% across the same temperature range. Notably, in the hydrated shell, these losses occur in two distinct steps, the first at 60 °C, corresponding to the loss of physisorbed water accounting for ~18 wt% H 2 O and a more gradual secondary loss between ~100 and 200 °C (~6 wt% H 2 O). 
Further weight loss steps, observed for both the dry and the hydrated shell, occurring above 287 and 400 °C, are related to the pyrolysis of the organic components 22 , 23 , while decarbonation occurs above 700 °C and results in the formation of fluorapatite (Supplementary Fig. 3 ). Mechanical behavior characterization Depth-sensing nanoindentation was used to determine the mechanical properties of the shell as a function of the degree of hydration. Due to practical and geometrical constraints, i.e., the brachiopod’s shell is curved, has an irregular surface and a thickness of 50–500 µm that is both shell- and location-dependent, depth-dependent dynamic nanoindentation measurements were performed on shell cross-sections. These cross-sections expose the laminated structure (Fig. 1b ), i.e., the indentation direction is in the plane of the laminae. Measurements were performed on atmospherically dry and fully hydrated shells on the same sample, at the center of the cross-section or shell diameter. Depth-sensing nanoindentation measurements show that both Young’s modulus ( E IT ) and hardness ( H IT ) drop drastically when the shell becomes hydrated (Fig. 2 ). At the maximum tested load of 30 mN, E IT is about 26% of the dry value and H IT shows a similar reduction down to 22% (Supplementary Table 1 ). The greater deformability of the hydrated sections under the same applied load is demonstrated also by the increase of the maximum indentation depth from ~1.5 to ~3.5 µm at 30 mN (Supplementary Fig. 4 ), as well as by the larger residual indentation imprint (Supplementary Fig. 5 ). Fig. 2: Depth-dependent dynamic nanoindentation measurements of D. tenuis brachiopod shell samples at different hydration levels. Plotted is the dependence of shell hardness H IT (red, right) and Young’s modulus E IT (blue, left) as a function of indentation depth for an atmospherically dry shell sample (top, filled symbols) and a fully hydrated shell sample (bottom, open symbols). 
Measurements were conducted in continuous stiffness mode up to a maximum force of 30 mN. Shown is the average over eight indentation measurements. Full size image Micron-scale characterization Ptychographic X-ray computed tomography (PXCT) was used to characterize the shell structure at increasing levels of hydration on the micron and sub-micron levels. PXCT provided quantitative electron density tomograms ( n e Å − 3 ) 24 , 25 , 26 , with a half-period spatial resolution of roughly 85 nm (Supplementary Fig. 6 ). These tomograms allow a structural evaluation of the shell and provide information such as local variations in swelling behavior and the hydration degree of specific components in the sample, e.g., organic- and mineral-rich regions. Sample cylinders, greater than 15 µm in diameter, were extracted along the shell width ( z- axis in Fig. 1b ) and prepared using a nitrogen-cooled micro-lath (Supplementary Movie 2 ) 27 . These cylinders were then either vacuum-dried or incubated at 70 % or 100 % relative humidity (RH) for 36 h 28 , 29 , resulting in samples of increasing hydration level. Lastly, samples were flash-frozen in liquid nitrogen and analyzed using cryo-PXCT at −180 °C. PXCT-derived electron density tomograms are shown in Fig. 3 . As the vacuum-dried sample can be described as a two-component system of mineral and organic fractions, each with approximately known electron densities of 0.78 n e Å − 3 (francolite) and 0.46 n e Å −3 (chitin), the measured electron densities can be used to determine the shell’s composition globally and locally in consideration of partial volume effects 24 , 30 . Partial volume effects refer to the occupation of a volume element, e.g., a voxel, by multiple components, leading to a fractional occupancy-related electron density. 
For example, the average electron density of the vacuum-dried sample, 0.59 n e Å −3 , suggests that the vacuum-dried shell consists of ~58 vol% organics, or roughly 33 wt%, as previously reported 11 , 31 . Moreover, using the vacuum-dried shell as a compositional reference point, the average water content in the hydrated samples can be estimated, i.e., observable changes in electron density are attributed to the incorporation of water into the shell structure. In detail, whereas the shell sample stored at 70% RH contains roughly 17 vol% water, or ~4 wt%, the sample incubated at 100% RH possesses up to 50 vol% water, or ~12 wt%. Although the hydration level of the sample stored at 70% RH is comparable with that of the atmospherically dry sample shown in Fig. 1c , a lower hydration level was measured for the 100% RH sample by PXCT when compared to the shell sample fully immersed in water shown in Supplementary Fig. 2 . This discrepancy is a result of the sample cylinder hydration process, which was used to avoid structural alteration of the shell during the freezing process. Fig. 3: Electron density tomograms of D. tenuis brachiopod shell samples at increasing hydration level. a Example volume rendering of the imaged, cylindrical sample pillars. Sagittal cut slices through the center of b a vacuum-dried sample, c a sample incubated at 70% RH, and d a sample incubated at 100% RH. The cutting plane is represented by the yellow line shown in ( a ). Scale bars are 2 µm. Common to all cuts is a single color scale, ranging from white to yellow, representative of electron density values. Shown in e are sample plane-averaged electron density line profiles normal to the laminae structure (pink arrow), alongside secondary derivatives highlighting major fluctuations in electron density. The corresponding frequency-normalized ( N ) electron density histograms are shown in ( f ). 
Further provided in f are the theoretical electron density values of the shell's main components: francolite, 0.78 n e Å −3 ; high-molecular-weight polysaccharides, approximated using chitin, ~0.46 n e Å −3 ; and low-density amorphous ice, 0.31 n e Å −3 . PXCT measurements were conducted under cryogenic conditions. The voxel size of all tomograms is (38.8 nm) 3 . Source data for this figure are available at the University of Edinburgh DataShare, data identifier 67 . Full size image Figure 3a provides an example volume rendering of one of the imaged sample cylinders. Sagittal slices through the acquired electron density tomograms are presented in Fig. 3b–d and Supplementary Movies 3 – 5 . These slices reveal a progressively more defined laminar structure of alternating high-electron-density, mineral-rich layers and low-electron-density, organic-rich layers normal to the shell height from the dry to the fully hydrated sample. To quantify the separation of these layers and emphasize local variations in their thickness and composition, layer-averaged electron density profiles normal to the laminae structure were calculated (Fig. 3e ). Visible in these profiles is a continued expansion of the organic-rich layer thickness, from ~160 nm in the dry sample to ~180 nm in the partially hydrated sample (70% RH) to ~340 nm in the fully hydrated sample (100% RH). The mineral-rich layers also appear to expand, although to a much lesser extent, from ~90 nm in the dry sample to ~110 nm in the partially and fully hydrated samples. These data suggest the existence of two local hydration environments, associated with either the mineral-rich or the organic-rich layers, each possessing a distinct hydration capacity. The corresponding electron density histograms of the tomograms support this interpretation (Fig. 3f ). With increasing hydration, an evolution of the Gaussian distribution centered at 0.59 n e Å −3 is visible. 
Partial hydration results in a shift of the Gaussian distribution towards an electron density centered at 0.54 n e Å −3 . This shift, while retaining the Gaussian distribution, suggests a near-uniform water uptake throughout the shell, i.e., including the organics enwrapping the francolite nanoparticles. Further hydration leads to the development of a broad and asymmetric electron density distribution with three main peaks, centered at 0.42, 0.51, and 0.59 n e Å −3 . While the retained peak at 0.59 n e Å −3 and the newly emergent peak at 0.42 n e Å −3 can reasonably be assigned to mineral-rich domains and fully hydrated organic-rich domains, respectively, the persistence and dominance of the intermediate peak are intriguing. It is indicative of either a not yet fully completed hydration process or the presence of not one but two organic-rich layer structures in the shell, each with a different hydration capacity. To investigate the existence of such a variety in structure, and to establish a correlation between the degree of hydration and the volume expansion of the organic-rich layers, we remapped the electron density tomogram of the fully hydrated sample to percent water weight. Subsequently, we characterized the organic-rich layers in this tomogram, which revealed that, within a single layer, hydration is rather uniform. However, hydration can and does vary across layers. Visible in the tomogram is a zonation in hydration along the shell height. Organic-rich layers in close proximity to the shell exterior and extending microns in height compartmentalize up to ~70 vol% of water, or ~24 wt%. Organic-rich layers toward the shell interior adopt a hydration level of around 8 wt% H 2 O. This zonation is in agreement with the variety in organic-rich layer structure detected in the histogram (see Methods and Supplementary Fig. 7 ). 
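The composition and water-content estimates above follow from the linear (partial-volume) mixing of electron densities described in the text, using the quoted component values (francolite 0.78, chitin 0.46, low-density amorphous ice 0.31, all in n e Å −3). A minimal sketch of that arithmetic (function names are illustrative):

```python
def organic_vol_frac(rho_meas, rho_mineral=0.78, rho_organic=0.46):
    # Two-component partial-volume mixing for the dry shell:
    # rho_meas = f * rho_organic + (1 - f) * rho_mineral, solved for f
    return (rho_mineral - rho_meas) / (rho_mineral - rho_organic)

def water_vol_frac(rho_hydrated, rho_dry=0.59, rho_ice=0.31):
    # Hydration shifts a voxel's density from the dry-shell average
    # toward that of low-density amorphous ice
    return (rho_dry - rho_hydrated) / (rho_dry - rho_ice)

print(organic_vol_frac(0.59))  # ~0.59 -> ~58-59 vol% organics (dry shell)
print(water_vol_frac(0.54))    # ~0.18 -> ~17 vol% water (70% RH sample)
print(water_vol_frac(0.45))    # ~0.50 -> up to ~50 vol% water (100% RH)
```

The same relation, applied voxel by voxel, underlies the remapping of the fully hydrated tomogram to percent water.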
Nanometer-scale characterization As PXCT measurements are limited in spatial resolution, e.g., a single voxel is occupied by multiple organic matrix-coated francolite crystals (Fig. 1c ), we used backscattered electron-scanning electron microscopy (BSE-SEM), offering a higher resolving power, to confirm and expand on these observations. BSE-SEM further allowed us to resolve the shell's fine structure and probe the hydration behavior on the nanoscale. Cross-sections of a fully hydrated shell (fixed in 4% formaldehyde) and a dry shell stored in air were prepared through a series of dehydration, critical-point drying, resin embedding, and mechanical polishing (details in Methods). Overview micrographs confirm both the presence of organic-rich layers of varying thickness and their volume expansion upon hydration (Fig. 4a, b ). The volume expansion is suggested to occur through the uptake of physisorbed water. In addition, given the available cross-sectional view of the entire shell height in these micrographs, it is evident that major organic-rich layers are predominantly located toward both the outer and inner surfaces. Equally visible in these micrographs is a sparse network of transport channels, <400 nm in diameter 11 , throughout the entire shell, exhibiting a tortuous structure running dominantly normal to the laminae structure. See also Supplementary Movies 3 and 4 and complementary small-angle X-ray scattering measurements shown in Supplementary Fig. 8 . In addition, PXCT data show that these pores change in electron density from ~0.05–0.2 n e Å −3 in the vacuum-dried sample to ~0.3–0.35 n e Å −3 in the hydrated sample (Supplementary Fig. 9 ). This change is in agreement with the pores carrying water in the hydrated state of the shell; for comparison, the electron density of amorphous ice is 0.31 n e Å −3 . Fig. 4: Scanning electron microscopy of polished shell samples. 
Shown in the left panels are SEM-BSE cross-section micrographs of a a dry shell stored in air and b a fully hydrated shell fixed with 4% formaldehyde. Blue arrows highlight fracture lines incurred during sample preparation. Red arrows point out example areas of high organic content. Blue circles indicate pores in the shell. Scale bars are 20 µm. Provided in the central panels are images at a higher magnification, acquired either with secondary electrons (SE) to stress variations in morphology and surface topography, or with backscattered electrons (BSE) to highlight compositional/elemental contrast. Scale bars are 200 nm. The inset highlights the francolite bundle dimension. The scale bar is 50 nm. In the far-right panels are average line profiles of the grayscale within the colored boxes in ( a ) and ( b ), normal to the laminae direction. Source data for this figure are available at the University of Edinburgh DataShare, data identifier 67 . Full size image Selected-area, high-magnification SEM images (Fig. 4b ) probing the nanoscopic level not only confirm the swelling of organic components at this level but also disclose another feature of the shell's organization. Throughout the shell, francolite crystals appear to be organized in individual, rod-shaped bundles, roughly 25 nm in diameter and 100 nm in length (also seen in Fig. 1c and Supplementary Fig. 8 ). These bundles are found within both mineral- and organic-rich layers, and appear to expand in volume upon hydration, suggesting that they are composed of francolite nanocrystals and organic matrix elements. PXCT measurements confirm this assessment, as the highest voxel-level recorded electron density is still well below that of francolite, indicating that each bundle possesses a minimum organic content of 15 vol%. 
The nanocrystals within a bundle are asymmetric in shape, with a short axis of 3–6 nm and a long axis of 14–19 nm, according to Rietveld refinement of powder diffraction data (Supplementary Fig. 1 ). As evident from Fig. 4b , the bundles and the nanocrystals preserve some preferential orientation upon hydration, with their long axes parallel to the laminar direction. Molecular-scale characterization Solid-state nuclear magnetic resonance (ssNMR) spectroscopy and differential scanning calorimetry (DSC) were used to investigate the hydration process of the mineral and of the organics surrounding individual francolite crystals on the molecular level. To examine the effect that hydration has on the molecular structure of the organic matrix, we collected 13 C solid-state NMR spectra. Presented in Fig. 5a are 13 C cross-polarization magic angle spinning NMR ( 13 C CP MAS NMR) spectra of a dry and a fully hydrated shell, revealing a hydration-induced sharpening of amide signals (170–180 ppm) to the extent that two amide signals are resolved for the hydrated shell (170.5 and 174.4 ppm), compared to a single broad signal in this region for the dry shell. The chitin N-acetyl methyl group 13 C signal occurs, as expected, between 20 and 25 ppm, and the N-acetyl C=O 13 C around 175 ppm. We assign the signals at 22.8 and 174.4 ppm, which sharpen with increasing hydration, to chitin (or other glycan) N-acetyl groups that become more mobile upon hydration of the shell. Fig. 5: Solid-state nuclear magnetic resonance spectroscopy and differential scanning calorimetry of D. tenuis brachiopod shell samples. a 13 C cross-polarization magic angle spinning (CP MAS) NMR spectra of an atmospherically dry and a hydrated shell. b Rotational-echo double resonance (REDOR) NMR on an atmospherically dry shell; cyan: reference spectrum; orange: REDOR dephasing spectrum. 
c Differential scanning calorimetry measurements of an atmospherically dry and a hydrated specimen and of β-chitin extracted from a brachiopod shell. Full size image To examine the interface between the organic matrix and the mineral, 13 C{ 31 P} rotational-echo double resonance (REDOR) NMR spectra were collected to determine which organic functional groups are in closest contact with the phosphate anions of the francolite crystals (Fig. 5b ). Signals that have a reduced intensity in the REDOR spectrum compared to the reference spectrum of the dry shell (orange spectrum in Fig. 5b ) are indicative of 13 C sites that are within 0.8–1 nm of 31 P. The insufficient signal intensity, even in the reference spectrum of the hydrated shell sample, due to short T 2 relaxation times, indicates that hydration results in a significant increase in the molecular mobility of all organic components of the shell material. Further, signals from methyl groups at 16.9 and 22.8 ppm show a reduction in intensity in the dry REDOR spectrum (orange spectrum in Fig. 5b ), along with both amide carbonyl signals, a broad signal at ~185 ppm from carboxylate groups, and a set of signals corresponding in chemical shift to primary or secondary amine 13 Cs (50–60 ppm, as labeled in Fig. 5b ). Interestingly, there is no reduction in intensity in the REDOR spectrum of signals from chitin/glycan ring carbons, suggesting that these carbons are more than 0.8–1 nm from phosphate. The amide carbonyl signals are from glycan N-acetyl and/or protein–peptide bond carbonyls, and the 22.8 ppm signal is from the N-acetyl methyl 13 C in chitin or other glycans, suggesting that the N-acetyl moieties of chitin/glycan molecules are associated with the mineral. In summary, the ssNMR data indicate that chitin is organized in layers with its N-acetyl groups facing the mineral where possible, creating inter-layer channels that allow the intercalation of water molecules during hydration. 
This picture is consistent with the chitin network surrounding the crystals absorbing water. The result is increased mobility of the macromolecular chains and thus flexibility of this particular part of the shell when hydrated. DSC measurements of entire shells and of chitin extracted from D. tenuis shells, presented in Fig. 5c , are in general agreement with the suggested hydration process. Not only does the DSC signal of an atmospherically dry shell (5 wt% H 2 O) display two endothermic peaks, one at 156 °C and a second at 200 °C, it also matches the expected stepwise transition of β-chitin dihydrate to its anhydrous form 32 . Importantly, these transitions are recorded at significantly reduced temperatures, 137 and 175 °C (Fig. 5c ), in the case of a fully hydrated shell (24 wt% H 2 O). These observations imply that water molecules intercalate between the polysaccharide chains of dihydrate chitin and decrease the inter-chain interactions, making the molecules more mobile. Chemically, this can be explained by the preference of the hydroxyl groups in the pyranose ring to make hydrogen bonds with the more mobile solvent molecules rather than with a neighboring sugar residue 33 . Lastly, to characterize the effect that hydration has on the mineral, 2D 1 H– 31 P heteronuclear correlation solid-state NMR spectra were collected on shell fragments (Fig. 6 ). Spectra from atmospherically dry shells show multiple distinct 1 H environments correlated with phosphate 31 P signals, evidence of a well-structured mineral on the molecular length scale. The 1 H environment near 4.7 ppm is similar to that previously observed for water in the amorphous hydrated surface layer of nanocrystalline hydroxyapatite 34 . 
The 1 H signals between ~10 and 15 ppm are from mineral hydrogen phosphate groups with 1 H chemical shifts that are similar to those found in the amorphous hydrated surface layer of synthetic nanocrystalline hydroxyapatite 35 and in the hydrated layers of octacalcium phosphate, and in the hydrated calcium hydrogen phosphate phases, monocalcium monohydrate and brushite 36 . There is an additional intriguing 1 H signal around 7.5 ppm; a similar 1 H chemical shift was observed in hydroxyapatite samples and has previously been tentatively assigned as hydroxyapatite-associated hydrogen phosphate 36 , and 2D 1 H– 31 P correlation spectra of synthetic hydroxyapatites contain intensity in this spectral region as well 34 (Fig. 6a ). Upon hydration, the 1 H spectrum is dominated by a single water signal which has shifted to being centered at 5.9 ppm (Fig. 6b ), indicating a significant change in the mineral water environment; the shift to a higher frequency for the water 1 H resonance suggests the water is in a more strongly hydrogen-bonded environment, such as that in the hydrated layers of OCP or the crystalline water in brushite 36 . These observations suggest that in the hydrated shell, water associated with the mineral is in smaller width channels than in the dry shell. These spectral changes are consistent with a relaxation of strain in the mineral structure upon hydration, as suggested by wide-angle-X-ray-scattering measurements of a shell in its dry and hydrated form (Supplementary Fig. 10 ) and possibly the result of cracking or partial hydrolysis of mineral crystals and admission of water into the resulting cracks/ hydrolyzed regions. Fig. 6: 1 H– 31 P heteronuclear correlation solid-state NMR spectra of D. tenuis brachiopod shell samples. Shown are changes in the chemical structure of the atmospherically dry shell ( a ) upon hydration ( b ). The correlation degree follows a normalized color map ranging from red to white (0–1). 
Discussion We show that water absorption causes structural changes in the shell at three levels: (1) At the microscopic level, where organic-rich laminae swell due to the uptake of physisorbed water; (2) at the nanoscopic level, where the organic matrix, surrounding mineral bundles, swells, and (3) at the molecular level, where the chitin network that surrounds the mineral crystals within each bundle becomes hydrated. This results in a mobility increase of the macromolecular chains of the polysaccharide and in the intercalation of water molecules between chitin and the mineral. These insights allowed us to develop a hypothesis as to how such structural changes translate into the observed mechanical adaptability. Our proposed model and the wider significance of the material properties of the shell are discussed below. To explain the structure–property relationship of the shells of D. tenuis we propose the following. At the microscopic level, the shell has organic-rich regions which swell when hydrated (Fig. 7 i). This results in thicker laminae of low stiffness intercalated with high-stiffness mineral-rich regions, which provides higher flexibility and fracture toughness 37 . At the nano-scale, the mineral-rich regions are composed of francolite crystals assembled into rod-like bundles ca . 25 nm in diameter and 100 nm in length surrounded by a network of chitin. This intercalation of two different materials with different elastic moduli—low-stiffness chitin surrounding high-stiffness mineral crystals—is responsible for the inherent high toughness of the shell 38 . We propose that the swelling of the organic components at this level, increasing the disorder in the arrangement of these bundles, facilitates the movement of these structural building blocks when a load is applied (Fig. 7 ii). 
At the molecular level, hydration reduces the stiffness of chitin 39 by breaking stabilizing hydrogen bonds between the sugar residues 33 , perhaps analogous to how water breaks the inter-peptide bonds in collagen in dentine, decreasing the hardness and stiffness of the latter. It is therefore conceivable that this reduction in the stiffness of chitin, together with the increase in mobility of the polysaccharide side chains, helps to dissipate mechanical stress 40 . In addition, we propose that the intercalation of water molecules between the polysaccharide and the mineral decreases the interaction between these two components so that they can slide/move with respect to each other when under a load (Fig. 7 iii). In combination, these structural changes could explain the mechanical adaptability of the shell as a function of the surrounding environment. Fig. 7: Hydration scheme of a D. tenuis brachiopod shell. Schematic of the proposed hydration mechanism across length scales from the micron (i), sub-micron (ii), nano (iii), and molecular level (iv) 33 , 68 , 69 . In (iii) and (iv), we propose that the intercalation of water between the mineral and chitin enables the mineral units to move more freely when a load is applied. In view of the passive, rapid, and repeated adaptability of the shell, a key factor facilitating the hydration process is the efficient transport of water into and out of the shell. As the mechanical properties of the shell change within minutes after immersion in or removal from water, diffusion through the mineral layers is unlikely to be the dominant transport mechanism. As shown in Fig. 4a, b and Supplementary Fig. 9 , the shell is permeated by pores that run predominantly normal to the laminae structure. As these pores become filled with water in the hydrated sample (Supplementary Fig. 9 ) and are interconnected with the protein and chitin networks, as discussed by Williams et al. 
11 , it is conceivable that they serve as hydration channels in the mineralized tissue. In terms of how hydration affects the mechanical properties of the shell, absorption of water causes a reduction in elastic modulus ( E ) and hardness by factors of around four. Considering that hardness is generally proportional to yield stress ( σ y ), and the shell diameter or thickness ( h ) is nearly unchanged following hydration, this suggests that both the dry and hydrated shell possess roughly the same flexibility ( f ) as defined by Peng and Snyder (2019), f = (2/ h )( σ y / E ) 13 . Therefore, the increased flexibility observed upon hydration does not originate from a larger decrease in E compared to σ y . A decrease in elastic modulus by a factor of four implies that a four times higher elastic strain can be imposed on the hydrated shell compared to the dry one under the same load, possibly explaining why it becomes so easy to bend. Yet, the observed change in hardness, positively correlated with yield stress, suggests that plastic deformation develops under a four times smaller stress. In other words, when the bending force is increased, the hydrated shell material enters the plastic deformation regime at about four times smaller stress compared to the dry one. The fact that much larger deformations can be achieved in the hydrated state compared to the dry state can only be explained by considering the presence of some plastic deformation together with a much higher ductility upon hydration (in general, plasticity at smaller stress brings about an enhanced ductility, i.e., the ability to withstand larger plastic deformation without fracture). If the situation were otherwise, it would be possible to achieve the same deformation in the dry shell simply by applying a four times higher force. This is not the case, as the dry shell fractures at small strains. 
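The flexibility argument above can be checked numerically. The following is a minimal sketch using hardness as a proxy for yield stress and assuming the reported ~4x reduction in both modulus and hardness; all numerical values are illustrative assumptions, not measured values from the study.

```python
# Flexibility metric f = (2/h) * (sigma_y / E) of Peng and Snyder (2019),
# with hardness taken as a proxy for yield stress (H proportional to sigma_y).
# All numbers below are illustrative assumptions, not measured values.

def flexibility(thickness_m, yield_stress_pa, modulus_pa):
    """Elastic flexibility metric f = (2/h) * (sigma_y / E), in 1/m."""
    return (2.0 / thickness_m) * (yield_stress_pa / modulus_pa)

H = 200e-6                     # m, shell thickness (reported range 50-500 um)
E_DRY, E_WET = 40e9, 10e9      # Pa, assumed ~4x modulus reduction on hydration
SY_DRY, SY_WET = 2e9, 0.5e9    # Pa, assumed ~4x yield stress reduction

f_dry = flexibility(H, SY_DRY, E_DRY)
f_wet = flexibility(H, SY_WET, E_WET)

# Both E and sigma_y drop by the same factor, so f is unchanged: the
# macroscopic gain in bendability must come from added plasticity and
# ductility, not from the elastic flexibility metric alone.
print(f_dry, f_wet)
```

Because the two four-fold reductions cancel inside the ratio σ_y/E, the sketch reproduces the paper's conclusion that the elastic metric alone cannot explain the hydrated shell's behavior.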
In summary, we suggest: (1) The large deformations in the hydrated shell are never purely elastic but include a certain degree of plasticity; (2) the hydrated shell has a much higher ductility and does not fracture immediately when entering the plastic deformation regime, in contrast to the brittle dry shell; (3) the combined changes of elastic modulus, hardness and ductility with hydration/dehydration determine the macroscopic mechanical behavior of the shell. It is interesting to compare this behavior with that of other biominerals. Dentine, for example, possesses a similar ratio of organic to mineral content, yet displays a decrease in elastic modulus by a factor of ~1.5, and hardness by a factor of ~3 upon hydration 41 . The crustacean endocuticle, which has a thickness that is more comparable to that of the D. tenuis shell ( ca . 200–300 µm 16 , whereas the shell is 50–500 µm thick), and a more comparable passive change in flexibility with changes in hydration, displays a decrease in stiffness by a factor of ~1.4 upon hydration, while the yield stress changes by a factor of ~4 16 . In the latter, these changes are driven mainly by the interaction of water with the protein that is associated with the chitin fibers 39 , and ingress of water breaking hydrogen bonds between macromolecular chains 33 or inter-peptide bonds 41 is a common mechanism by which water increases the flexibility of biominerals. In these cases, water chiefly acts as a plasticizer, increasing the viscoelasticity and plasticity of the organic matrix components 12 through changes in intermolecular hydrogen bonding in the tissue as discussed above. What sets the D. tenuis shells apart from other biominerals is the extent of the flexibility caused by hydration, which is not seen in other mineralized tissues, and the speed of the structural reorganization underlying the change in flexibility. 
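The effect of softening the organic "mortar" between stiff mineral bundles, discussed above, can be bracketed with the classical Voigt (parallel) and Reuss (series) bounds for a two-phase composite. The sketch below uses assumed, illustrative moduli (the paper reports no such values); it only demonstrates the generic point that plasticizing the soft phase mostly collapses the series-loading stiffness.

```python
# Voigt (iso-strain) and Reuss (iso-stress) bounds for a two-phase composite:
# a sketch of how a low-stiffness organic phase intercalated with
# high-stiffness mineral bounds the effective modulus.
# All moduli and fractions below are illustrative assumptions.

def voigt_modulus(e_mineral, e_organic, f_mineral):
    """Upper bound: both phases strain equally (load along the layers)."""
    return f_mineral * e_mineral + (1.0 - f_mineral) * e_organic

def reuss_modulus(e_mineral, e_organic, f_mineral):
    """Lower bound: both phases carry equal stress (load across the layers)."""
    return 1.0 / (f_mineral / e_mineral + (1.0 - f_mineral) / e_organic)

E_MINERAL = 100.0    # GPa, assumed stiffness of the mineral crystals
E_ORG_DRY = 10.0     # GPa, assumed dry chitin-rich matrix
E_ORG_WET = 1.0      # GPa, assumed hydrated (plasticized) matrix
F_MINERAL = 0.65     # mineral volume fraction (60-70% inorganic)

for label, e_org in (("dry", E_ORG_DRY), ("hydrated", E_ORG_WET)):
    ev = voigt_modulus(E_MINERAL, e_org, F_MINERAL)
    er = reuss_modulus(E_MINERAL, e_org, F_MINERAL)
    # Hydration barely moves the Voigt bound but collapses the Reuss bound,
    # i.e., load transfer THROUGH the soft phase is what softens.
    print(f"{label}: Voigt {ev:.1f} GPa, Reuss {er:.1f} GPa")
```

Under these assumptions the Voigt bound changes little on hydration while the Reuss bound drops by nearly an order of magnitude, consistent with the picture that the hydrated organic interlayers control the compliance of the composite.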
To put it simply, mollusc shells, bone and dentine cannot bend in half without breaking, as the hydrated D. tenuis shell can, no matter what their water content. Considering that bone and dentine have similar amounts of organic and inorganic contents as the D. tenuis shell (60–70% inorganic and 30–40% organic), it is clear that the ability of the shell to bend freely cannot be due solely to its high organic content and the plasticizing effect of water. The differences in mechanical behavior must also be due to how the building blocks of each material are organized. As described above, the arrangement of the francolite nanocrystals into discrete bundles enwrapped by a layer of the organic matrix provides the shell with separate blocks that can move with respect to each other once a load is applied, and the stiffness of chitin and the chitin–mineral interactions are weakened by hydration. Moreover, the mineral itself appears, from solid-state NMR measurements, to restructure reversibly upon hydration/dehydration. We speculate that the activation energy for such restructuring comes from the relaxation of crystal strain upon water ingress and the resulting formation of hydrated channels or layers with dimensions on the scale of the water channels and layers in OCP or brushite, for instance. Bone, on the other hand, is made of collagen fibrils with intra- and extra-fibrillar mineral 42 , 43 , 44 , 45 . The mineralized collagen fibrils are further arranged into higher-order structures—such as unidirectional ordered fibrils or plywood structures, further arranged into super-structures 3 , 44 . This hierarchical organization results in a material with high stiffness that resists deformation when under a load. Similarly, other biominerals such as the nacreous layer in molluscs have their mineral building blocks arranged such that they cannot move much with respect to each other when under a bending load. 
The nacre of bivalves, for example, is made of aragonite tablets that are significantly larger than the crystalline units of the D. tenuis shells—500 nm in thickness and 5–15 μm in diameter 46 —and are staggered with respect to each other, which prevents any deformation on the scale reported here for the brachiopod shell. In addition, while these tissues are naturally hydrated, they have not been reported to take up significant additional amounts of water as the D. tenuis shell does. As demonstrated here, an increase in water content is a prerequisite for increased flexibility. As for the non-mineralized insect cuticle and the mineralized crustacean cuticles, these tissues are composed of chitin–protein fibers aligned in parallel arrays forming horizontal planes that are stacked vertically with the gradual rotation of the long axis of the fibers around the normal axis of the cuticle, leading to a twisted plywood structure 16 , 39 . Their hardness and stiffness depend on the stacking density of the chitin–protein layers and on the degree of mineralization 47 . These two structural factors do not change upon hydration and dehydration. In summary, we conclude that the responsiveness of the mechanical properties of the D. tenuis shell to hydration, when compared to other biominerals, results from a combination of several factors: (1) the high amount of organic content; (2) the plasticizing role of water on the organic matrix and mineral; (3) the weakening of the interaction between the mineral and chitin upon hydration, allowing the former to move more freely under a load; and (4) the unique hierarchical structure of the shell, with crystals surrounded by a chitin matrix at the nanoscale, and organic-rich layers at the micron scale. While factors (1) and (2) are common among other biominerals, factors (3) and (4) are unique to the D. tenuis shell. To address the lingering question as to why D. 
tenuis brachiopods evolved and currently possess shells of such mechanical adaptability, further investigations are needed. Nonetheless, one can draw a parallel with the brachiopod Lingula anatina , which also has a phosphatic shell. A certain degree of mechanical adaptability and flexibility in the shells of L. anatina was proposed to be needed for the burrowing of the animal into sediment 37 , for its infaunal habitat. It is likely that the mechanical properties of D. tenuis shells are similarly suited to their environment. Large clusters of D. tenuis specimens, attached only to each other, inhabit the inter-tidal zone. In such a high-energy environment, with extreme ranges of hydration throughout diurnal tidal cycles, as mimicked in the presented experiments, environment-adapting flexibility could be advantageous to prevent shell damage and therefore could be key to the survival of the animals. Thus, differences between the ecological niches of the two species of phosphatic brachiopods, D . tenuis and L. anatina , or even when compared to calcitic-shelled species, could mean different mechanical requirements for their shells and hence explain why D. tenuis shells have higher flexibility when hydrated than other brachiopods. In conclusion, we report on the mechanical behavior of the shells of D. tenuis , which display a passive adaptability in mechanical properties, unique among biominerals, as a function of hydration and thus of the environment the animal finds itself in. Mechanical testing and characterization of the structure of the shell as a function of hydration level, at several length scales, from the micro- to the molecular level, revealed that these shells conform to a hierarchical, non-uniform construction, wherein water absorption within distinct environments facilitates structural adaptation to a changing environment. 
The discovered design motifs and modifications thereof upon water absorption, underpinning the properties of this natural composite material, will help materials scientists to design and synthesize novel stimuli-responsive materials that are as tough and adaptable as these brachiopod shells. Methods Materials Brachiopod D. tenuis shells were collected in Swakopmund, Namibia, by Sir Alwyn Williams. The soft tissue was removed, and shells were stored in air. Electron microscopy (EM) SEM was performed either on a Quanta 650 FEG SEM or on a Zeiss Crossbeam 550 cryoFIB-SEM. HAADF TEM was performed on a Thermofisher Scientific Scios Dual Beam FIB-SEM. Sample preparation for electron microscopy Shell fragments were incubated in MilliQ water overnight at room temperature, followed by fixation in 4% formaldehyde for 4 h. The specimens were then gradually dehydrated in ethanol following a dilution series (50%, 70%, 96%, and 100% ethanol), followed by critical point drying using a Polaron critical point dryer. Subsequently, the shells were embedded in resin, polished, and coated with carbon. To check that the sample preparation procedure does not introduce artifacts such as local sample shrinkage, control experiments on non-treated samples were performed. A comparison between dry shells in their native state and after sample preparation showed no discernible differences. Further, electron microscopy observations, where possible, were cross-validated and confirmed by cryo-PXCT. Moreover, both dry and hydrated samples underwent the same sample preparation, i.e., samples would be largely equally affected by potential preparation artifacts. Thermogravimetric analysis (TGA) TGA was performed using a Thermal Analysis SDT Q600 instrument. Ground shell samples (5–10 mg) were subjected to a heating rate of 10 °C/min under nitrogen. Fourier transform infrared spectroscopy (FT-IR) FT-IR spectroscopy measurements were performed using an FTIR Nicolet iS10. 
Approximately 2 wt% of ground shell fragments were mixed with KBr and pressed into a transparent disk 48 . Data were acquired between 4000–400 cm −1 with a spectral resolution of 4 cm −1 . Differential scanning calorimetry DSC measurements were performed using a Thermal Analysis SDT Q600 instrument. Ground shell samples (2–5 mg) were analyzed. Samples were heated to 240 °C at a rate of 10 °C/min. β-chitin extracted from D. tenuis shells was used as a standard. Chitin extraction was done by decalcifying 10–15 mg of shells in 0.55 M HCl twice for 30 min, then once for 1 h at room temperature. The remaining organic material was then incubated in 0.3 M NaOH at 80 °C under reflux for 1 h. The extracted chitin was dried at 80 °C for 1 h 49 . Powder X-ray diffraction (PXRD) Diffraction patterns were collected using an X′Celerator detector fitted on a PANalytical X′Pert Pro diffractometer, using Cu-Kα radiation generated at 40 kV and 40 mA. Data were collected within the 2 θ range from 5° to 70° with a step size of 0.02° and a counting time of 1200 s. Fixed anti-scatter and divergence slits of 1/16° were used with a 10 mm beam mask. Small-angle X-ray scattering (SAXS) Monochromatic radiation with a wavelength of λ = 1.54 Å was produced by a rotating Cu anode (MicroMax 007HF). Scattering patterns were acquired using a Dectris PILATUS 300 K detector, with a pixel size of 172 μm, placed at sample-to-detector distances between 0.5 and 1.6 m. The obtained 2D SAXS patterns were azimuthally integrated, normalized with respect to the incident beam intensity and acquisition time, and then merged to construct a single 1D intensity profile I ( q ) vs. q covering an effective scattering vector range of 0.0035 to 1 Å − 1 . For each sample, at least three SAXS datasets were collected across the shell width. Two representative intensity profiles, i.e., one per sample, are shown in Supplementary Fig. 8 . 
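The SAXS reduction described above (azimuthal integration of 2D patterns into I(q) and merging of profiles recorded at several sample-to-detector distances) can be sketched as follows. The function names and binning choices are our own, not those of the actual beamline pipeline; the wavelength and pixel size follow the text.

```python
import numpy as np

# Sketch of the SAXS data reduction: azimuthal integration of a 2D detector
# image into a 1D I(q) profile, then merging of profiles from different
# sample-to-detector distances. Geometry values follow the text.

WAVELENGTH = 1.54  # angstrom, Cu K-alpha
PIXEL = 172e-6     # m, PILATUS pixel size

def azimuthal_average(image, distance_m, center, n_bins=300):
    """Azimuthally integrate a 2D pattern into I(q); q in 1/angstrom."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1]) * PIXEL   # radius on detector (m)
    theta = 0.5 * np.arctan(r / distance_m)              # half the scattering angle
    q = 4.0 * np.pi * np.sin(theta) / WAVELENGTH         # scattering vector magnitude
    edges = np.linspace(0.0, q.max(), n_bins + 1)
    idx = np.clip(np.digitize(q.ravel(), edges) - 1, 0, n_bins - 1)
    summed = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    with np.errstate(invalid="ignore"):
        profile = summed / counts                        # mean intensity per q bin
    return 0.5 * (edges[:-1] + edges[1:]), profile

def merge_profiles(datasets):
    """Merge normalized (q, I) profiles from several distances, sorted by q."""
    q = np.concatenate([d[0] for d in datasets])
    i = np.concatenate([d[1] for d in datasets])
    order = np.argsort(q)
    return q[order], i[order]
```

Each distance covers a different q window, so concatenating and sorting the normalized profiles yields the single wide-range I(q) curve described in the text.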
Wide-angle X-ray scattering (WAXS) WAXS data were collected at the 11-BM Complex Materials Scattering (CMS) beamline at National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. Data were collected at 13.5 keV with a beam footprint on the sample of 0.2 × 0.2 mm. The point acquisition time was 10 s. WAXS patterns were recorded with a Pilatus800k placed 0.26 m downstream of the sample. The obtained 2D WAXS patterns were azimuthally integrated, normalized with respect to the incident beam intensity and acquisition time. The resulting 1D intensity profiles are shown in Supplementary Fig. 10 . Solid-state nuclear magnetic resonance spectroscopy All experiments were carried out on shells that had been stored at −80 °C, packed into inserts for 4 mm zirconia rotors in a Bruker double-resonance MAS probe on a Bruker AVANCE II 400 MHz wide-bore spectrometer. CP-MAS: MAS frequency 10 kHz, 1 H 90° pulse 2.5 μs, contact time of 2.5 ms with a ramped pulse on 1 H and square pulse on 13 C 70 kHz spin lock field strength, 100 kHz field strength SPINAL64 decoupling during acquisition with 4.4 μs pulses, recycle delay 2 s. Heteronuclear correlation (Hetcor) spectra were recorded at 400 MHz, 10 kHz MAS and 290 K. NMR parameters were: 1H 90° pulse length and decoupling 86 kHz, 1H contact pulse 54 kHz, 31 P contact pulse 44 kHz, 200 µs contact time. Lee–Goldburg (LG) RF field was set to 50 kHz, with an offset for proton evolution under LG of −2000 Hz. Depth-dependent dynamic nanoindentation To prepare the physical cross-sections, the two large shell surfaces were first covered with a thin layer of plasticine (about 1 mm in thickness) before the sample was embedded in epoxy resin and cut to size using a rotating diamond blade. The exposed cross-sections were polished with silicon carbide sandpaper of two decreasing grit sizes (P600 and P1220) and finally using alumina colloidal suspensions with grain sizes of 3-1 and 0.05 microns. 
Finally, the plasticine was removed with the help of a thin curved dissecting needle, generating two cavities (Supplementary Fig. 11 ). These cavities were filled with double-distilled water for the measurements in hydrated conditions. The same sample was then used for the measurements in dry conditions upon water removal and overnight air drying. Depth-dependent mechanical properties of dry and fully hydrated shell sections were measured with a nanoindentation tester (model NHT-TTX by CSM Instruments) equipped with a Berkovich diamond tip. The instrument was operated in continuous stiffness mode up to a maximum applied load of 30 mN. During the 60 s loading phase, the oscillatory force modulation had an amplitude equal to 10% of the current force value and a frequency of 5 Hz, while the unloading phase was carried out linearly in 15 s. The instrumented values of the elastic Young’s modulus E IT and hardness H IT were determined as a function of the indentation depth by Oliver–Pharr dynamic analysis 50 of the loading phase. The mechanical properties of the tested shell samples were completely reversible in response to the applied hydration-drying cycle. Ptychographic X-ray computed tomography (PXCT) Sample preparation for PXCT. Brachiopod shells were first mechanically fractured and cut into mm-sized pieces. Pieces from adjacent areas taken from the environment-facing side along the shell width were then glued onto individual custom-built tomography pins 51 . The epoxy was pre-cured and only applied to the top of the tomography pin to avoid sample contamination. The sample-loaded pins were mounted on a custom-built micro-lathe and milled under cryogenic conditions 27 . The resulting cylindrical pillars had a diameter of ~20–40 microns and a sample height of ~50–80 microns. 
The prepared pillars were then either vacuum dried or incubated in desiccators containing salt solutions to create an atmosphere of 70% or 100% relative humidity for 36 h 28 , 29 . The pillars were subsequently frozen in liquid nitrogen to lock the set hydration level in place for the duration of the PXCT measurement. No signs of crystalline ice on the surface of prepared pillars were recorded. PXCT setup and data acquisition . PXCT experiments were carried out at the cSAXS beamline of the SLS. The photon energy was 6.2 keV. The horizontal aperture of slits located 22 m upstream of the sample was set to 20 μm in width to coherently illuminate a Fresnel zone plate, the latter being 220 μm in diameter with an outermost zone width of 60 nm 52 . Coherent diffraction patterns were acquired using a 500k Eiger detector 53 with a 75 μm pixel size, 7.284 m downstream of the sample. A flight tube was positioned between sample and detector to reduce air scattering and absorption. Measurements were carried out using the positioning instrumentation described in Holler et al. 54 , 55 . The samples were imaged in an in-vacuum version of this setup at a temperature of −180 °C. Sampling positions were set using a Fermat spiral scanning grid 56 with an average step size of 2 μm. Tomography projections were acquired using a binary acquisition strategy as described by Kaestner et al. 57 with two nests of projections. Around 600–1200 projections were acquired depending on the sample diameter. Each projection was obtained by a ptychographic scan of ~400–800 diffraction patterns, each with an exposure time of 0.1 s. Ptychographic image and tomogram reconstruction . From each diffraction pattern, a region of 512 × 512 pixels was used in the ptychographic reconstruction of acquired projections. The resulting pixel size is (38.8 nm) 2 . 
Reconstructions were obtained with 300 iterations of the difference map algorithm 58 followed by 300 iterations of maximum likelihood refinement using two probe modes 59 , 60 . Reconstructions were performed using the PtychoShelves package 61 . Prior to tomography reconstructions, the complex-valued projections were aligned and processed as described in Guizar-Sicairos et al. 62 . Horizontal alignment was ensured based on tomographic consistency 63 . Tomographic reconstruction of phase projections was performed using a modified filtered back-projection algorithm (FBP) 62 . To mitigate noise in the reconstruction, a Hanning filter was used. The tomograms provide the 3D distribution of the refractive index decrement, δ ( r ), and, away from sample-relevant absorption edges as in the present case, the electron density 24 , 25 . PXCT dose estimation . The X-ray dose imparted to a shell sample during tomogram acquisition was estimated to be on the order of 10 6 to 10 7 Gy. The estimated dose is based on the average area flux density of each scan and the assumed mass density of the specimen 64 . Here, the specimen was assumed to consist of hydroxyapatite and chitin. Estimation of spatial resolution . The half-period spatial resolution of ptychographic tomograms was estimated by Fourier shell correlation (FSC) 65 . The full dataset of angular projections used for the tomographic reconstructions was divided in half, and two independent tomograms with double angular spacing were reconstructed independently. Then, the correlation between these two tomograms in the Fourier domain was calculated, and the resolution was estimated based on the intersection with a set threshold. The threshold criterion for the FSC was the ½-bit criterion 65 . FSC line plots are shown in Supplementary Fig. 6 . Tomogram analysis . Owing to their superior spatial resolution, the analysis focused on the retrieved phase, i.e., electron density, tomograms 30 . 
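The FSC resolution estimate described above can be sketched as follows: the two half-dataset tomograms are correlated shell by shell in Fourier space and compared against a ½-bit information threshold. The threshold curve follows van Heel and Schatz; treating every voxel in a shell as independent is a simplifying assumption of this sketch.

```python
import numpy as np

# Sketch of Fourier shell correlation (FSC) between two tomograms
# reconstructed from half datasets, plus a 1/2-bit threshold curve.
# Assuming cubic-ish volumes and unit voxel spacing for simplicity.

def fourier_shell_correlation(vol_a, vol_b):
    """Return (fsc, voxels_per_shell) over integer-radius Fourier shells."""
    fa = np.fft.fftshift(np.fft.fftn(vol_a))
    fb = np.fft.fftshift(np.fft.fftn(vol_b))
    center = [s // 2 for s in vol_a.shape]
    grids = np.indices(vol_a.shape)
    radius = np.sqrt(sum((g - c) ** 2 for g, c in zip(grids, center)))
    shell = radius.astype(int)
    n_shells = min(vol_a.shape) // 2          # up to the Nyquist shell
    fsc = np.zeros(n_shells)
    voxels = np.zeros(n_shells)
    for s in range(n_shells):
        mask = shell == s
        num = np.real(np.sum(fa[mask] * np.conj(fb[mask])))
        den = np.sqrt(np.sum(np.abs(fa[mask]) ** 2) * np.sum(np.abs(fb[mask]) ** 2))
        fsc[s] = num / den if den > 0 else 0.0
        voxels[s] = mask.sum()
    return fsc, voxels

def half_bit_threshold(voxels_per_shell):
    """1/2-bit threshold curve (van Heel & Schatz form); n_eff is a
    simplifying proxy for the number of independent voxels per shell."""
    n = np.sqrt(np.maximum(voxels_per_shell, 1.0))
    return (0.2071 + 1.9102 / n) / (1.2071 + 0.9102 / n)
```

The half-period resolution is then read off at the first shell where the FSC curve drops below the threshold curve, as described in the text.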
To exclude any potential sample preparation artifacts, we extracted sub-volumes (Fig. 3a ) from the center of the imaged volume. For Fig. 3e , electron density line profiles normal to the laminae structure were obtained by calculating the radially averaged electron density of the identifiable layers. The average layer thickness was calculated using a parallel plate model. Overall sample composition and hydration were based on linear combination fitting using the theoretical electron density values of known shell components as well as the measured electron densities of the fully dry shell as reference points. Similarly, component matching was achieved by comparing calculated electron densities of known shell components, i.e., francolite, organic matrix (approximated using the molecular weight and density of chitin), and water/ice, with the measured electron densities of manually isolated, i.e., visually pure, components in the tomogram where possible. Supplementary Fig. 7 shows local variations in hydration level for the fully hydrated sample, calculated from the respective electron density tomogram using the average electron density of the dry sample and the electron density of amorphous ice as reference values. The volume percentages obtained were converted to water weight percent using tabulated density values of shell components. The resulting hydration tomogram was threshold segmented for visualization purposes and to determine the swelling degree of structurally coherent layers as a function of hydration level using thickness analysis 26 , 66 . Data availability The electron microscopy and X-ray computed ptychography data generated in this study can be retrieved from the University of Edinburgh DataShare. The remaining data that support the findings reported in this study are available within the paper and its supplementary information files.
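The local hydration estimate described in the tomogram analysis (a linear combination of dry-shell and water/ice reference electron densities, then conversion of volume fractions to weight percent with tabulated mass densities) can be sketched as follows. All reference values below are illustrative assumptions, not the numbers used in the study.

```python
import numpy as np

# Sketch of the per-voxel hydration estimate: each voxel electron density
# is modeled as a linear mix of "dry shell" and water/ice reference values.
# The reference densities below are illustrative assumptions.

RHO_DRY = 0.85   # e/A^3, assumed mean electron density of the dry shell
RHO_ICE = 0.31   # e/A^3, assumed electron density of amorphous ice

def water_volume_fraction(rho_voxel):
    """Invert rho = f * RHO_ICE + (1 - f) * RHO_DRY for the water fraction f."""
    f = (rho_voxel - RHO_DRY) / (RHO_ICE - RHO_DRY)
    return np.clip(f, 0.0, 1.0)

def water_weight_percent(f_water, d_water=0.94, d_shell=2.9):
    """Convert water volume fraction to wt% using mass densities (g/cm^3)."""
    m_water = f_water * d_water
    m_shell = (1.0 - f_water) * d_shell
    return 100.0 * m_water / (m_water + m_shell)
```

Applied voxel-wise to the electron density tomogram, this yields a hydration map analogous to the one shown in Supplementary Fig. 7, which can then be threshold segmented for visualization.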
An international research team with participation of the Paul Scherrer Institute PSI has deciphered why the protective cover of the brachiopod Discinisca tenuis becomes extremely soft in water and gets hard again in the air. The study appears today in the journal Nature Communications. The brachiopod Discinisca tenuis lives on the west coast of Africa. It has a mineral-rich shell that protects it from harmful environmental influences. Bathing the shell in water leads to a structural change in the material: The flat, hard shell becomes so flexible that it can even be folded up without breaking. With the help of the Swiss Light Source SLS, the researchers have deciphered exactly how this transformation takes place. The phenomenon was discovered by chance a few years ago by Fabio Nudelman, a materials chemist currently at the School of Chemistry, University of Edinburgh in Scotland. Maggie Cusack, who was recently appointed president of Munster Technological University in Ireland, had provided Nudelman with shells of the brachiopod Discinisca tenuis, which originally came from Namibia. When he wanted to wash the hard object, it suddenly became soft and flexible in contact with water. The shell had absorbed liquid and thereby changed its structure. The process was reversible: When the shell dried, it became hard and brittle again. Together with colleagues from six countries, Nudelman set out to discover what exactly takes place during this unexpected transformation. "In its composition, the shell resembles bone," he explains. "But bone doesn't change its structure when it gets wet." The same goes for clams: If the animals need to adapt the properties of their shell to different environmental conditions, they normally have to rework the material in a lengthy and energetically costly process, by resorbing and redistributing minerals. It doesn't work simply through the absorption of water. Johannes Ihli and co-author Klaus Wakonig at SLS’s cSAXS beamline. 
Credit: Paul Scherrer Institute/Markus Fischer Hybrid material with a special trick It was so-called cryo-tomography, performed at the Swiss Light Source SLS, that "opened the door to reveal the secret," says Johannes Ihli, a PSI researcher at SLS. With this technique, the researchers examined the material as if under a very high-resolution microscope, and in fact at extremely low temperatures. "At room temperature it would not have been possible, since the high-energy X-ray light would immediately alter the sensitive shell structure," Ihli explains. The brachiopod's shell, which is no more than half a millimeter thick, consists of a hybrid material: mainly inorganic mineral in which organic polymers made from proteins and sugars are embedded. Bones, clam shells, and teeth are structured in a similar way out of a mixture of organic and inorganic material. The mineral that constitutes the main component of the shell is a type of fluoroapatite—similar to the material that makes up the enamel of our teeth. Tiny nanocrystals of this material are arranged in layers. Nudelman compares it to brick walls: "In this analogy, the bricks are the nanocrystals, and the mortar between the bricks consists of organic molecules such as chitin and proteins." As the researchers observed, this "mortar" can absorb large amounts of water, causing it to swell up. Through the storage of water, it changes its structure: It becomes soft, and the bricks become movable with respect to each other. "Then water acts like a lubricant between the individual nanocrystals," Ihli explains. "The crystals can then slip against each other." Through this movement, the shell becomes flexible. The researchers found a network of pores in the shell that was especially effective in guiding water inside and rapidly distributing it throughout the material. 
Evolutionary advantage Discinisca tenuis lives in large clusters in tidal zones on the coast where, depending on the tide, the animals are exposed to strong waves or calm waters. The researchers speculate that it is probably advantageous if the animals can quickly adapt the softness or hardness of their shell to the respective situation: "This could prevent damage to the shell and thus be a key to the animals' survival," they write in the study. The phenomenon may even be more widespread than suspected: "We don't know how many other animal species there might be that have this kind of property," says Nudelman. Aside from biology and evolution, the newly gained insights are also of interest for materials science: The development of a hard, brittle material whose stiffness can be controlled could hold promise for many applications. Sports clothing or helmets, for example, might be able to flexibly adapt to movements and always offer the protection required depending on the impact. Harnessing this phenomenon could also prove useful in developing bone-replacement materials.
10.1038/s41467-021-25613-4
Nano
Research team uses excitons to take electronics into the future
Dmitrii Unuchek et al, Room-temperature electrical control of exciton flux in a van der Waals heterostructure, Nature (2018). DOI: 10.1038/s41586-018-0357-y Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0357-y
https://phys.org/news/2018-07-team-excitons-electronics-future.html
Abstract Devices that rely on the manipulation of excitons—bound pairs of electrons and holes—hold great promise for realizing efficient interconnects between optical data transmission and electrical processing systems. Although exciton-based transistor actions have been demonstrated successfully in bulk semiconductor-based coupled quantum wells 1 , 2 , 3 , the low temperature required for their operation limits their practical application. The recent emergence of two-dimensional semiconductors with large exciton binding energies 4 , 5 may lead to excitonic devices and circuits that operate at room temperature. Whereas individual two-dimensional materials have short exciton diffusion lengths, the spatial separation of electrons and holes in different layers in heterostructures could help to overcome this limitation and enable room-temperature operation of mesoscale devices 6 , 7 , 8 . Here we report excitonic devices made of MoS 2 –WSe 2 van der Waals heterostructures encapsulated in hexagonal boron nitride that demonstrate electrically controlled transistor actions at room temperature. The long-lived nature of the interlayer excitons in our device results in them diffusing over a distance of five micrometres. Within our device, we further demonstrate the ability to manipulate exciton dynamics by creating electrically reconfigurable confining and repulsive potentials for the exciton flux. Our results make a strong case for integrating two-dimensional materials in future excitonic devices to enable operation at room temperature. Main Solid-state devices use particles and their quantum numbers for their operation, with electronics being the ubiquitous example. The need to improve power efficiency of charge-based devices and circuits is motivating research into new devices that would rely on other principles. Candidates so far include spintronics and photonics 9 , 10 . 
Excitons—electrically neutral quasi-particles formed by bound electrons and holes—can also be manipulated in solid-state systems. The development of such excitonic devices has so far been hindered by the absence of a suitable system that would enable room-temperature manipulation of excitons, limiting the expansion of the field. Here, we demonstrate room-temperature excitonic devices based on atomically thin semiconductors. These devices could open the way for wider studies and applications of excitonic devices in the academic and industrial sectors 11 . Many applications can be envisaged, because excitons could be used to efficiently couple optical data transmission and electronic processing systems. Although fast optical switches have already been demonstrated 12 , 13 , the comparably large size (about 10 μm) 14 , 15 of such devices limits packing density. This can be overcome in excitonic devices, the characteristic size of which is determined by that of electronic field-effect transistors (FETs). Owing to their finite binding energy E b , excitons can exist up to temperatures of around T ∝ E b / k B , where k B is the Boltzmann constant. In a conventional III–V-semiconductor coupled quantum well with a size of a few nanometres, the relatively small binding energy of around 10 meV permits the observation of excitons only at cryogenic temperatures (less than 100 K) 3 . To reach higher temperatures, different materials are required. To this end, systems with higher E b (in the range of tens of millielectronvolts) have been explored more recently, such as (Al,Ga)N/GaN (ref. 16 ) or ZnO (ref. 17 ). Two-dimensional semiconductors such as transition-metal dichalcogenides have even larger exciton binding energies, which can exceed 500 meV in some cases owing to strong quantum confinement 4 , 5 . This could enable the realization of excitonic devices that operate at room temperature 18 . 
Although intralayer excitons have relatively short lifetimes (about 10 ps) 7 , 19 , the spatial separation of holes and electrons in interlayer excitons results in lifetimes more than two orders of magnitude longer, well in the nanosecond range 6 . For the device presented here, we take advantage of interlayer excitons in an atomically thin MoS 2 –WSe 2 heterostructure. Type-II band alignment 20 , 21 (Fig. 1a ) results in charge separation between the constituent materials, with electrons and holes residing in MoS 2 and WSe 2 , respectively. The formation of indirect excitons is marked by the appearance of a new photoluminescence emission peak 22 , redshifted by about 75 meV with respect to the intralayer exciton of the WSe 2 monolayer. In Extended Data Fig. 1b we present a typical photoluminescence spectrum obtained from such a heterostructure on SiO 2 , in which the spectral signature of the interlayer exciton is clearly visible (dark blue line), together with the individual WSe 2 and MoS 2 monolayers (blue and red lines, respectively). Recent reports 23 suggest that excitons in the MoS 2 –WSe 2 system are not only spatially indirect, but also momentum-indirect owing to lattice mismatch. The phonon-assisted nature of the emission process further reduces the exciton recombination rate, yielding a longer lifetime 8 , 24 . Such an extended lifetime can be used to obtain interlayer exciton diffusion over a scale of micrometres, even at room temperature. Fig. 1: Interlayer excitons in the WSe 2 –MoS 2 van der Waals heterostructure. a , Type-II band alignment in the WSe 2 –MoS 2 heterostructure with intralayer ( X 0 ) and interlayer ( X i ) excitons. The red and blue areas represent the bands in the two materials and the heterobilayer. Positive and negative symbols indicate holes and electrons, respectively. b , Schematic depiction of the WSe 2 –MoS 2 heterostructure, showing the heterobilayer encapsulated in hexagonal boron nitride (h-BN) and the top and bottom gates. 
The interlayer exciton has a permanent out-of-plane dipole moment p that allows manipulation via the electric field E . c , False-colour optical image of the device, highlighting the different materials. d , e , Spatial maps of photoluminescence at 670 nm ( d ) and 750 nm ( e ), corresponding to MoS 2 and WSe 2 intralayer excitonic resonances, respectively. Photoluminescence is quenched in the heterostructure area owing to efficient charge transfer. Scale bars, 5 μm. a.u., arbitrary units. Full size image To obtain a pristine surface, the heterostructure is encapsulated in hexagonal boron nitride and annealed in high vacuum. Multiple transparent top gates are fabricated out of few-layer graphene. This double-gate configuration allows us to apply a vertical electric field without changing the carrier concentration in the MoS 2 –WSe 2 heterostructure. In Fig. 1c we show a false-colour optical micrograph of the resulting stack. We characterize the structure by using photoluminescence mapping at room temperature, under 647-nm excitation. In Fig. 1d, e and Extended Data Fig. 1 we show the intralayer emission distribution at the wavelengths characteristic of MoS 2 (670 nm), WSe 2 (760 nm) and the interlayer exciton (785 nm). Whereas individual monolayers appear to be homogeneously bright, emission from the heterostructure region is uniformly quenched by more than three orders of magnitude, owing to the efficient charge transfer between layers 24 . Even with this strong quenching, we are able to detect the interlayer peak in the photoluminescence spectra (Extended Data Fig. 2 ), confirming the generation of interlayer excitons. Because this effect has a central role in our work, we fabricated three more heterostructures encapsulated in hexagonal boron nitride, confirming the reproducibility of this result (Extended Data Fig. 3 ). 
Given that excitons do not carry a net electric charge, we do not expect their flow to be influenced by the direct application of an in-plane electric field. However, the confinement of oppositely charged carriers in different layers results in a well-defined interlayer-exciton dipole moment p with an out-of-plane ( z ) direction (Fig. 1b ). An electric field E z ( x , y ) perpendicular to the crystal plane can then be used to shift the exciton energy by δ E = − p z E z , while its lateral modulation drives the exciton motion towards regions of lower energy. Exciton dynamics in the longitudinal direction can be modelled by a diffusion equation with an external potential (see Methods ): $$\begin{array}{c}D\frac{{\partial }^{2}n}{\partial {x}^{2}}+\frac{D}{{k}_{{\rm{B}}}T}\frac{\partial }{\partial x}\left(n\frac{\partial \varphi }{\partial x}\right)+G-\frac{n}{\tau }=\frac{\partial n}{\partial t}\end{array}$$ (1) where n , D and τ are the interlayer-exciton concentration, diffusion coefficient and lifetime, respectively, φ is the exciton potential (including the electrostatic contribution φ el = −p z E z ) and G is the optical generation rate. This simple model qualitatively shows how the application of an electric field E z can affect interlayer exciton diffusion, as we discuss later. We first demonstrate an electrically controlled excitonic switch, represented schematically in Fig. 2a . Laser light focused inside the heterostructure area (input) generates interlayer excitons, which diffuse along the channel of the heterostructure. However, the low brightness of interlayer emission makes monitoring the operation of the device challenging. For this reason, we use the exposed WSe 2 that extends out of the heterostructure as a bright emitter. Here, interlayer excitons diffuse towards the edge of the heterostructure. 
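The steady-state limit of the diffusion equation (1) can be explored with a small finite-difference sketch. All numerical values below (diffusion coefficient, lifetime, pump and barrier positions and heights) are illustrative assumptions chosen to give micrometre-scale diffusion, not the device's fitted parameters:

```python
import numpy as np

# Steady-state 1D drift-diffusion for interlayer excitons (equation (1) with
# dn/dt = 0). All numbers are illustrative assumptions, not device fits.
D = 1.0e-4          # diffusion coefficient, m^2/s (~1 cm^2/s)
tau = 1.0e-8        # exciton lifetime, s (nanosecond range)
kBT = 25.7e-3       # thermal energy at ~300 K, eV
L = 10e-6           # channel length, m
N = 801
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

# Gaussian optical pump at x = 2 um and a barrier of height 2 k_B T at x = 5 um
G = 1.0e20 * np.exp(-((x - 2e-6) / 0.5e-6) ** 2)       # generation rate
phi = 2.0 * kBT * np.exp(-((x - 5e-6) / 0.5e-6) ** 2)  # exciton potential, eV
dphi = np.gradient(phi, dx)
d2phi = np.gradient(dphi, dx)

# Discretize  D n'' + (D/kBT) * (n' phi' + n phi'') - n/tau = -G
A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i - 1] = D / dx**2 - (D / kBT) * dphi[i] / (2 * dx)
    A[i, i + 1] = D / dx**2 + (D / kBT) * dphi[i] / (2 * dx)
    A[i, i] = -2 * D / dx**2 + (D / kBT) * d2phi[i] - 1.0 / tau
A[0, 0] = A[-1, -1] = 1.0            # n = 0 at the channel ends
b = -G
b[0] = b[-1] = 0.0
n = np.linalg.solve(A, b)            # steady-state exciton density profile
```

In this toy landscape, the density on the far side of the 2 k_BT barrier is strongly suppressed relative to the pumped region, mirroring the OFF state described below.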
During this diffusion process, interlayer excitons are expected to dissociate into single carriers, which are allowed to diffuse inside monolayer MoS 2 and WSe 2 , where they experience recombination with native charges, resulting in bright emission. The emitted radiation is recorded simultaneously using a charge-coupled device (CCD) camera and a spectrometer (see Methods ), to obtain spatial and spectral emission profiles. This allows us to further confirm the presence and diffusion of interlayer excitons inside the heterobilayer (Extended Data Fig. 2 ). In the absence of applied fields (Fig. 2b ), excitons diffuse away from the pumping area (red circle in Fig. 2d ), owing to temperature and concentration gradients 25 , 26 , 27 , and reach the recombination site, approximately 3 μm away. Comparison of pumping and emission profiles (Extended Data Fig. 4 ) lets us exclude the possibility of a direct excitation of monolayer WSe 2 by the low-intensity tail of the laser spot. This situation (bright output) is shown in the emission image in Fig. 2d and corresponds to the ON state of the excitonic transistor. On the contrary, by introducing a potential barrier higher than k B T on the path of the diffusing excitons (Fig. 2c ), we impede their motion, resulting in the suppression of light emission (Fig. 2e ). In this way, we can achieve efficient electrical modulation of the output emission, as shown in Fig. 2f , in which the emission intensity (normalized by the value in the OFF state, corresponding to V g1 = +16 V) is plotted as a function of applied voltage. For reference, we also plot the intensity modulation observed when the laser beam is located on the emission centre (input–output distance d i–o = 0 μm). The switching threshold is around 8 V, which corresponds well with the calculated exciton energy modulation of δ E ≈ k B T ≈ 25 meV (blue dashed line in Fig. 2f ). 
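For orientation, the vertical field needed to shift the interlayer-exciton energy by one k_BT follows directly from δE = −p_z E_z. A quick check, using the layer separation d = 7.5 Å quoted in Methods (the thermal-energy value is the standard room-temperature figure):

```python
# Vertical electric field that shifts the interlayer exciton by one k_B T,
# using dE = p_z * E_z with p_z = e*d and d = 7.5 A (value from Methods).
d = 7.5e-10          # interlayer separation, m
kBT_eV = 25.7e-3     # thermal energy at ~300 K, eV

# Since dE in eV equals d[m] * E_z[V/m], the threshold field is:
E_z = kBT_eV / d     # ~3.4e7 V/m
```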
This result is consistent with our model: because the height of the energy barrier starts to become comparable to thermal excitation, it is now possible to block the diffusion of exciton flux. We extract an intensity ON/OFF ratio larger than 100, limited by the noise level of the set-up in the OFF state (see also Extended Data Figs. 4 , 5 ). Such a high ratio results from the realization of an excitonic transistor with complete suppression of emission in the OFF state. This effect is also clearly visible in the spectrum of the emitted light, in which the WSe 2 peak is selectively suppressed when the device is in the OFF state (Extended Data Fig. 6 ). We also note that strong emission from MoS 2 is detected in both states, because excitons can diffuse freely in other directions. Fig. 2: Excitonic transistor operation at room temperature. a , The application of gate voltages ( V g1 , V g2 , V g3 ) to transparent graphene electrodes (gates 1–3) can engineer a potential landscape for the diffusion of excitons, controlling their flux through the device. b , c , Calculated energy variation δ E for the excitons in the ON (free diffusion; b ) and OFF (potential barrier; c ) states. Red arrows represent laser excitation; the bound charges and black dashed arrows denote the excitons and their diffusion, respectively. d , e , Corresponding images of exciton emission. Dashed lines indicate the positions of the different layers that form the heterostructure and the top graphene gate (gate 1). The laser spot is represented by the red circle. Colour scale indicates the normalized photoluminescence intensity. Scale bars, 5 μm. f , Gate dependence of the ON/OFF ratio for optical excitation 3 μm away from the emission centre (left axis). The right axis shows the reference data, which were acquired with the incident laser beam located directly on the emission centre (input–output distance, d i–o = 0 μm). 
The measured emission intensity is normalized by the OFF-state value at V g1 = 15 V. The background shading indicates the ON (red) and OFF (grey) states. The blue dashed line represents the gate voltage at which the barrier height is equal to the thermal energy. Full size image An alternative mechanism that could in principle explain the recombination far away from the excitation spot is based on the diffusion of single carriers rather than interlayer excitons. It has been shown that such carriers (holes in particular) can have long lifetimes 6 , 28 , 29 . However, experimental observations indicate that this is not the dominant mechanism in our heterostructure. First, we observe the production of interlayer excitons directly in the excitation area, even if the intensity is low. Second, for a flux of single carriers, the voltage modulation necessary to counteract thermal excitation and block the single-particle flux would be about 50 mV, more than two orders of magnitude lower than the gate voltage of approximately 8 V required in our experimental result shown in Fig. 2 . Finally, this mechanism would also result in different emission profiles for different regimes of device operation (see Extended Data Fig. 7 ). To exclude the possibility that the observed effect arises from an unwanted modulation of the charge carrier density in WSe 2 , we perform a calibration experiment in which the excitation light is focused on the output area ( d i–o = 0) and the device is biased as before. This reference experiment is discussed in detail in Methods and the result is presented in Fig. 2f (grey curve); it shows that only a comparably small modulation of WSe 2 emission intensity is observed. This confirms that the energy barrier is the origin of the switching behaviour. We study the dependence of the ON/OFF ratio on d i–o further (Extended Data Fig. 8 ) by keeping the voltage profile constant and optically injecting excitons at different distances from the output point.
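The two-orders-of-magnitude argument above can be sketched as a parallel-plate estimate. The effective gate-to-channel dielectric thickness t_eff below is an assumed round number for illustration, not the device's exact electrostatics:

```python
# Back-of-envelope comparison of the gate voltage needed to block a flux of
# single charges versus dipolar interlayer excitons. t_eff is an assumed
# effective dielectric thickness (illustrative, not the exact device value).
kBT_eV = 25.7e-3       # thermal energy at ~300 K, eV
d = 7.5e-10            # exciton dipole length, m
t_eff = 250e-9         # assumed effective gate-to-channel distance, m

V_single = kBT_eV                  # single charge: blocked at ~25 mV
V_exciton = kBT_eV * t_eff / d     # dipole: dE = e*d*(V/t_eff) ~ k_B T
ratio = V_exciton / V_single       # more than two orders of magnitude
```

With these assumptions the exciton threshold comes out in the volt range, consistent with the ~8 V observed experimentally, while a single-carrier flux would switch at tens of millivolts.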
Consistent with our model, we observe efficient modulation when the laser is focused beyond the energy barrier, with emission intensity decreasing with increasing d i–o owing to long-distance diffusion. The diffusion length can be doubled at lower temperature (4.7 K), resulting in operation over a longer distance (Extended Data Fig. 9 ). Having demonstrated that we can block or allow spontaneous exciton diffusion, we go further by creating a drift field in the desired direction, in analogy with the source–drain bias of a conventional FET. We show this type of operation in Fig. 3 , with all three electrodes used to create a potential ladder going upwards or downwards with respect to the excitation point (Fig. 3a, b ). When excitons encounter a gradually decreasing energy profile (forward bias), their diffusion is enhanced by a drift term, allowing us to operate the device with a larger distance between optical input and output. As shown in Fig. 3c , this regime of electrically assisted diffusion can result in exciton transport over a distance of 5 μm. To obtain a more quantitative estimate of the induced modulation, we measure the dependence of the emission intensity on the distance from the laser spot as it is displaced away from the output area at fixed gate voltages. The results (Fig. 3d ) show that the length over which excitons diffuse can be effectively modulated from 5.5 μm to 3 μm, compared to about 4.5 μm in the unbiased case. The modulation of the effective diffusion length with the potential φ el qualitatively follows the model introduced in equation ( 1 ). Fig. 3: Biasing of the excitonic device. a , b , Calculated energy profile δ E of the indirect exciton as a function of lateral coordinate X for the forward ( a ) and backward ( b ) bias cases. The black solid line indicates the direction of exciton drift. c , Image showing exciton emission from the device when injecting at a distance d i–o = 5 μm from the emission area. 
Colour scale, dashed lines and red circles as in Fig. 2d, e . Scale bar, 5 μm. d , Normalized output intensity as a function of the distance d i–o between optical injection and the emission point, for the forward (red) and backward (blue) bias configurations, compared to the unbiased case (grey). The grey shading indicates the noise floor. Exciton diffusion over a distance of 5.5 μm is achieved. Full size image We further use the multi-gate configuration to demonstrate more complex and electrically reconfigurable types of potential landscape and related device operation. In Fig. 4a–c we present the energy profiles calculated for free diffusion (Fig. 4b ) compared with a potential well (Fig. 4a ) and a repulsive barrier (Fig. 4c ) produced by the central gate (gate 2), while the side gates (1 and 3) are kept grounded. In this case, the position of the optical pump is centred on the middle electrode, which corresponds to the centre of the well or barrier. In Fig. 4d, g we show the CCD camera image and related emitted intensity profile along the device channel for the case of the potential well. We observe photoluminescence emission only from the narrow area below the central contact, which is indicative of electrical confinement of the excitonic cloud. Conversely, when applying a positive voltage to create a ‘potential hill’ (Fig. 4f, i ), we see an expulsion of excitons from the pumping area with the appearance of bright emission spots outside the middle section of the device, owing to excitons drifting along the energy profile and recombining on the edges of the heterostructure. This is evident from a comparison with the free-diffusion case in Fig. 4e, h . Interestingly, we also observe higher-energy emission from the neighbouring MoS 2 monolayer parts inside the well in the case of exciton confinement. A similar effect is also observed during exciton expulsion, with bright spots appearing at the edges of the heterostructure around the repulsive potential. 
Further inspection of the emission spectra from Fig. 4d, f confirms this, with the intensity of monolayer peaks decreasing (increasing) when confining (anti-confining) the excitons (Extended Data Fig. 6 ). As also discussed in Methods, the observed MoS 2 emission is affected by the local inhomogeneity of the substrate and by the optical filters used. As discussed earlier, the diffusion of single particles and their recombination with native charges that are available in the monolayers could have a role in light emission that extends from the edges of the heterobilayer into the monolayers. Fig. 4: Electrically reconfigurable energy landscape. a – c , Calculated energy profile δ E of the indirect exciton for the cases of a potential well ( a ), free diffusion ( b ) and a potential barrier ( c ). d – f , Imaging of exciton emission for the configurations shown in a – c . Incident laser light (red circle) is focused on top of gate 2. Dashed lines indicate positions of different layers that form the heterostructure and the graphene top gate 2; colour scale as in Fig. 2d, e . Scale bars, 5 μm. g – i , Cross-section of the intensity profile along the device channel, integrated over its width, for the three configurations. The red-shaded underlay represents the profile of the excitation laser. Full size image Methods Device fabrication The heterostructure was fabricated using polymer-assisted transfer (see Extended Data Fig. 10 ) of flakes of hexagonal boron nitride (h-BN), WSe 2 (HQ Graphene) and MoS 2 (SPI). Flakes were first exfoliated on a polymer double layer, as described previously 30 . Once monolayers were optically identified, the bottom layer was dissolved with a solvent and free-floating films with flakes were obtained. These were transferred using a custom-built set-up with micromanipulators to carefully align flakes on top of each other. 
During the transfer process, the sharp edges of the flakes were aligned to obtain a twist angle between the two crystal axes close to 0° (or 60°). However, in the case of MoS 2 –WSe 2 heterobilayers, the alignment has been shown to be not critical for the observation of interlayer excitons 23 , 31 . This is due to the indirect (in reciprocal space) nature of the transition and to the considerable lattice mismatch between the two layers (about 4%). Polymer residue was removed with a hot acetone bath. Once completed, the stack was thermally annealed in high vacuum at 10 −6 mbar for 6 h. Few-layer graphene flakes were obtained by exfoliation from graphite (NGS) on Si/SiO 2 substrates and patterned in the desired shape by electron-beam lithography and oxygen plasma etching. After thermal annealing, the patterned flakes were transferred on top of the van der Waals stack using a polymer-assisted transfer and the entire structure was annealed again in high vacuum. Finally, electrical contacts were fabricated by electron-beam lithography and metallization (60 nm/2 nm Au/Ti). Optical measurements All measurements presented here were performed in vacuum at room temperature unless specified otherwise. Excitons were optically pumped by a continuous-wave 647-nm laser diode focused to the diffraction limit with a beam size of about 1 μm. The incident power was 250 μW. The spectral and spatial characteristics of the device emission were analysed simultaneously. The emitted light was acquired using a spectrometer (Andor) and the laser line was removed with a long-pass 650-nm edge filter. For spatial imaging, we used a long-pass 700-nm edge filter so that the laser light and most of the MoS 2 emission were blocked. Filtered light was acquired by a CCD camera (Andor Ixon). The room-temperature photoluminescence spectrum of MoS 2 shown in Extended Data Fig. 
1b was obtained under 150-μW excitation at 647 nm, whereas monolayer WSe 2 and the heterostructure fabricated on SiO 2 substrate were characterized under 488-nm excitation. Owing to the small separation between the interlayer and the intralayer WSe 2 exciton peaks, it is not possible to completely distinguish them in the images acquired on the CCD. The tail of the WSe 2 monolayer peak normally overlaps with the spectral line of the interlayer exciton considerably, meaning that weak luminescence around 785 nm can be observed even on monolayer WSe 2 (Extended Data Fig. 3e ), which is not due to interlayer excitons. Because of the use of the 700-nm filter, the emission from monolayer MoS 2 is in principle not observable on the CCD. However, some light can be transmitted when the broadening of the photoluminescence peak results in a low-energy tail (see Extended Data Fig. 11 ) extending beyond 700 nm. Local inhomogeneity in the substrate can affect this broadening, which could explain why the observed MoS 2 luminescence in Fig. 4f comes mostly from the left part of the device. Low-temperature measurements (Extended Data Fig. 9 ) were performed in a liquid-helium, continuous-flow cryostat (Oxford Instruments). Reference experiment We performed a reference experiment to exclude spurious effects that could compromise the interpretation of the data. First, we observed how the photoluminescence emission from monolayer WSe 2 changes when gating the device using the back gate. For this purpose, we excited the exposed WSe 2 with the laser beam directly and recorded the photoluminescence spectra. When applying voltage to the back gate, a modulation in the emission intensity is clearly observable (Extended Data Fig. 12a ). We repeated the same measurement, but instead of applying a voltage between the flake and the back gate, we biased the top and back gates, thus generating a vertical electric field inside the device. 
In this case, we cannot observe any substantial change in the emission intensity (Extended Data Fig. 12b ). This allows us to rule out the possibility that the switching action that we observe could be due to a suppression of photoluminescence from a changing doping level in the material. Image processing To aid the interpretation of images from the CCD camera, we performed several image-processing steps using ImageJ 32 . We first subtracted from the original image a background image obtained without laser illumination, to account for ambient light noise. In some cases, a simple background was not sufficient to compensate for the presence of spurious signals from unwanted reflections or changing ambient background. In these cases, a background image was generated by applying the rolling-ball algorithm in ImageJ. Contrast was adjusted to cover the range of values in the image. We provide an example of the procedure in Extended Data Fig. 13 . Modelling exciton diffusion The dynamics of the exciton in the channel of our device can be modelled by one-dimensional diffusion in the presence of an external potential φ ( x ) (temperature, electrostatic potential or dipole–dipole interaction). The gradient of exciton concentration n ( x ) drives diffusion current j diff while the potential gradient causes drift j drift : $${j}_{{\rm{d}}{\rm{i}}{\rm{f}}{\rm{f}}}=-D\frac{{\rm{\partial }}n}{{\rm{\partial }}x},\,{j}_{{\rm{d}}{\rm{r}}{\rm{i}}{\rm{f}}{\rm{t}}}=-\mu n\frac{{\rm{\partial }}\phi }{{\rm{\partial }}x}$$ where μ is the exciton mobility, which is related to the diffusion coefficient D and the thermal energy k B T by the Einstein relation D = μk B T . We also include an exciton generation rate G by means of optical pumping and an exciton recombination rate R , which is related to the exciton lifetime as R = − n / τ . From the exciton continuity equation we then obtain equation ( 1 ). 
In our system, in which excitons have a built-in vertical dipole moment p , the electrostatic potential induced by the vertical electric field is φ el = − E z p z . Because we use continuous-wave excitation, we assume a steady-state case (∂ n /∂ t = 0). Considering φ el as the main contribution to exciton drift, we obtain $$D\frac{{\partial }^{2}n}{\partial {x}^{2}}-\frac{D{p}_{z}}{{k}_{{\rm{B}}}T}\frac{\partial }{\partial x}\left(n\frac{\partial {E}_{z}}{\partial x}\right)+G-\frac{n}{\tau }=0$$ We simplify the model further by assuming two fundamentally different regions, shown in Extended Data Fig. 14 . The first region is under constant, homogeneous excitation, so that the concentration reaches an equilibrium value with equal recombination and generation rates ( R + G = 0). The equilibrium concentration is then n 0 = Gτ . Outside of the pumping region, excitons diffuse away, driven by the concentration and potential gradients: $$D\frac{{\partial }^{2}n}{\partial {x}^{2}}-\frac{D{p}_{z}}{{k}_{{\rm{B}}}T}\frac{\partial }{\partial x}\left(n\frac{\partial {E}_{z}}{\partial x}\right)-\frac{n}{\tau }=0$$ The case of diffusion in the absence of an external field can be solved analytically, revealing exponential decay of the exciton density away from the pumping region with a characteristic distance given by the diffusion length \({l}_{{\rm{diff}}}=\sqrt{D\tau }\) : \({n}_{{\rm{free}}}(x)={n}_{0}{{\rm{e}}}^{-x/{l}_{{\rm{diff}}}}\) . An applied non-homogeneous vertical electric field alters this exponential decay, which can be modelled as a change in the effective diffusion length. Numerical simulation of the exciton-energy profile We first calculate the electric-field distribution in our system using the COMSOL Multiphysics simulation software. All calculations were performed considering the dimensions of the device as follows: the top graphene gates are 1.1 μm wide and spaced 0.8 μm apart.
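The analytic free-diffusion solution can be verified numerically. The values of D and τ below are illustrative assumptions chosen to yield a micrometre-scale diffusion length, consistent with the distances reported in the main text:

```python
import numpy as np

# Free-diffusion solution n(x) = n0 * exp(-x / l_diff), l_diff = sqrt(D * tau).
# D and tau are illustrative values, not the fitted device parameters.
D = 1.0e-4        # diffusion coefficient, m^2/s (~1 cm^2/s)
tau = 1.0e-8      # exciton lifetime, s (10 ns)
l_diff = np.sqrt(D * tau)      # 1 um diffusion length

# Check that the exponential satisfies D n'' - n/tau = 0 outside the pump
x = np.linspace(0.0, 5e-6, 2001)
n = np.exp(-x / l_diff)
n_xx = np.gradient(np.gradient(n, x), x)
residual = D * n_xx - n / tau
rel = np.max(np.abs(residual[2:-2]) / (n[2:-2] / tau))   # relative residual
```

The relative residual stays small in the interior of the grid (the first and last points are excluded because numpy's one-sided edge differences are less accurate), confirming the exponential form.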
The heterostructure is encapsulated between two h-BN crystals (10 nm thick on the top and 20 nm on the bottom), and the substrate is heavily doped Si with 270 nm of SiO 2 on top (see Extended Data Fig. 15a ). Extended Data Fig. 15b shows an example of the electrical field in the system in the confinement configuration, with −10 V applied to the central gate and the side gates grounded. Interlayer excitons have a built-in out-of-plane dipole moment directed upwards, with an absolute value of p z = ed = e × 7.5 × 10 −10 m, where e is the elementary charge and d = 7.5 Å is the layer separation in our heterostructure. They thus experience an energy shift of δ E = − p z E z in the presence of a vertical electric field E z . The resulting force applied on the exciton in the longitudinal direction is proportional to the first derivative of the vertical electric field E z with respect to the channel x axis: $${F}_{x}=-\frac{\partial ({\rm{\delta }}E)}{\partial x}=ed\frac{\partial {E}_{z}}{\partial x}$$ Example profiles of the confinement-well configuration are shown in Extended Data Fig. 15c . Data availability The data that support the findings of this study are available from the corresponding author on reasonable request.
Excitons could revolutionize the way engineers approach electronics. A team of EPFL researchers has created a new type of transistor—one of the components of circuits—using excitons instead of electrons. Notably, their exciton-based transistor functions effectively at room temperature, a hitherto insurmountable obstacle. They achieved this by using two 2-D materials as semiconductors. Their study, which was published today in Nature, has numerous implications in the field of excitonics, a promising new area of study alongside photonics and spintronics. "Our research showed that by manipulating excitons, we had come upon a whole new approach to electronics," says Andras Kis, who heads EPFL's Laboratory of Nanoscale Electronics and Structures (LANES). "We are witnessing the emergence of a totally new field of study, the full scope of which we don't yet know." This breakthrough sets the stage for optoelectronic devices that consume less energy and are both smaller and faster than current devices. In addition, it will be possible to integrate optical transmission and electronic data-processing systems into the same device, which will reduce the number of operations needed and make the systems more efficient. Higher energy level Excitons are actually quasiparticles, a term used to describe the interaction between the particles that make up a given substance rather than the substance itself. Excitons consist of an electron and an electron hole. The two are bound together when the electron absorbs a photon and achieves a higher level of energy; the "excited" electron leaves behind a hole in the previous level of energy, which, in band theory, is called a valence band. This hole, also a quasiparticle, is an indication of the missing electron in this band. Since the electron is negatively charged and the hole is positively charged, the two particles remain bound by an electrostatic force. This bond between the electron and the hole is called Coulomb attraction. 
And it is in this state of tension and balance that they form an exciton. When the electron finally falls back into the hole, it emits a photon. And with that, the exciton ceases to exist. Put more simply, a photon goes in at one end of the circuit and comes out the other; while inside, it gives rise to an exciton that acts like a particle. Double success It is only recently that researchers have begun looking at the properties of excitons in the context of electronic circuits. The energy in excitons had always been considered too fragile and the exciton life span too short to be of any real interest in this domain. In addition, excitons could only be produced and controlled in circuits at extremely low temperatures (around -173 degrees C). The breakthrough came when the EPFL researchers discovered how to control the life span of the excitons and how to move them around. They did this by using two 2-D materials: tungsten diselenide (WSe2) and molybdenum disulfide (MoS2). "The excitons in these materials exhibit a particularly strong electrostatic bond and, even more importantly, they are not quickly destroyed at room temperature," explains Kis. The researchers were also able to significantly lengthen the excitons' lifespan by exploiting the fact that the electrons always found their way to the MoS2 while the holes always ended up in the WSe2. The researchers kept the excitons going even longer by protecting the semiconductor layers with boron nitride (BN). "We created a special type of exciton, where the two sides are farther apart than in the conventional particle," says Kis. "This delays the process in which the electron returns to the hole and light is produced. It's at this point, when the excitons remain in dipole form for slightly longer, that they can be controlled and moved around using an electric field."
10.1038/s41586-018-0357-y
Biology
Studying species composition and community function of dinoflagellate-associated bacteria
Yunyan Deng et al, Abundant Species Diversity and Essential Functions of Bacterial Communities Associated with Dinoflagellates as Revealed from Metabarcoding Sequencing for Laboratory-Raised Clonal Cultures, International Journal of Environmental Research and Public Health (2022). DOI: 10.3390/ijerph19084446 Journal information: International Journal of Environmental Research and Public Health
https://dx.doi.org/10.3390/ijerph19084446
https://phys.org/news/2022-04-species-composition-function-dinoflagellate-associated-bacteria.html
Loneliness Public Health Informatics Public Health Informatics Public Health: Feature Papers Public Health: How Safe Is Cardiac Imaging? Quality of Life and Mental Health of the Elderly in Nursing Homes Quality of Life in Orthopedic Diseases Quality of Life, Well-Being and Nurse-Patient Interaction in Late Life Quality of Life: The Interplay between Human Behaviour, Technology and the Environment Quantifying Atmospheric Ammonia and Its Impacts: Measurements, Modeling and Mitigation Radiation and Cancer Risk Real World Data for Population-Based Pediatric Studies Real-World Evidence for Resuscitation Science Reappraisal of Risk Factors for Cardiovascular Diseases Recent Advances and New Perspectives on the Multidisciplinary Management of COVID-19 and Long COVID in the Life Span Recent Advances in Environmental Research Recent Advances in Orthodontics and Clear Aligner Therapy Recent Advances in Public Health Recent Advances in the Management of Chronic Pain Recent Advances on Environmental and Toxicologic Pathology Recent Research in Health Psychology Recreation, Ecosystems and Social Wellbeing: Towards More Effective Synergy Reducing Exposure to Second-Hand Tobacco Smoke Regional Scale Industrial Contamination of Soils and Groundwater — From Risk Assessment to Risk Management Regulation of Muscle Mass, Exercise, Metabolism Rehabilitation in the COVID-19 Pandemic Remote Sensing, Crowd Sensing, and Geospatial Technologies for Public Health Research Workforce and Healthcare Disparities Resilience, Stress, and Risk Factors during the COVID-19 Pandemic Resistance Exercise/Training to Improve Physical Fitness and Health Responsible Risk Governance in Hazardous Industries Retail Strategies to Support Healthy Eating Risk Assessment and Preventive Child Health Care during the First 1000 Days from Conception Onwards Risk Factors for Oral Disease Road Safety: Public Health Challenge Roma Health Roma Health Disadvantage Routes to Improve Health Literacy during the Life-Course 
Salutogenesis and Coping: Ways to Overcome Stress and Conflicts Salutogenic Cities for Chronic Diseases Prevention Sarcopenia, Exercise and Quality of Life SARS-CoV-2 Variants, Where Is the End? School-Based Prevention Programs for Drug and Alcohol Misuse in Children and Adolescents School-to-Work Transitions: Developmental and Mental Health Outcomes Second Edition of Active Commuting and Active Transportation Second Edition of COVID-19: A Public Health Approach for Health Professionals Second Edition of Effects of Environmental Pollutants on Human Reproductive Health Second Edition of Recent Advances in Polycyclic Aromatic Hydrocarbons Research: Occurrence, Fate, Analysis and Risk Assessment Second Edition of Social-Emotional Development and Learning in Early Childhood across Cultures Second Edition of Teenage Reproductive Health: Pregnancy, Contraception, Unsafe Abortion, Fertility Second Edition of the COVID-19 Pandemic in Europe: Response to Challenges Second Edition of the Current Situation and Distribution of Rare Diseases: Challenges, Prevention, Healthcare, and Effects Second Edition of the Next Frontier in Health Geography: Context and Implications for Interventions Second Edition of the Nutrition Transition and Physical Inactivity and Health Outcomes thereof in Low- and Middle-Income Countries: From Preconception to Adulthood Second Edition of Urban Disaster Resilience and Sustainability Second Edition: Diagnosis and Treatment of ADHD in Adolescents Selected Paper from the 15th International Symposium on Recent Advances in Environmental Health Research Selected Papers from the 1st International Electronic Conference on Environmental Health Sciences Selected Papers from the 3rd International Electronic Conference on Environmental Research and Public Health--Public Health Issues in the Context of the COVID-19 Pandemic Self-Control, Compliance and Adherence to Health Prescriptions Self-Efficacy and Self-Management of Chronic Diseases Sepsis in a Changing 
Environmental and Technological Landscape Sexual and Domestic Violence and Adolescent Health Sleep Apnea Syndrome Sleep Health Sleep Medicine and Health Sleep Quality Research Small Solutions for Big Water-Related Problems—Innovative Microarrays and Small Sensors to Cope with Water Quality and Food Security Smart Coach and Injuries Prevention in Young Athletes Smoking and Tobacco Control Smoking Cessation Smoking Cessation in Pregnancy Social and Economical Determinants of Health Social and Environmental Determinants of Oral Health Social Determinants and Geographic Disparities in Health and Health Care Social Inequality and Health: Determinants, Mechanisms, and Consequences Social Justice and Administration and Public Health: Rights, Risks and Management Social Marketing’s Contribution to Public Health Social Network Analysis and Public Health Social Network Interventions for Health Behaviours Social Vulnerability and Frailty in Older People Societal Side Effects: The Wider Impact of Pharmaceuticals on Society Socioeconomic Circumstances and Mental Health Soil Pollution and Public Health Soil Pollution: Prevention and Mitigation Solving the Global Water Shortage Crisis: A Focus on Treatment and Reuse of Wastewater Sound and Health related Quality of Life Spatial Dimensions of Public Health: Identifying and Monitoring Vector-Borne Diseases and Their Geographic Diffusion Spatial Epidemiology Spatial Modelling for Public Health Research Spatio-temporal Frameworks for Infectious Disease Epidemiology Sport-Exercise and Stress: A Winning Combination Stress and Health Stress and Training Load Effects on Recovery, Well-Being and Sports Performance Stress Biomarkers Stress, Faith, Resiliency, and Health among Black Men Stroke: Athletes, Cardiac Risk, Physical Fitness, and Fatigue Studies and Advances to Evaluate the Impact of Epidemic Diseases in Modern Times Studies on Heavy Metals and Health Substance and Behavioral Addictions: Co-Occurrence and Specificity Substance and 
Drug Abuse Prevention Suicide Bereavement and Postvention: Advances in Research, Practice and Policy Suicide in Asia and the Pacific Suicide Prevention among Youth Suicide Prevention and Public Health Sunbathing Habits and Skin Cancer Sustainability of Wine Production and Food Systems in the Mediterranean Region Sustainability: Environmental Studies and Public Health Sustainable Healthy Working Life for All Ages—Work Environment, Age Management and Employability Sustainable Prosperity without Growth, or Damage to Nature Systematic Reviews and Meta-Analyses in Public Health Tackling Long-Term Care Needs in Ageing Societies in (Post) Pandemic Times: Multidisciplinary Research Approaches in an International Perspective Tactical Forces Injury Risk Management Teaching and Learning Process: Psychological Variables in Education, New Applied Technologies and Physical Activity Techniques in Renewable Energy Production and Their Future Aiming Sustainability and Commercialization Technology, Data, and the Assessment of Atmospheric Exposure on Finer Scales Test Your Limits: HIRT, HIIT, and Functional Training The (Un)Sustainable Side of Resilience in the Anthropocene The 2nd Edition: Land Use Changes and the Corresponding Ecological Risks The 2nd Edition: Mindfulness-Based Practice for Health Benefits The 2nd Edition: Stroke: Athletes, Physical Activity, and Resistance Training The 9/11 Disaster and Other Man-Made Trauma Events and Their Health Effects The Assessment of Alternative Interventions to Manage Health Emergencies The Burden of Orthopedic Surgery The Close Connection between Environmental Pollution and Medicinal Prescriptions The Combined Health Effects of Environmental Exposures The Complexity of Chronic Pain The Economics of Mental Illness The Effects of Advance Care Planning in Healthcare The Effects of Non-cigarette Tobacco Use on Health The Effects of Occupational Health and Safety Education and Training on Different Communities The Emerging Role of Sedentary 
Behaviour in the Health of Youth and Young Adults: Should There be a Recommended Threshold of Sedentary Behaviour Permissible? The Environment Risk of Autism The Environmental, Public Health, and Human Rights Impacts on Enhancing the Quality of Life of People with Intellectual Disability The Evolution of Dentistry in a Changing World between Technological Progress and Environmental Challenges The Evolving Relationship between Science and Disaster Risk Reduction The Health Consequences of Chronic Energy Imbalance The Health Effects of Water Fluoridation The Impact of Parasitology on Public Health The Impact of Sleep Loss on Human Behavior and Neural Activity The Impact of the COVID-19 Pandemic for Health Inequalities The Impact of the Gut Microbiota on Human Health The Impacts of the Built Environment on Public Health The Importance of Mentoring for Diversity, Equity and Inclusion The Importance of Statistical Analysis in the Field of Rehabilitation The Influence of Mediterranean Diet on Health and Environment The Injustices: Social Determinants of Vulnerability to Unhealthy Behaviours The Lived Experience of People Living with Dementia and Caregivers The Most Common Behaviors Associated with Substance Abuse The Negative Effects on Health Due to Noise Exposure The New Era of Treatment for Obesity The Nutrition Transition and Physical Inactivity and Health Outcomes thereof in Low- and Middle-Income Countries: From Preconception to Adulthood The Protection of Quiet Areas as a Public Health Aim Towards Sustainable Health: Approaches, Case Studies and Implementation The Relationship between Children’s Asthma and Air Quality The Role of Health Technology Assessment in Redesigning Chronic Disease Services The Role of Plants and Microorganisms in Ecological Restoration The Role of Science, Technology and Innovation in Ensuring Food Safety and Food Security The Role of Surgical Systems in Promoting Public Health The Social and Health Issues Facing HIV/AIDS Patients The 
Social Cost and Public Health Impact of Gambling and Online Game Playing Therapeutic Advances and Challenges in the Treatment of Multiple Sclerosis Thermal Comfort and Safety TikTok and Public Health Tobacco Control Tobacco Control 2015 Tobacco Control and Priority Groups Tobacco Control in Vulnerable Population Groups Tobacco Harm Reduction: Policy Considerations to Mitigate Risk to Youth and Adults Tobacco Smoking and Public Health Tobacco Smoking: Public Health, Science and Policy Tobacco Use and Treatment among Cancer Survivors Tobacco Use Research in Youth and Young Adults Tobacco-Related Diseases and Their Impact on Individual and Public Health Issues Together in the Fight against Arthropod-Borne Diseases: A One Health Perspective Topical Advisory Panel Members’ Collection Series: Environmental Science and Environmental Factors That Affect Health Towards More Sustainable Food Systems Toxicology of Xenobiotic Mixtures and Health Toxicology, Exposure Assessment and Epidemiology of Primary and Secondary Ultrafine Particles Traffic Safety and Injury Prevention Transport Impacts on Public Health Trauma, Addiction and Criminality Treating Alcoholism between Harm Reduction and Immediate Abstinence Ultrafine Particles and Potential Health Effects Urban Geochemistry and Human Health Urban Place and Health Equity Using Big Data to Advance Knowledge in Child Maltreatment Using Fuzzy Multi-Criteria Decision-Making Methods for Improving the Performance of Public Emergency Departments during the COVID-19 Outbreak UV-Radiation: From Physics to Impacts Vaccination and Health Outcomes Vaccine Safety and Public Health Violence against Women and Intimate Partner Violence Vitamin D and Public Health Water Desalination Water Microbial Pollution and Disinfection WHO Framework Convention on Tobacco Control. Are Countries Fully Implementing It? 
WHO Noise and Health Evidence Reviews Whole Systems Approaches to Process Improvement in Health Systems Winter Sports Implications for Training, Environmental and Health Women's Health and the Environment Work and Addictions: From Biology to Practice Work Engagement and Job Crafting Work Environment and Cardiovascular Diseases: From Evidences of Causal Associations to Workplace Interventions Workplace Aggression Workplace Interventions for the Prevention or Amelioration of Musculoskeletal Problems in the Working Population Youth and Child Development and Health Youth Sports, Young Athletes Evaluation, Implications for Performance and Health Youth Violence as a Public Health Issue All Special Issues Volume Issue Number Page Logical Operator Operator AND OR Search Text Search Type All fields Title Abstract Keywords Authors Affiliations Doi Full Text
Interactions between primary producers and bacteria impact the physiology of both partners, alter the chemistry of their environment, and shape ecosystem diversity. Several studies have documented that dinoflagellate–bacteria interactions can dramatically influence population dynamics, yet species-level information about the bacterial consortia characteristically associated with dinoflagellates remains scarce. Recently, a research team led by Prof. Tang Yingzhong of the Institute of Oceanology of the Chinese Academy of Sciences (IOCAS) provided new insights into the fundamental functions of the bacterial consortia associated with the phycospheres of dinoflagellates and other microalgae that form harmful algal blooms (HABs). The study was published in the International Journal of Environmental Research and Public Health on April 7. The researchers characterized the bacterial assemblages associated with 144 clonal cultures of harmful algae established and maintained in the laboratory, comprising 130 dinoflagellate strains (covering all major dinoflagellate taxa) and 14 strains from other classes. The persistence of these bacterial associations in laboratory-raised algal cultures suggests they are bilaterally (i.e., mutualism) or at least unilaterally (i.e., commensalism) beneficial to the two partners. The bacterial communities of dinoflagellates were strongly conserved across strains, with an enrichment of Methylophaga (class Gammaproteobacteria) pointing to a potentially important functional group of methylotrophs. "While bacterial associations with thecate and athecate dinoflagellates displayed compositional and functional similarities, athecate dinoflagellates provided a more favourable niche for aerobic cellulolytic members of the phylum Actinobacteria, implying a possible proneness to utilize cellulose as an energy source," said Dr. Deng Yunyan, first author of the study.
"Our results provide insightful understanding of the species composition and community functional profiles of dinoflagellate-associated bacterial assemblages," said Prof. Tang.
10.3390/ijerph19084446
Earth
Cracking open diamonds for messages from the deep earth
"Highly saline fluids from a subducting slab as the source for fluid-rich diamonds." Nature 524, 339–342 (20 August 2015) DOI: 10.1038/nature14857 "High-density fluids and the growth of monocrystalline diamonds," Geochimica et Cosmochimica Acta, Volume 141, 15 September 2014, Pages 145-159, ISSN 0016-7037, dx.doi.org/10.1016/j.gca.2014.05.050 Journal information: Nature , Geochimica et Cosmochimica Acta
http://dx.doi.org/10.1038/nature14857
https://phys.org/news/2015-08-diamonds-messages-deep-earth.html
Abstract The infiltration of fluids into continental lithospheric mantle is a key mechanism for controlling abrupt changes in the chemical and physical properties of the lithospheric root 1 , 2 , as well as diamond formation 3 , yet the origin and composition of the fluids involved are still poorly constrained. Such fluids are trapped within diamonds when they form 4 , 5 , 6 , 7 and so diamonds provide a unique means of directly characterizing the fluids that percolate through the deep continental lithospheric mantle. Here we show a clear chemical evolutionary trend, identifying saline fluids as parental to silicic and carbonatitic deep mantle melts, in diamonds from the Northwest Territories, Canada. Fluid–rock interaction along with in situ melting cause compositional transitions, as the saline fluids traverse mixed peridotite–eclogite lithosphere. Moreover, the chemistry of the parental saline fluids—especially their strontium isotopic compositions—and the timing of host diamond formation suggest that a subducting Mesozoic plate under western North America is the source of the fluids. Our results imply a strong association between subduction, mantle metasomatism and fluid-rich diamond formation, emphasizing the importance of subduction-derived fluids in affecting the composition of the deep lithospheric mantle. Main Ancient sections of continental lithospheric mantle (CLM) are characterized by multi-stage evolution, involving strong depletion and melt removal followed by variable degrees of ephemeral refertilization 1 , 2 . Refertilization, or enrichment, occurs by mantle metasomatism, whereby invading fluids or melts transport mobile components between different mantle reservoirs. This process plays a major part in shaping the mineralogical and geochemical variation in the CLM, as well as in determining its long-term stability, rheology and oxidation state 1 , 8 . 
While many mantle samples reflect the action of metasomatism, including mantle xenoliths and mineral inclusions in diamonds, the nature of the fluids involved can normally only be constrained indirectly from geochemical proxies or calculated using mineral/melt partition coefficients. Carbonatitic fluid or silicic melts have been proposed as the key metasomatic agents 9 , 10 . Direct samples of mantle metasomatic fluids are encased as microinclusions in fast-growing diamonds, known as ‘fibrous diamonds’ ( Fig. 1a ). These high-density fluid (HDF) inclusions are shielded from late-stage alteration, encapsulating a unique chemical and physical record that can trace the sources of deep mantle fluids and constrain the processes that shape their nature. Their study has revealed that, along with carbonatitic and silicic melts, saline compositions, with very high Cl, K, Na and H 2 O contents 5 , 6 , are involved in the metasomatic alteration affecting the deepest parts of the CLM. Figure 1: Microinclusion compositions in fibrous diamonds from the Fox kimberlite, Ekati mine. a , Photomicrograph of diamond E191 with the location of the microinclusions analysed by electron probe micro-analyser (EPMA). Filled symbols indicate HDFs; open symbols indicate olivine and orthopyroxene (OPX). b , Composition of HDFs and micro-mineral inclusions associated with specific Fox diamonds coded by colour. The global compositional range of HDFs (delineated by average compositions for individual diamonds) and the wide range of compositions shown by individual diamonds PAN4 (ref. 5 ) and ON-DVK-294 (ref. 6 ) from neighbouring central Slave kimberlites are also shown. Shaded arrows define the compositional evolution trajectories of HDFs due to fluid–rock interaction and melting in carbonated peridotite (taupe arrow) and hydrous eclogite (pink arrow) lithologies (see also Fig. 2 ). 
c , Primitive-mantle (PM) normalized trace element and chondrite-normalized (CN) REE patterns of saline and silicic HDFs in fibrous diamond from the Fox kimberlite. Full analyses and additional figures are in Supplementary Tables 1 , 2 , 3 , 4 and Supplementary Fig. 1 . Diamond HDFs vary between four major compositional types: saline, silicic, and high-Mg and low-Mg carbonatitic 7 ( Fig. 1b ). A strong connection was established between high-Mg carbonatitic HDFs and a carbonated peridotite source, either lithospheric or asthenospheric in origin 7 , 11 , 12 , while silicic and low-Mg carbonatitic HDFs have been related to hydrous eclogite (plus or minus carbonate) 7 . The saline fluid endmember sampled by diamonds is more enigmatic and its source in the deep lithosphere has remained ambiguous 5 , 6 , 7 , 12 , 13 . Here we analysed 11 microinclusion-bearing fibrous diamonds from the Fox kimberlite, Ekati mine, Northwest Territories, Canada. They have either coated or cubic-like morphologies, contain nitrogen in A centres (pairs of nitrogen atoms replacing two adjacent carbon atoms) and encapsulate a variety of fluid compositions plus inclusions of their host rocks ( Fig. 1a, b and Extended Data Figs 1 , 2 ), which shows a strong association between fluid composition and mantle host lithology. The majority of diamonds (9 of 11) contain saline HDFs solely associated with peridotite on the basis of their microinclusions of olivine, orthopyroxene, Cr-diopside and chromite. Silicic fluid compositions are related exclusively to eclogitic inclusions of omphacitic clinopyroxene. Both saline and silicic HDFs are enriched in incompatible elements ( Fig. 1c ); they have fractionated rare-earth element (REE) patterns with elevated Ba, U, Th and light-REEs but depleted Nb, Ta and alkalis (K, Rb and Cs).
However, the fractionated nature of these patterns and the light-REE/medium-REE and Th/U ratios in particular are more pronounced in saline HDFs than silicic fluids, indicating different sources. The most striking differences between the two HDF compositions are the positive Eu and Sr anomalies within saline fluids versus no Eu anomaly and negative Sr anomalies in the silicic fluids. Initial 87 Sr/ 86 Sr (( 87 Sr/ 86 Sr) i ) values in the saline HDFs are 0.7039–0.7090 compared with 0.7064 and 0.7111 in the two diamonds with silicic fluids. The physical and chemical characteristics of fibrous diamonds and their HDFs sampled by the Fox kimberlite are identical to previously studied fibrous diamonds from both the neighbouring (8 km northeast) Panda kimberlite at Ekati 5 , 14 and to those from kimberlites at Diavik mine 6 , 12 (30 km southeast). Combining these localities reveals that the vast majority of these fibrous diamonds (84%) trapped saline HDFs, which are strongly associated with peridotite hosts. In a single Diavik diamond, the HDFs continuously change from saline to high-Mg carbonatitic compositions, from centre to edge ( Figs 1b , 2 and Extended Data Fig. 3 ); olivine, chromite and Cr-diopside microinclusions in this diamond demonstrate its peridotitic association 6 . A Panda diamond containing HDFs falling between saline to silicic compositions has omphacite microinclusions 5 , providing strong evidence that silicic HDFs may evolve from saline fluids due to wall rock reaction with eclogite ( Figs 1b and 2 ). An absence of included minerals prevented paragenetic typing in only one diamond, from Diavik, containing silicic to low-Mg carbonatitic HDFs ( Fig. 2 ). However, the continuous global compositional array between silicic to low-Mg carbonatitic HDFs, and the similarity of these fluids to the products of low-degree partial melting experiments of carbonated eclogite, suggest a strong genetic link to eclogite 7 . 
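The anomaly measures invoked above (the paper expresses Eu* as Eu/Sm and Sr* as Sr/√(Pr × Nd), computed on normalized concentrations; see Fig. 3) amount to simple ratios of normalized abundances. A minimal sketch follows; the chondrite and primitive-mantle reference values and the sample analysis are illustrative placeholders, not the specific compilation or data the authors used.

```python
import math

# Sketch of the anomaly measures used in the paper: Eu* expressed as Eu/Sm on
# chondrite-normalized values, and Sr* = Sr_N / sqrt(Pr_N * Nd_N) on
# primitive-mantle-normalized values. Reference compositions below are
# illustrative placeholders (ppm), not the authors' normalization set.

CHONDRITE = {"Sm": 0.148, "Eu": 0.0563}
PRIMITIVE_MANTLE = {"Sr": 19.9, "Pr": 0.254, "Nd": 1.25}

def normalized(sample, reference):
    """Divide measured concentrations (ppm) by a reference composition."""
    return {el: sample[el] / reference[el] for el in reference if el in sample}

def eu_anomaly(sample):
    """Eu* expressed as Eu/Sm on chondrite-normalized values (>1 = positive)."""
    n = normalized(sample, CHONDRITE)
    return n["Eu"] / n["Sm"]

def sr_anomaly(sample):
    """Sr* = Sr_N / sqrt(Pr_N * Nd_N) on normalized values (>1 = positive)."""
    n = normalized(sample, PRIMITIVE_MANTLE)
    return n["Sr"] / math.sqrt(n["Pr"] * n["Nd"])

# Hypothetical saline-HDF-like analysis (ppm), invented for illustration:
hdf = {"Sm": 1.5, "Eu": 0.9, "Sr": 900.0, "Pr": 3.0, "Nd": 12.0}
print(f"Eu* = {eu_anomaly(hdf):.2f}, Sr* = {sr_anomaly(hdf):.2f}")
```

Values of Eu* and Sr* above 1 correspond to the positive anomalies that distinguish the saline fluids from the silicic ones in the text.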
The relative abundance of HDF endmembers in fibrous diamonds, the compositional relationships between the HDFs and their co-existing mineral microinclusions, plus the observed evolutionary trends from saline HDFs to other compositional types ( Fig. 2 ), provide a means of tying the various metasomatic fluids to a common parental saline fluid endmember. A key issue is then the ultimate origin for saline HDFs. Figure 2: MgO and SiO 2 versus Cl content of HDF microinclusions in fibrous diamonds from the central Slave craton. The complete data set shows clear evolution trajectories (shaded arrows) from a parental saline fluid to high-Mg carbonatitic and silicic compositions, formed due to wall rock reaction and local melting induced in peridotite and eclogite, respectively. If carbonate is present in the eclogite, increasing reaction could lead to the formation of low-Mg carbonatitic HDFs (dashed black arrow); however, this trend has not yet been constrained by mineral microinclusions paragenesis. Data points for saline HDFs are from this study and refs 5 , 6 ; silicic HDFs from this study; saline to silicic from ref. 5 ; saline to high-Mg carbonatitic and silicic to low-Mg carbonatitic compositions from ref. 6 . Calculated compositions are assumed to be free of H 2 O and CO 2 and are in weight per cent (wt%). Positive Eu anomalies in low-pressure Cl − -rich hydrothermal fluids are typically interpreted to result from plagioclase control during fluid–rock interaction at high temperatures 15 . Such signatures can also originate from the strong aqueous complexes formed between dissolved Cl − and Eu 2+ , compared to other REE ions 16 . The lack of clear correlation between Cl content and the size of the Eu anomaly in global saline HDFs precludes a simple fluid–rock interaction process being the sole driver for generating the positive Eu anomalies.
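The Fig. 2 caption notes that the plotted compositions are calculated free of H2O and CO2 and reported in wt%. That volatile-free basis is a simple renormalization; the sketch below uses an invented microinclusion analysis purely to show the arithmetic.

```python
# Sketch of reporting a measured composition on an H2O- and CO2-free basis in
# wt%, as stated in the Fig. 2 caption. The input analysis is invented.

def volatile_free_wt(composition, volatiles=("H2O", "CO2")):
    """Renormalize oxide wt% to sum to 100% after excluding listed volatiles."""
    retained = {ox: w for ox, w in composition.items() if ox not in volatiles}
    total = sum(retained.values())
    return {ox: 100.0 * w / total for ox, w in retained.items()}

# Hypothetical HDF microinclusion analysis (wt%), including volatiles:
raw = {"SiO2": 20.0, "MgO": 10.0, "CaO": 5.0, "K2O": 10.0, "Cl": 15.0,
       "H2O": 25.0, "CO2": 15.0}
dry = volatile_free_wt(raw)
print({ox: round(w, 1) for ox, w in dry.items()})
```

Renormalizing this way lets fluids with very different water and carbonate contents be compared on common MgO–Cl and SiO2–Cl axes, which is what makes the evolution trajectories in Fig. 2 directly comparable.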
In Ekati and Diavik diamonds, the pronounced Eu anomalies of saline HDFs are associated with positive Sr anomalies ( Figs 1c and 3a ), mimicking the plagioclase accumulation signature of both oceanic and ophiolitic gabbros 17 , 18 . This correlation suggests a low-pressure crustal origin for the saline HDF elemental signature, through prograde metamorphic reaction of plagioclase to garnet and eclogite formation. Exactly how the saline chemistry of these fluids develops within subducting oceanic crust is not yet clear. One possibility is that they were originally pore fluids trapped in the crust during low-pressure hydrothermal alteration by sea water. Their initial salinity and K/Na ratios evolve as H 2 O and Na are consumed during spilitization, producing hydrated basalt. The highly saline nature of these new solutions potentially prevents dehydration during shallow subduction, allowing the formation of stable Cl − -rich phengite at high pressure and temperature 19 , 20 . If dehydration should occur, residual chlorides with high K/Na ratios can be subducted and water originating from dehydration of underlying serpentinized peridotite at 150–200 km depth 19 can regenerate highly saline fluids at depth. Figure 3: Trace-element ratios and Sr isotopic signature in HDFs from the central Slave craton. a , Relationship between Eu* (expressed here as Eu/Sm) and Sr* (Sr/√(Pr × Nd)) anomalies in the saline HDFs constrain the subducted endmember, least influenced by interaction with lithosphere wall rock. b , Eu* versus La/Pr ratios. c , Eu* versus ( 87 Sr/ 86 Sr) i (corrected to kimberlite eruption age of 55 Ma). The positive trend formed by saline fluids varies between the Sr isotopic signature of sea water 150–200 Ma (ref. 26 ) and 87 Sr/ 86 Sr measured in megacrystalline clinopyroxene (CPX) from the Diavik CLM 21 . Higher ( 87 Sr/ 86 Sr) i values in silicic HDFs are probably inherited from old phlogopite in the eclogitic lithology within the Slave CLM.
HDF composition symbols as in Fig. 2 , large symbols are data from the present study, where each colour represents an individual diamond; small symbols are data from refs 5 , 6 , 12 and 14 . Having established a possible evolutionary link between the spectrum of fluid compositions observed in Slave diamonds and subduction-related crustal protoliths, we can deduce the metasomatic history of the central Slave lithospheric root leading to fibrous diamond formation ( Fig. 4 ). The inherited positive Eu and Sr anomalies in saline diamond HDFs suggest direct ingress of fluids into the lithosphere from a subducting slab closely underlying the continental root. We explain the Eu and Sr anomalies, light-REE/medium-REE enrichment levels and variably radiogenic 87 Sr/ 86 Sr in these fluids ( Fig. 3a–c ) as representing the interaction with peridotite in the lithospheric root. This interaction altered the elemental chemistry of the invading saline fluids, flattening both the Sr and Eu anomalies and lowering the ratios of the most incompatible elements. The 87 Sr/ 86 Sr signature of the saline HDFs experiencing the most extensive fluid–rock interaction was buffered to local peridotite compositions that were relatively low ( ∼ 0.704) (ref. 21 ). Fluid–rock interaction is also reflected by an increase of SiO 2 , MgO and CaO and decrease of Cl and K in the saline HDFs ( Figs 1b and 2 ). The refractory nature of cratonic peridotite dictated that partial melting during saline fluid infiltration only occurred when carbonate metasomes (that is, magnesite) were intersected, leading to the formation of high-Mg carbonatite HDFs with low 87 Sr/ 86 Sr (ref. 12 ). Infiltration of saline fluids into eclogite hosts is tracked by the compositional variation in eclogite-related HDFs, from saline to highly silicic ( Figs 1b and 2 ), leading to the formation of in situ silicic melts.
Both of the daughter high-Mg carbonatitic and silicic melts could then crystallize metasomatic phases in their host rocks such as the Cl − -rich phlogopite and apatite documented in an eclogitic xenolith from Diavik 22 . Overall, the Slave CLM was enriched with K, Cl, Ba and incompatible trace elements by the invading saline HDFs, while the oxidation gradient between the evolving fluids and the local lithosphere initiated ephemeral redox processes leading to diamond formation 23 . Figure 4: Schematic illustrating the evolution of saline fluids with increasing fluid–rock interaction as they percolate through cratonic mantle lithosphere. The discovery of fluids carrying strong oceanic protolith geochemical signatures (that is, positive Eu and Sr anomalies and 87 Sr/ 86 Sr signature of Mesozoic sea water) in continental lithospheric diamonds suggests that the Slave CLM was directly overlying the subducting slab at the time of Mesozoic metasomatism. Numbers refer to stages in the compositional evolution depicted in the inset MgO–Cl and SiO 2 –Cl trends. When the parental saline fluids (1) ingress into the CLM they react and their oceanic signature is diluted. The melt-depleted nature of cratonic lithospheric peridotite prevents notable melting unless the saline fluids traverse either carbonated-peridotite or eclogite lenses, leading to in situ formation of high-Mg carbonatitic (2) and silicic melts (3), respectively. The possible presence of carbonate in eclogite may lead to formation of low-Mg carbonatite fluids with increasing melting (see Fig. 2 and Extended Data Fig. 5 for data). Rapid diamond formation occurs due to the oxidation gradient between the evolving fluids and local lithosphere, either as new fibrous diamonds or as fibrous coats on previously formed older octahedral diamonds. Two questions remain: the timing of the fluid metasomatism and the nature of the event that triggered the process. 
The short mantle residence time of fibrous diamonds in the central Slave CLM (<200 million years (Myr); Extended Data Fig. 4 ), indicated by their low-aggregated nitrogen impurities, translates into young formation ages for both diamonds and their HDFs. Active subduction zones were a key feature of the complex tectonic setting of western North America and the high Arctic during the Mesozoic era 24 , providing several options for the fluid source in the ideal time window to allow saline HDF generation, diamond formation and eruption of the diamonds in Eocene epoch kimberlites. The low-angle subduction that has been suggested for some of these plates, such as the Farallon slab 25 , provides an opportunity for the direct transfer of slab-derived fluids into the base of the cratonic lithosphere. The most pristine saline HDFs that interacted least with the lithospheric root have Sr isotopic signatures corresponding with early Jurassic period seawater 87 Sr/ 86 Sr values 26 , strengthening the temporal connection between subduction and metasomatism ( Fig. 3c ). The lithosphere beneath western North America was extensively hydrated by shallow subduction 27 , 28 and mantle xenoliths from the Wyoming craton provide direct evidence for chlorine enrichment 29 , 30 . Saline HDFs trapped in fibrous diamonds from the central Slave craton are a deeper manifestation of this lithospheric hydration process, expressed as young diamond formation in the CLM root. The full spectrum of HDF compositional varieties (saline, silicic and carbonatitic) is present in fibrous diamonds from various cratonic roots 6 , 7 , 12 , 13 , including intermediate compositions between saline and silicic found in an eclogitic zoned diamond from Guinea 7 ( Extended Data Fig. 5 ). 
We suggest that deep mantle saline fluids are directly related to subduction events—they are key metasomatic agents from which the whole spectrum of diamond-forming fluids evolve, and they play a major part in shaping the composition of the deep lithospheric mantle globally. Methods Samples and methods A suite of eleven diamonds from the Ekati mine, Slave Craton, Canada, was selected for EPMA, Fourier-transform infrared (FTIR) and off-line laser ablation inductively coupled plasma mass spectrometry (ICP-MS) analyses. The diamonds span a large range in size, with weights varying between 3 and 83 mg. Each diamond was laser-cut and polished on both sides to create a thin slab that permits the transmittance of light. Each slab was then cleaned ultrasonically in HF 60% and HNO 3 69% for 2 h and washed with ethanol and distilled water before analysis. FTIR Analyses were performed using a Bruker IRscope II microscope coupled to a Nicolet 740 FTIR spectrometer (Globar source, KBr beamsplitter, MCT detector, He–Ne laser). Spectra were taken in the range of 550–4,000 cm −1 with a resolution of 4 cm −1 . Nitrogen concentration and aggregation states were determined using a computer program supplied by D. Fisher and the absorption coefficients of A centres (double substitution of carbon by two nitrogen atoms, type IaA spectrum), B centres (clusters of four nitrogen atoms substituting five carbon atoms, type IaB spectrum) and C centres (single nitrogen replacing a carbon atom, type Ib spectrum) 31 , 32 , 33 , 34 . After baseline correction and subtraction of the diamond bands, the concentrations of water and carbonate were determined using the maximum absorbance of water and carbonate and their absorption coefficients 35 . These concentrations were used to calculate the carbonate mole fraction (CMF = carbonate/(water + carbonate) molar ratio) of the trapped fluids ( Supplementary Table 1 ). 
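The CMF calculation above is simple enough to sketch in code. The following Python snippet is illustrative only: the molar masses are standard values, carbonate is treated here as the CO3 ion (an assumption; the paper does not specify the convention), and the input concentrations are invented.

```python
# Carbonate mole fraction (CMF) of trapped fluids from FTIR-derived
# water and carbonate concentrations (wt%).
# Assumptions: H2O = 18.015 g/mol; carbonate treated as CO3 (60.01 g/mol).
M_H2O = 18.015
M_CO3 = 60.01

def carbonate_mole_fraction(water_wt, carbonate_wt):
    """CMF = carbonate / (water + carbonate), as a molar ratio."""
    n_water = water_wt / M_H2O
    n_carb = carbonate_wt / M_CO3
    return n_carb / (n_water + n_carb)

# Example with hypothetical concentrations (wt%):
cmf = carbonate_mole_fraction(water_wt=5.0, carbonate_wt=10.0)
print(round(cmf, 3))  # 0.375
```

Because the ratio is molar rather than by weight, a fluid with twice as much carbonate as water by mass is still water-dominated on a mole basis.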
EPMA The major element compositions of the microinclusions were determined using a JEOL JXA 8600 EPMA equipped with a Pioneer-Norvar EDS (133 eV) detector. Backscattered electron imaging was used to detect shallow, subsurface microinclusions (<2 µm depth). Each inclusion was analysed for 100 s using an acceleration voltage of 15 kV and a beam current of 10 nA. The spectral data were reduced using the ZAF/PROZA correction procedure software supplied by Noran 36 . The total amount of oxides and Cl in each analysis varied between 1 and 12.4 wt% with an average of 3.3 wt% for all 327 analysed HDF microinclusions and between 1.8 and 78 wt% with an average of 11 wt% for 68 analysed mineral microinclusions. Precision (2 σ (%) = 2 × 1/oxide in wt%) is <20% for oxide concentrations of 0.05 wt%, <10% for 0.25 wt%, <6% for 0.5 wt% and <2% for 1 wt% (M. Jablon and O. Navon, unpublished data). The low and variable sums reflect the small size of the inclusions, their depth and their high content of undetected water and carbonates. The ZAF/PROZA processing assumed that the difference to 100 wt% is composed of pure carbon. Later, all oxide and chlorine concentrations were normalized to 100 wt% on a carbon-free and volatiles-free basis (where Cl is present, excess calculated oxygen leads to a normalized total of more than 100%) and the average composition of the HDF in the diamond was calculated. Offline laser ablation Diamonds were ablated in a custom-designed, sealed PTFE ablation cell capped with a laser window that had been previously cleaned with UpA 6 N HCl and 2 N HNO 3 . Ablations were performed with a UV-213 New-Wave Laser ablation system, with the custom cell replacing that provided by the manufacturer. A pre-weighed diamond was brought into focus and an ablation was performed using a raster pattern. Ablation conditions were: scan speed 50 μm s −1 ; raster spacing 80 μm; energy output 5–6 J cm −2 ; repetition rate 20 Hz; spot size 160 μm; and pass depth 2 μm. 
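The renormalization described in the EPMA section above (oxide and Cl concentrations rescaled to 100 wt% on a carbon- and volatile-free basis) amounts to a one-line rescaling. A minimal Python sketch, with an invented microinclusion analysis summing to ~3.3 wt% (the reported HDF average), the balance being undetected carbon, water and carbonate:

```python
def normalize_volatile_free(analysis):
    """Rescale oxide + Cl concentrations (wt%) to a 100 wt% total,
    discarding the carbon/volatile balance assumed by the ZAF/PROZA step."""
    total = sum(analysis.values())
    return {species: 100.0 * wt / total for species, wt in analysis.items()}

# Hypothetical raw microinclusion analysis (wt%):
raw = {"SiO2": 1.2, "K2O": 0.8, "CaO": 0.5, "MgO": 0.4, "Cl": 0.4}
norm = normalize_volatile_free(raw)
print(round(norm["SiO2"], 1))  # 36.4
```

Note that, as the text points out, when Cl is present the oxygen bookkeeping can push the normalized total slightly above 100%; this sketch ignores that second-order correction.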
Ablation time varied from 3 to 5 h. After ablation, the laser cell was opened in an ultraclean environment and all ablated material was collected in UpA 6 N HCl before being dried down ahead of further chemistry. The diamond was rinsed in MQ water and dried. Diamonds were re-weighed and the weight loss (0.32–0.71 mg) resulting from the ablation was calculated. Weighing uncertainty is ±0.0007 mg, estimated from 100 repeat weighs of both a gem-quality and a fibrous diamond. The dried ablation product was taken up in 2 N HNO 3 . A 20% aliquot was taken by volume for trace element analysis. The remaining sample was processed for Sr isotopic analysis. The Sr separation procedure is based on the method described previously 37 , using Sr-spec resin but with modifications as outlined 38 for sub-ng samples. Quantifiable data and background corrections We use the limit of quantification (LOQ), as defined previously 39 , as a measure of our ability to quantitatively measure elemental abundances because this parameter is considerably more robust than the ‘limit of detection’, or LOD, which merely defines the ability to qualitatively detect an analyte. The LOQ for a procedure with a well-characterized blank is defined 39 as: LOQ = 10 σ , where σ is the standard deviation of the blank for the process (here defined as the total procedural blank (TPB)). This approach places clear limits on our ability to quantitatively report concentration data in the diamonds studied. We use a data set of 20 TPBs performed using the same ablation cells and reagents as used for samples, to determine the LOQ for trace element abundances. Within each batch of samples, between five and ten additional TPBs were also run to monitor whether our LOQ estimate was applicable from one batch of samples to another. Any analyte below the LOQ is flagged in the data and not used on a concentration plot. 
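The LOQ criterion above (LOQ = 10σ of the total procedural blanks) and the flagging of sub-LOQ analytes can be sketched directly; the blank and sample values below are invented:

```python
import statistics

def limit_of_quantification(blank_values):
    """LOQ = 10 x sample standard deviation of the procedural blanks."""
    return 10.0 * statistics.stdev(blank_values)

def flag_below_loq(measurements, loq):
    """Return (value, quantifiable?) pairs; sub-LOQ data are flagged
    and excluded from concentration plots."""
    return [(v, v >= loq) for v in measurements]

# Hypothetical TPB data set and sample signals (same arbitrary units):
blanks = [1.0, 1.4, 0.8, 1.2, 1.1, 0.9, 1.3, 1.0]
loq = limit_of_quantification(blanks)
results = flag_below_loq([0.5, 3.0, 12.0], loq)
print(round(loq, 2))  # 2.03
```

The point of the 10σ threshold, as opposed to the 3σ commonly used for detection limits, is that values passing it can be reported as quantitative concentrations rather than mere detections.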
In the previous definition 39 , data can only be quantitative if they exceed 10 σ of the blank, hence the analyte/blank ratio is a critical parameter to measure. The total amount of analyte and hence the analyte/background ratio is simply a function of the length of the ablation, with the ratio increasing with time. Multi-element ICP-MS TPBs and aliquot sample solutions were analysed for trace element concentrations on the Thermo-Electron Element II ICPMS at Durham University. Each sample aliquot was made to 500 μl with 3% HNO 3 . Instrumental conditions were similar to those described previously 40 . Solution concentrations were measured against 9-point calibration lines constructed from appropriately dilute solutions of the international standards AGV-1, BHVO-1 and W-2. All concentrations were corrected for instrument drift using an 115In internal spike. Oxide correction coefficients were determined by running standard solutions of Ba, La, Ce, Pr, Nd, Sm, Gd and Tb at the beginning of each analytical session to correct for the daily changes in the oxide production rate. All trace element concentrations were normalized to the diamond weight loss during ablation. Thermal ionization mass spectrometry With each batch of samples processed for isotopic analysis, between five and ten TPBs were carried out to determine the average size of the blank contribution and its effect on the isotopic composition of the sample. During the course of this study Sr blanks averaged 5 pg ( n = 12). A Sr isotope blank correction was performed using a measured blank isotopic composition based on combining the equivalent of over 60 TPBs to yield sufficient Sr ( ∼ 500 pg) for a precise and accurate thermal ionization mass spectrometry (TIMS) analysis. The average 87 Sr/ 86 Sr composition of the laboratory blank during the course of this work was 0.710853 ± 0.000194 and all Sr samples were blank-corrected based on this value and the average blank set at 5 pg. 
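The blank correction described above can be sketched as a two-component mass balance between sample Sr and blank Sr. This simplified Python illustration ignores the small difference in 86Sr abundance between blank and sample (a strict correction mixes on 86Sr, not total Sr), and the sample figures are invented; only the blank composition (0.710853) and size (5 pg) come from the text:

```python
def blank_correct_ratio(measured_ratio, total_sr_pg, blank_pg, blank_ratio):
    """Two-component mixing sketch:
    measured = f_blank * blank_ratio + (1 - f_blank) * sample_ratio,
    solved for the sample's 87Sr/86Sr."""
    f_blank = blank_pg / total_sr_pg
    return (measured_ratio - f_blank * blank_ratio) / (1.0 - f_blank)

# Hypothetical sample: 100 pg total Sr, measured 87Sr/86Sr of 0.705000,
# corrected with the reported 5 pg blank at 0.710853:
corrected = blank_correct_ratio(0.705000, 100.0, 5.0, 0.710853)
print(round(corrected, 6))  # 0.704692
```

The sketch makes the paper's 20 pg cut-off intuitive: as `total_sr_pg` approaches `blank_pg`, the denominator shrinks and uncertainties in the blank dominate the corrected ratio.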
Sr samples were loaded using procedures described in detail previously 37 , 40 , employing a purified TaF 5 activator. Sr isotope ratios were measured on a ThermoFisher Triton TIMS at Durham University. Sr isotope measurements were carried out using a static multi-collection routine. Each sample measurement achieved between 50 and 300 ratios with an integration time of 4 s per ratio; total analysis time was approximately 3–20 min. Mass fractionation was corrected using an exponential law and an 86 Sr/ 88 Sr ratio of 0.1194. Multiple loads ( n = 43) of NBS987 of between 0.5 and 3 ng size gave an average value of 0.710260 ± 0.00002 (2 standard deviations; n = 43), which compares well to the long-term values reported by the Durham University laboratory for similar-sized standards 37 , 38 , 40 , 41 . As the Durham laboratory reports Sr data relative to an 87 Sr/ 86 Sr ratio of 0.710240 no additional normalization was performed. Average signal sizes for 88 Sr for the 0.5 ng and 3 ng standards were 0.8 ± 0.4 V and 5 ± 1.3 V, respectively. Signal sizes for samples were on average 0.2 ± 1 V. We have previously documented in detail the levels of accuracy and repeatability for samples and standards at these low signal intensities 38 . There is no systematic relationship between analyte size and Sr isotope composition after blank correction. Hence we conclude that our blank correction procedures adequately correct for our systematic TPB. Uncertainties in the magnitude and isotopic composition of the blank are incorporated into the reported errors on isotopic compositions at the 2 σ level. Previous experiments 38 indicate that for blanks of ∼ 5 pg, it is possible to make accurate blank corrections to samples containing as little as 20 pg, and therefore that level was used as a cut-off for accepting data in this study, because similar levels of blank reproducibility were achieved.
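The exponential-law mass fractionation correction mentioned above (normalizing to the canonical 86Sr/88Sr of 0.1194) can be sketched as follows. The isotope masses are standard atomic masses and the measured ratios are invented; this is an illustration of the standard normalization scheme, not the authors' code:

```python
import math

# Atomic masses of Sr isotopes (u) and the canonical normalizing ratio
M86, M87, M88 = 85.9092607, 86.9088775, 87.9056122
R8688_TRUE = 0.1194  # 86Sr/88Sr used for internal normalization

def exp_law_correct(r8786_meas, r8688_meas):
    """Correct a measured 87Sr/86Sr for instrumental mass fractionation
    using the exponential law: the fractionation exponent beta is chosen
    so that the measured 86Sr/88Sr normalizes to 0.1194, then the same
    beta is applied to the 87/86 ratio."""
    beta = math.log(R8688_TRUE / r8688_meas) / math.log(M86 / M88)
    return r8786_meas * (M87 / M86) ** beta

# Hypothetical run fractionated by roughly 0.5 per mil per mass unit:
corrected = exp_law_correct(r8786_meas=0.709900, r8688_meas=0.119280)
```

When the measured 86Sr/88Sr already equals 0.1194, beta is zero and the ratio passes through unchanged, which is a convenient sanity check.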
Geochemist Yaakov Weiss deals in diamonds. Not the brilliant jewelry-store kind, but the flawed, dirty-looking ones used more for industry than decoration. Gem-grade diamonds are generally pure crystallized carbon, but many lower-grade stones contain so-called inclusions–chemical intruders bottled up inside the crystal. Inclusions lower the stone's value, but they contain worlds of information about the deep, inaccessible regions where diamonds come from. Their compositions speak to not only how diamonds form (and maybe how to find them), but other basic processes below. "They are the most pristine samples we can get from underlying depths," says Weiss, who works at Columbia University's Lamont-Doherty Earth Observatory. "After a diamond captures something, from that moment until millions of years later in my lab, that material stays the same. We can look at diamonds as time capsules, as messengers from a place we have no other way of seeing." Some of his recent studies are providing new insights into these regions. For most of history, almost everything about diamonds was a mystery; no one even knew where they came from. In the late 19th century, geologists figured out that they erupt in small, oddball volcanic spouts, called kimberlites. These eruptions usually punch through the centers of ancient continents, made of rocks that date back billions of years. The diamonds themselves may or may not be that old. Scientists now believe they crystallize in earth's mantle, 140 to 250 kilometers (about 90 to 150 miles) below. A few may come from as deep as 700 kilometers (430 miles)–the deepest direct samples we have from those depths. At the surface, kimberlites are tiny–usually just a few acres–and hard to find. They are also mostly barren of diamonds; of around 1,500 or 2,000 known, only 50 or 60 have ever been found that are worth mining. Because diamonds are so valuable, many scientists are working to better understand them. But many questions remain. 
Exactly what raw materials and processes go into making diamonds? What causes kimberlites to erupt? Why are kimberlites, and diamonds, found in some areas, and not in others? Weiss's latest study, on the cover of the leading journal Nature, gets at some of these questions. In it, he and colleagues studied diamonds from the tundra of Canada's Northwest Territories. Prospectors have hunted diamonds across the United States and Canada for centuries, but it was not until the 1990s that the continent's first viable mines were discovered here. Some of the surface rocks are billions of years old, but the kimberlites that penetrated them are the youngest known–as young as 45 million years (others elsewhere can be hundreds of millions). Working with colleagues from the University of Alberta and Durham University, Weiss investigated so-called fibrous diamonds–inferior stones that consist of multiple layers instead of a single gem-grade crystal–from the rich Ekati Mine. Inside, they found tiny droplets of liquid–apparent remainders of raw material from which the diamonds crystallized. Most researchers believe that diamonds solidify out of some kind of fluid or fluids; but exactly what those fluids are, and what processes are involved, are controversial. Analyses of these inclusions, and separate research on stones from a neighboring mine, showed them to be rich in carbon, and highly saline–plenty of chlorine, potassium and sodium, much like seawater. Weiss thinks this is not a coincidence. In recent years, other researchers have shown that the complex evolution of the far north has included repeated opening and closing of ocean basins. A few have wondered if these events could be related to the formation of diamond-bearing kimberlites. Weiss and his colleagues connected the dots. Their research suggests that a slab of watery oceanic crust subducted at a shallow angle under the far more ancient continental rocks 150 million to 200 million years ago. 
The slab, they say, could have slid more or less intact, underneath what is now the present-day Canadian tundra, where the mines are located. There, they say, fluids from the long-traveled ocean crust reacted with solid continental rocks just above them, in exactly the zone where pressure and temperature conditions are right for forming diamonds. To bolster the case, in addition to the salts in the inclusions, there are trace element and isotope fingerprints that match the composition of seawater from this time, they say. Whether the reactions had something to do with driving the kimberlite eruptions to the surface is an open question. The diamonds most useful to geochemists are the least commercially valuable, containing chemical impurities. This one, from northern Canada, contains inclusions of coesite (a form of quartz) and tiny bubbles of fluid. The rough outer coating probably also contains items of interest. Credit: Yaakov Weiss Among other things, the study may help open the way to reconsidering the source of carbon for diamonds. As far as anyone can tell so far, most of the carbon seems to come from the depths of the mantle. But in recent years evidence has been building that at least some of it was once on the surface, and was shoved down by subducting tectonic plates like the ones Weiss proposes. A recent study by Lamont geochemist Peter Kelemen argues that the carbon can come from either the surface or the deep earth, though very little from either source gets turned into diamond. Weiss's current study does not examine this question. Are there more deposits to be found? Since the 1800s, scattered single diamonds have been found in many U.S. states and Canadian provinces, but almost none can be traced back to kimberlite sources. Some kimberlites have been uncovered, but most don't contain diamonds. 
One small mine operated in rural Arkansas in the early 1900s was quickly worked out; it is now a state park, where amateur diggers occasionally still find diamonds. Diamondiferous kimberlite was found in Colorado in the 1970s, but it was too poor for mining. The processes described in the Northwest Territories might have taken place elsewhere, but that remains to be seen. "Now it's time to look at fluid inclusions from other places," says Weiss. "Maybe the same things are happening in other areas. Maybe not." Weiss continues to work on related questions. At any one time, he has about 100 diamonds used for research. They are generally fingernail-clipping-size chips from larger stones. He keeps them wrapped up in elaborately folded small papers, labeled with origin and other information. In addition to Canadian diamonds, he has stones from Zimbabwe, Guinea, South Africa, Siberia and Brazil. Most have been loans or gifts from friends or colleagues, though a few years back he paid about $700 to a dealer in his native Israel for a half-dozen 1.5-carat African stones. Most of his investigations do not harm the diamonds; inclusions often can be analyzed by passing microscopic light beams or X-rays through them. However, in one new project aimed at diamonds from unusually deep regions, Weiss plans some destruction. To analyze isotopes of helium gas trapped within, he has to pulverize the diamonds to release the gas. (Diamond is the world's hardest substance, almost impossible to wear down–but a direct whack with a hammer will shatter one. Repeated beating turns it to something resembling fine granulated sugar.) "It seems crazy to crush diamonds, right?" he admits. "But it's the only way to get at that particular question." The Ekati diamond mine, on the tundra of Canada’s Northwest Territories, source of some of Weiss’s samples. The geochemist is interested in the origins of North American diamonds. 
Last year, Weiss published another paper about tiny droplets of fluid found encased within African gem-grade diamonds. Such droplets are fairly common in boart diamonds–inferior specimens like the fibrous type–but not in gems. Many scientists contend that boart and gem-grade stones crystallize out of two different kinds of fluids. To test the idea, Weiss obtained two very rare single-crystal stones containing fluids–one from South Africa's Finsch diamond mine, and one from a river deposit in Kankan, Guinea. The gems' fluids turned out to be similar to those in boart–a challenge to conventional theory. Such research could have practical applications. For one, greater knowledge of trace elements in diamond inclusions could lead to chemical "fingerprints" that would tell where commercial gems originated. This would allow better enforcement of the Kimberley Process, the 2003 UN agreement to blacklist so-called "blood diamonds" from nations where mining is controlled by warlords or corrupt governments. The process currently depends on paperwork that can be easily faked. In the lab, with a mass spectrometer, used to analyze minute bubbles of gases trapped within diamonds. Beyond this, "understanding diamond formation can tell us about the deep earth's carbon cycle, which we have very little knowledge about," says Weiss. This is the long-term movement of vast amounts of carbon from the atmosphere and surface down into earth's interior, via biological processes, chemical weathering, subduction of tectonic plates, and then back up again via large, more conventional volcanic eruptions. The cycle is thought to play key roles in controlling climate and biological evolution, and in unseen processes far below the surface. For all his expertise, Weiss has to admit: he has yet to visit a diamond mine. As a student in Israel, he considered collecting samples in conflict-ridden areas of west Africa, but his adviser discouraged him. 
"He wanted me to stay in the lab and stay alive–not get killed in the field," he says. The mines in northern Canada are safe, but hard to get to–far out on roadless tundra, accessible only by charter aircraft. "I'm still hoping, some day," he says. "Diamonds–they're a very nice stone. It would be fun some day to see where people are finding them." Most of the techniques used to analyze diamonds are harmless–but to get at gas compositions, the diamond has to be crushed. This is all that remains of one previously studied stone.
10.1038/nature14857
Medicine
Giving antibodies to infant macaques exposed to an HIV-like virus could clear infection
Early short-term treatment with neutralizing human monoclonal antibodies halts SHIV infection in infant macaques, Nature Medicine, DOI: 10.1038/nm.4063 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.4063
https://medicalxpress.com/news/2016-03-antibodies-infant-macaques-exposed-hiv-like.html
Abstract Prevention of mother-to-child transmission (MTCT) of HIV remains a major objective where antenatal care is not readily accessible. We tested HIV-1–specific human neutralizing monoclonal antibodies (NmAbs) as a post-exposure therapy in an infant macaque model for intrapartum MTCT. One-month-old rhesus macaques were inoculated orally with the simian-human immunodeficiency virus SHIV SF162P3 . On days 1, 4, 7 and 10 after virus exposure, we injected animals subcutaneously with NmAbs and quantified systemic distribution of NmAbs in multiple tissues within 24 h after antibody administration. Replicating virus was found in multiple tissues by day 1 in animals that were not treated. All NmAb-treated macaques were free of virus in blood and tissues at 6 months after exposure. We detected no anti-SHIV T cell responses in blood or tissues at necropsy, and no virus emerged after CD8 + T cell depletion. These results suggest that early passive immunotherapy can eliminate early viral foci and thereby prevent the establishment of viral reservoirs. Main Recent advances in the discovery of human HIV NmAbs that have high potency and breadth of coverage have rekindled an interest in their use as pre-exposure prophylaxis, as well as therapeutic agents, including in the setting of MTCT, in which the time of exposure is known 1 , 2 . A combination of measures—including antiretroviral treatment (ART) of the mother and the infant, cesarean section and formula feeding—have diminished the rate of MTCT from 35% to less than 3% (ref. 3 ). Despite this reduction, HIV infects approximately 200,000 children yearly, primarily in places where ART is not available 4 . Treatment of babies with ART during both the early peripartum and the breast-feeding timeframes is recommended 5 , but risks remain, including the toxicities associated with long-term use and the development of drug-resistant viral variants 6 . 
Therefore, discovering less toxic methods to limit transmission to newborns would be advantageous 2 . In mucosal HIV and SIV transmission, the virus establishes a small founder population of infected cells after it has traversed the vaginal mucosal barrier 7 . This localized infection rapidly expands and spreads to local draining lymph nodes (LN), before disseminating systemically by 1 week after exposure 8 , 9 . Similarly in nonhuman primate (NHP) models of oral SIV exposure, the oral and esophageal mucosa and the tonsils are sites of early viral infection within 1 d post-exposure (d.p.i.), with rapid systemic dissemination, via the regional lymphatics, occurring within 1 week after exposure 10 , 11 . Because IgG from the circulation contributes substantially to the immunoglobulin pool in tissue and genital tract secretions, passively transferred neutralizing antibodies (NAbs) may have a protective effect by interaction with the virus at the mucosal level 12 , thus preventing systemic spread. In adult NHP models of mucosal SHIV transmission, there is abundant evidence for protective prophylactic efficacy with passively transferred human NmAbs 13 , 14 , 15 , 16 , 17 , 18 . In vitro , NmAbs have been shown to block HIV infection of dendritic cells and subsequent transmission to T cells 19 . Direct vaginal application of NAbs before challenge is protective in macaques 20 , and in HIV-exposed but uninfected humans, mucosal IgA can block transcytosis in vitro 12 . Vaccine-induced protection from vaginal challenge correlates with levels of glycoprotein 41 (gp41)-specific cervicovaginal IgA and IgG that have antiviral and transcytosis-blocking activities 21 . However, the tissue localization and the kinetics of passively transferred antibodies are still not well defined 13 , 22 . There is evidence for an impact by NmAbs in lowering plasma virus levels in established infections in NHP models 23 , 24 , 25 and in humans 25 , 26 . 
In NHP models, post-exposure prophylaxis using cocktails of the first-generation human NmAbs b12, 2G12, 2F5 and 4E10 partially prevented oral SHIV infection in newborns 24 . A single dose combining the newer, more-potent NmAbs VRC07-523 and PGT121 delivered 10 d after intravenous SHIV infection suppressed acute viremia and limited seeding of viral reservoirs in adult macaques 27 . We have shown that neutralizing polyclonal IgG purified from SIV- or SHIV-infected macaques that are injected subcutaneously (s.c.) can effectively control viremia and accelerate B cell responses, resulting in reduced pathogenesis in SIV-infected adults 28 and in SHIV-infected newborn macaques 29 , 30 . We hypothesized that a cocktail of two potent and broadly cross-reactive NmAbs, VRC07-523 and PGT121, would slow the initial virus expansion and reduce the chance of rapid escape in infant macaques exposed to pathogenic SHIV. We show that combined doses as low as 10 mg per kg body weight (mg/kg) administered 24 h after exposure can intercept replicating viral foci established by day 1 and prevent orally administered virus from establishing permanent viral reservoirs. Results Titration and biodistribution of subcutaneously administered antibodies in macaques We initially conducted studies to define the protective dose and kinetics of the CD4-binding site–directed NmAb VRC01 in blocking newborn macaques from oral SHIV SF162P3 infection after s.c injection and to determine the kinetics of passively transferred IgG in naive and infected macaques. First, we administered VRC01 to a total of seven male and female one-month-old macaques at 20 mg/kg ( n = 2) or 5 mg/kg ( n = 5) 24 h before SHIV exposure. We measured SHIV SF162P3 envelope–specific binding and neutralizing antibody kinetics in vivo . The time to maximal concentration in the plasma was 24 h, independent of dose, and the serum (plasma) half-life of VRC01 was 3.9–4.2 d ( Supplementary Fig. 1 ). 
Neither of the two macaques injected with the 20 mg/kg dose became infected, and only one of the five macaques injected with the 5 mg/kg dose did. In this macaque, the magnitude and kinetics of virus in the plasma, termed the plasma virus load (PVL), were indistinguishable from those of control animals treated with IgG purified from naive macaques 30 ( Supplementary Fig. 1 ). These data are consistent with results from passive protection studies using VRC01 in juvenile and adult macaques 18 and guided the therapeutic range we used for infant macaques. Next, in a separate study designed to determine whether the kinetics of passively transferred IgG is altered in the presence of viral antigen, we assessed the distribution of purified polyclonal Ig from SIV-infected macaques (referred to as SIVIG) in a total of four male and female macaques, and we compared SIVIG kinetics in SIV-infected macaques to that in naive macaques. SIVIG was rapidly distributed in the plasma and tissues of infected and naive animals ( Supplementary Fig. 2 ). We used in situ hybridization to localize SIV in tissue samples collected at 24 h and at 2 weeks after oral challenge with SIV smE660 . SIV was undetectable in 24-h tissue samples but was detectable after 2 weeks in tissues both adjacent to and distant from the site of challenge ( Supplementary Fig. 3 ). Thus, IgG delivered subcutaneously is rapidly and widely distributed, and is unimpeded by viral antigen. NmAb cocktail immunotherapy in the presence of SHIV We next assessed the effectiveness of HIV-1 NmAbs as post-exposure prophylaxis in one-month-old infants inoculated orally with SHIV SF162P3 . For in vivo therapy, we tested a cocktail of VRC07-523 and PGT121, two potent NmAbs that target different regions of the HIV-1 envelope and that have been shown to have additive effects in vitro 31 . VRC07-523 is an engineered clonal relative of VRC01 that shows increased neutralization of most HIV strains and improved in vivo protection capabilities 32 . 
Therefore, we used VRC07 instead of VRC01 for these therapeutic studies. PGT121 interacts with the variable regions and glycans of HIV-1 gp120 (refs. 33 , 34 ) and protects adult macaques from mucosal challenge at very low plasma titers 35 . Cocktails of PGT121 and VRC07-523 were prepared at total doses of 10 mg/kg (5 mg/kg each antibody) and 40 mg/kg (20 mg/kg each antibody) and delivered subcutaneously. We inoculated 20 one-month-old rhesus macaques orally with SHIV SF162P3 on day 0 and followed them for up to 28 weeks to assess virological, immunological and disease outcomes, with or without NmAb treatment starting on day 1. Pairs of animals were killed at days 1, 2 or 14 after exposure to monitor the development of SHIV SF162P3 infection in the blood and tissues of treated and untreated macaques ( Table 1 ; groups 1–4). We delivered NmAbs on days 1, 4, 7 and 10 after SHIV exposure ( Fig. 1a and Table 1 ; groups 4–6). SHIV SF162P3 infection by the oral route in one-month-old macaques results in reproducible, sustained PVL at >10 7 copies/ml plasma and ∼ 10 4 copies per μg DNA for at least 24 weeks in all of the animals 30 . To conserve animals, historical controls were used as a comparison group for the 24-week follow-up ( Table 1 ; group 7). Table 1 Experimental design for testing a human NmAb cocktail as a therapy. Figure 1: NmAb cocktail dosing and kinetics in plasma. ( a ) Experimental design of the early NmAb therapy experiment. ( b ) ELISA assays using recombinant proteins RSC3 (resurfaced stabilized core gp120 protein) 48 and ST0A9 (scaffold protein displaying the V1V2 region and PGT121 epitope) 18 for the specific detection of VRC07-523 and PGT121, respectively (5 mg/kg and 20 mg/kg each NmAb, respectively) (top). VRC07-523 and PGT121 were combined in a 1:1 ratio by mass (μg/ml) to generate a cocktail for s.c. injection at doses of 10 mg/kg and 40 mg/kg (bottom). 
The NmAb cocktail was assayed by an SHIV SF162 gp140–specific ELISA (bottom left, 10 mg/kg cocktail; bottom right, 40 mg/kg cocktail). Data shown are NmAb concentrations in the plasma of 12 macaques. The concentrations were determined using nonlinear regression and the half-maximal effective concentration (EC 50 ) of the NmAb cocktail or the individual NmAb, and were graphed in GraphPad Prism. Error bars indicate s.d. The individual NmAbs and the NmAb cocktail were used as standard curves. Pre-treatment plasma (day 0) was used as a negative control for the assay. Source data Full size image We evaluated the kinetics of the individual NmAbs and the cocktail in plasma from all 12 treated infants that were on the study for at least 2 weeks ( Table 1 ; groups 4, 5 and 6). Peak NmAb cocktail concentrations in plasma occurred by 24 h after the s.c. injection in all animals at both NmAb doses. In the four macaques that received the 10 mg/kg dose, the average cocktail concentration during the first 2 weeks was 44 μg/ml, and in the eight macaques that received the 40 mg/kg dose, it was 113 μg/ml ( Fig. 1b ). By using reagents designed to specifically detect each NmAb independently, we found that PGT121 concentrations in the plasma were consistently higher at both doses than those of VRC07-523. Multiple dosing prevented us from calculating the in vivo half-lives of each NmAb, but PGT121 was detectable in plasma for 2 weeks longer than VRC07-523 in several macaques. PGT121, administered at 5 mg/kg, was maintained for >20 weeks in the plasma from a single macaque, 33537 ( Fig. 1b , bottom left). An unusually slow antibody decay rate in the plasma of subjects that were passively infused with PGT121 has been recently reported, in which plasma concentrations of 5–20 μg/ml were still present after 10 weeks 27 . 
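The nonlinear-regression step described in the legend, fitting an ELISA standard curve and reading sample concentrations off it at the EC 50 , can be sketched in Python. This is an illustrative reconstruction only: the paper performed this analysis in GraphPad Prism, and the four-parameter logistic (4PL) model, the standard-curve values and the function names below are our assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response model commonly used for
    ELISA standard curves (x = concentration, returns OD signal)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical standard-curve points: OD readings at known NmAb
# concentrations (ug/ml); noise-free so the fit recovers them exactly.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
od = four_pl(conc, 0.05, 2.5, 0.4, 1.2)

# Nonlinear regression: recover bottom, top, EC50 and Hill slope.
params, _ = curve_fit(four_pl, conc, od, p0=[0.0, 3.0, 1.0, 1.0])

def conc_from_od(od_value, p):
    """Invert the fitted curve to read a plasma NmAb concentration
    off the standard curve."""
    bottom, top, ec50, hill = p
    return ec50 * ((top - bottom) / (od_value - bottom) - 1.0) ** (-1.0 / hill)
```

With duplicate wells, the same inversion would be applied to each dilution within the linear range and the results averaged, as is conventional for ELISA quantification.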
We assessed SHIV SF162P3 -neutralization activity in the plasma of all infant macaques and found that it decayed by 6–7 weeks in all of the animals except macaque 33537, in which declining neutralization of SHIV SF162P3 was detected at titers of 10 2 –10 3 before becoming undetectable at week 20 ( Supplementary Fig. 4 ). The average 50% inhibitory concentration in plasma (IC 50 ) of the NmAb cocktail during the first 2 weeks after SHIV SF162P3 exposure was 0.0134 and 0.0120 μg/ml in the 10 mg/kg and 40 mg/kg groups, respectively, which is close to the IC 50 (0.0128 μg/ml) obtained from purified NmAbs specific for SHIV SF162P3 in the TZM-bl standardized cell line, which expresses luciferase in the presence of HIV or SIV Tat and is used to quantify NAbs in vitro ( Supplementary Fig. 4 ). To measure transudation of NmAb into tissues and to assess neutralization potency within tissues and organs during the first 2 weeks, we extracted specimens from six macaques at different necropsy time points. Analysis of the antibody extracted from two macaques (group 4) that were sacrificed at 14 d after four doses of NmAbs showed that NmAbs were systemically distributed at concentrations from 48 to 700 ng/ml of tissue lysate ( Supplementary Table 1 ). Two macaques (group 2a) exposed to SHIV SF162P3 , treated once with a NmAb cocktail dose of 10 mg/kg 1 d later and sacrificed on day 2, had NmAbs in tissue lysates in concentrations up to 791 ng/ml. Two macaques (group 2b) treated once with 10 mg/kg, without SHIV exposure, had NmAbs in tissues at levels similar to those in macaques 34263 and 34290, which were pre-exposed to SHIV. We assessed the neutralizing activity to SHIV SF162P3 in tissue homogenates from ∼ 100 mg of necropsy samples from the animals sacrificed at 1, 2 and 14 d after exposure (groups 2 and 4). The 50% neutralization titers (ID 50 ) for SHIV SF162P3 of tissue lysates averaged ∼ 1:50 in the tested samples at 1 d after s.c.
NmAb injections, increased to an average of ∼ 1:100 by day 2 and were 1:150 at day 14, with good agreement between the titers observed for macaques that were sacrificed at the same time points ( Table 2 ). However, NmAb titers in the colon and reproductive tract from macaque 34290 were about 3–5 times higher than those from macaque 34263, which was necropsied at the same time point. Tissue-associated IC 50 concentrations of the NmAb cocktail in the samples tested ranged from 0.5–10.0 ng/ml, which is similar to the IC 50 of the NmAb cocktail ( Supplementary Fig. 4 ). We conclude that the presence of SHIV during the first day after oral exposure did not affect NmAb distribution or levels in vivo and that the NmAb cocktail was rapidly distributed to tissues. Table 2 Neutralizing activity in tissue homogenates of infant rhesus macaques colocalizes with virus Full size table SHIV SF162P3 dissemination with and without NmAb therapy To determine the in vivo transudation kinetics of SHIV SF162P3 in blood and tissues early in infection with and without NmAb cocktail therapy after exposure, we quantified the amount of virus in blood from macaques killed on days 1, 2 or 14 after oral SHIV SF162P3 exposure ( Table 1 , groups 1–4). The 2-week time point was anticipated to be nearest to the time of peak PVL. Plasma viremia was detected by day 4, increased rapidly and peaked between 1 × 10 8 to 5 × 10 8 copies/ml in macaques that were not treated with NmAbs and necropsied at day 14 ( Fig. 2a ), consistent with results from the VRC01 study ( Supplementary Fig. 1 ) and prior studies 29 , 30 . In stark contrast, no virus was detected in plasma or peripheral blood mononuclear cells (PBMC) from NmAb-treated macaques killed at day 14 ( Fig. 2a,b ). Figure 2: Viral kinetics and tissue distribution during the first 2 weeks after oral SHIV exposure. SHIV SF162P3 viremia was quantified in eight male and female macaques that were either treated or untreated with NmAbs. 
( a , b ) PVLs (as assessed by measurements of SIV viral RNA in blood using a qRT-PCR) ( a ) or CAVLs in PBMC (as assessed by qPCR) ( b ). ( c ) Anatomic locations of tissues collected at necropsy following oral inoculation. ( d – g ) Viral DNA in tissues of untreated macaques ( d , f ) or in macaques treated with 10 mg/kg ( e ) or 40 mg/kg ( g ) NmAb cocktails and killed at the indicated times, as detected by an ultrasensitive nested qPCR and RT-PCR 37 assay targeting a highly conserved region in SIV gag encoded in the SHIV. Each sample was assayed in 12 replicates (5 μg each). Virus copy numbers were derived from the frequency of positive replicates using the Poisson distribution and calculated as copies per μg of DNA or copies per 10 6 cell equivalents using the input nucleic acid mass and by assuming a DNA content of 6.5 μg per million cells. Infected tissues are colored to indicate virus amounts, quantified as SIV gag copies/μg of DNA, according to the scale shown at the bottom. Source data Full size image We collected multiple tissues from all of the macaques at necropsy ( Fig. 2c ), and in samples collected within 2 d of SHIV SF162P3 exposure, we measured low levels of SHIV SF162P3 DNA in mucosa and LN that were proximal and distal to the oral exposure site ( Fig. 2d,e ) in treated and untreated animals. In comparison to treated animals, the virus was widespread at day 14 in untreated animals ( Fig. 2f ) and peaked at >3,000 copies/μg of DNA throughout the LN and gut, consistent with levels of DNA in the tissues of adult and six-month-old Macaca nemestrina with high levels of plasma viremia 36 . As seen in the blood, following NmAb treatment on days 1, 4, 7 and 10 at 40 mg/kg, virus was not detectable in any tissue at day 14 ( Fig. 2g and Supplementary Table 2 ). 
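The ID 50 titers reported for plasma and tissue lysates are derived from serial-dilution neutralization curves in the TZM-bl assay. A minimal sketch of the interpolation is shown below; the dilution series and percent-neutralization values are invented placeholders, and the paper's exact curve-fitting method may differ.

```python
import numpy as np

def id50(dilutions, pct_neut):
    """Reciprocal dilution giving 50% neutralization, interpolated
    linearly on log10(dilution) between the two bracketing points."""
    logd = np.log10(dilutions)
    for i in range(len(pct_neut) - 1):
        hi, lo = pct_neut[i], pct_neut[i + 1]
        if hi >= 50.0 > lo:  # 50% is crossed between points i and i+1
            frac = (hi - 50.0) / (hi - lo)
            return 10 ** (logd[i] + frac * (logd[i + 1] - logd[i]))
    return None  # 50% never crossed within the tested range

# Hypothetical 3-fold dilution series of a tissue lysate
dils = np.array([20.0, 60.0, 180.0, 540.0, 1620.0])
neut = np.array([95.0, 80.0, 55.0, 30.0, 10.0])
titer = id50(dils, neut)
```

In this invented example the 50% crossing falls between the 1:180 and 1:540 dilutions, giving an ID 50 of roughly 1:224.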
To determine whether the viral DNA–positive tissues contained replicating SHIV, we measured viral RNA in several tissue samples taken from these same macaques that were sacrificed on 1, 2 or 14 d after virus exposure. Viral DNA and RNA levels, tested as blinded samples from these tissues, show that productive SHIV SF162P3 infection had begun in multiple tissues by day 1, increasing exponentially by day 14 ( Table 2 ). However, in two macaques that were treated with 10 mg/kg on day 1 (group 2a; sacrificed on day 2), there was no viral RNA detected in the samples tested. Notably, the NmAbs were colocalized in these virus-positive tissues, suggesting the potential for antibody effects as early as 1 d after treatment. Moreover, the results suggest that virus present early after exposure can be intercepted and cleared by NmAbs present in the same tissues ( Table 2 ). Prevention of productive infection, viral rebound and pathogenesis To evaluate the effect of early short-term NmAb therapy on viral control, we monitored SHIV SF162P3 in blood, LN and tissues in animals followed for 24–28 weeks. PVL in the controls routinely peaked at 2 weeks post-infection (w.p.i.) and persisted at levels that ranged from 10 6 –10 8 copies/ml. In newborns that were treated with 10 mg/kg ( Fig. 3a ) or 40 mg/kg ( Fig. 3b ) NmAbs, there was no plasma viremia detected in any of the samples collected over the course of the study. A single time point in the 40 mg/kg group was positive for only one of two replicates, and additional material to retest this sample was not available. Longitudinal cell-associated viral loads (CAVL) in PBMC DNA were negative for each of the >300 samples tested from the ten macaques in groups 5 and 6 ( Fig. 3c,d ). In short, all of the NmAb-treated infants had undetectable PVL or CAVL in blood. Figure 3: SHIV SF162P3 -associated viremia is not established in plasma or PBMC of NmAb-treated infants. 
( a – d ) Quantification of virus in blood ( a , b ) and in peripheral blood cells ( c , d ) in both NmAb dosing groups of male and female infant rhesus macaques ( n = 10). Plasma viral loads were assessed by measurements of SIV viral RNA in blood using a qRT-PCR assay ( a , b ) and in PBMC by qPCR ( c , d ). CD8 + T cell–depletion study timeline is shown in red. Data shown in gray indicate mean levels of virus in plasma (±s.d.) from eight historical controls from an earlier study 18 , 30 . Source data Full size image We also measured the levels of SHIV SF162P3 DNA in >300 homogenized tissue samples obtained at 24–28 w.p.i. and in inguinal LNs collected 12 w.p.i. from all ten macaques, using ultrasensitive qPCR 37 . Tissue samples from all SHIV-exposed infants that received the 2-week course of NmAb cocktail were tested as coded samples and were negative for virus in both dosage groups ( Supplementary Table 2 ). As discussed above, only very low levels of virus were detected in tissue specimens from the group 2a macaques 34263 and 34290, which were sacrificed at 2 d.p.i. after a single NmAb dose ( Fig. 2e ). As in blood, virus was widespread in tissues at 14 d.p.i. in group 3 control macaques that did not receive NmAbs ( Fig. 2f ). As early as 1 d.p.i., tissue-associated virus was evident in mucosal tissue adjacent to the exposure site, the draining LN and the gut tissue in macaques from group 1 (untreated controls) ( Fig. 2d and Supplementary Table 2 ). As compared to the untreated (group 1) macaques killed at 1 d.p.i., NmAb-treated (group 2a) macaques that were sacrificed at 2 d.p.i. had significantly lower amounts of tissue-associated virus ( P = 0.0061; Fig. 4 ). The discovery of traces of virus at 2 d.p.i. ( Fig. 2e ) and none at 14 d.p.i. ( Fig.
2g ) implies that NmAbs intercepted and neutralized SHIV SF162P3 , cleared infected cells and halted the spread of infection in these macaques and in all ten NmAb-treated infants that were followed for 6 months. Consistent with these data, pathology assessments also showed the absence of organ or tissue abnormalities ( Supplementary Fig. 5 ). Figure 4: NmAb cocktail lowers tissue-associated viremia within 24 h after s.c. delivery. SHIV DNA quantified by ultrasensitive nested qPCR and RT-PCR 37 in each tissue sample shown from four control animals ( Table 1 , groups 1 and 2a) at either 1 d after SHIV exposure with no NmAb treatment or 1 d after s.c. injection with 10 mg/kg NmAb cocktail and 2 d after SHIV inoculation. P = 0.0061 by Wilcoxon signed-rank test (statistics performed in SAS 9.4 software). Source data Full size image No evidence for T cell immunity or viral rebound To evaluate T cell immunity in the SHIV-exposed macaques, we used intracellular cytokine staining (ICS) to measure PBMC, spleen and mesenteric LN responses specific for the SIV mac239 proteins Gag and Vif (present in the SHIV chimera) in the ten macaques that were studied for 6 months. No SHIV-specific T cell responses were detected in PBMC at week 20 or in tissue samples after necropsy ( Supplementary Fig. 6 ). To determine whether there were any reservoirs controlled by CD8 + T cell–mediated suppression, we depleted CD8 + cells to undetectable levels in the four animals in the 10 mg/kg group and monitored them for viremia for 4 weeks ( Supplementary Fig. 6 ). There was no evidence of virus rebound in the plasma of these animals during the CD8 + depletion phase ( Fig. 3a,c ), further supporting the concept that early passive NmAb therapy with the cocktail of VRC07-523 and PGT121 disrupted establishment of virus reservoirs, thereby preventing exposure to antigen and development of cellular immunity.
Discussion Pre-exposure prophylaxis (PrEP) with ART is effective in limiting transmission in the setting of MTCT, as well as in healthy adults 38 . One of the major goals in treating HIV-1 infection is to discover methods that can clear the established viral reservoir 39 . To date, only a single case of a 'functional cure' has been documented following bone marrow transplantation 40 . Vaccine-induced, persistent T cell responses cannot prevent infection but can reduce established SIV reservoirs to undetectable levels in about half of vaccinated macaques 41 . NHP studies with ART suggest that treatment as early as 3 d.p.i. is too late to prevent the establishment of the reservoir, as the virus rebounded after cessation of drug treatment 42 . These data are consistent with the case of the 'Mississippi baby', in which ART therapy was started within 30 h of birth but did not prevent HIV infection 43 , 44 . Thus, the time for intervention is extremely limited, and ART alone may not be effective at eliminating a founder infection. Viral RNA has been detected in macaques as early as 1 d following vaginal exposure to SIV 45 , but there is a common view that HIV and SIV may have a short 'eclipse phase' of limited, localized viral replication that lasts a number of days, in which the spread of the founder infection is dependent upon target cell availability to spread to lymphatic tissues 45 . Our data show that, at least in this model system of oral SHIV exposure in infant macaques, virus replication is detected in lymphatic tissues at 24 h after infection and is not locally restricted. Here we found evidence of an immediate impact of a single dose of passively transferred NmAbs on seeding of the virus, with a significant difference in early tissue-associated viral RNA and DNA in treated versus nontreated infants. 
Early post-exposure, short-term administration of powerful NmAbs effectively cleared the virus in vivo by 14 d and prevented viral rebound after decay of the passive NmAbs. We present three lines of evidence that the ten macaques studied for >24 weeks were clear of virus. Firstly, all of the macaques failed to develop adaptive immune responses. Secondly, we showed that >300 coded tissue samples from these macaques were virus negative, using an ultrasensitive PCR methodology based on detection of the SIV gag gene. Thirdly, we depleted the CD8 + T cells in the four lower-dose macaques (group 6) and observed no viral rebound. These experiments show that NmAbs delivered subcutaneously are swiftly distributed to blood and tissues and that they maintain neutralizing activity at distal sites. They further indicate that NmAbs are effective at clearing viral foci in blood and tissues during the earliest stages of HIV penetration of the tissues, a different mechanism from that of ART. We hypothesize that antibody-mediated effector functions, including cytotoxicity and phagocytosis, are required for killing infected cells that express HIV envelope gp160 on their surface 46 . If so, then NmAbs present for an extended period of time after exposure would have the capability to destroy infected cells and neutralize virus particles emanating from cells in which infection was established within the first few hours following an exposure event. In this setting, post-exposure therapy, using a NmAb cocktail administered 24 h after SHIV exposure and continuing for 2 weeks, resulted in the maintenance of high NmAb concentrations in vivo for at least 2 weeks. The importance of repeated dosing is not known, but in the case of breast-feeding mothers, there is continued opportunity for transmission of HIV-1. It will be important to understand whether repeated NmAb dosing in babies could expand the protective window. 
Several relevant questions remain unanswered for the treatment of HIV-infected newborns and children born to HIV-positive mothers, including the practical and cultural issues of treating breast-feeding mothers and babies, as well as a determination of optimal antibody cocktail formulations. Any future use of human NmAbs in the clinic will presumably require several antibodies, or engineered antibodies with multiple specificities, to avoid the potential for the emergence of viral escape mutants. Because ART has a short half-life and requires strict adherence to the drug regimen to be effective, supplementation with passive NmAbs with relatively long half-lives may widen the therapeutic window. Identifying human NmAbs that can impact infection in a macaque model for MTCT can provide a proof of principle for the value of using antibodies to augment ART. In fact, safety trials in HIV-exposed newborns for treatment with the NmAb VRC01 have begun in the USA and South Africa ( ), following a safety trial in adults 47 . Our findings begin to define the window of opportunity for effective treatment after intrapartum exposure. If these results can be applied to clinical settings, then there is optimism that early passive immunotherapy may provide protection from HIV infection, even in the absence of ART. Methods Animal models and humane-care guidelines. The Oregon Health and Science University West Campus Institutional Animal Care and Use Committee approved all macaque studies. Studies were performed at the Oregon National Primate Research Center in Beaverton, Oregon, USA (ONPRC). The ONPRC is accredited by the American Association for the Accreditation of Laboratory Animal Care International and adheres to the Guide for the Care and Use of Laboratory Animals 49 and the United States Public Health Service Policy on the Humane Care and Use of Laboratory Animals ( ). The initial study with SIVIG used four M. 
mulatta (male and female) of varying ages that were obtained from the breeding colony. For the studies in one-month-old macaques, 27 7-d-old M. mulatta (rhesus macaques, male and female) were obtained from the breeding colony and raised for 3 weeks in the animal biosafety level (ABSL)-2 infant nursery. Time of birth determined animal allocation, so that animals were randomly assigned to the study groups as they accrued. Protection studies with VRC01 ( n = 7 infants) were pilot studies and were not designed for statistical analyses (all or none effects of virus acquisition). Group sizes of six had been previously shown to allow statistically distinguishable measurements in plasma and cell-associated virus loads at 6 months as the primary study outcome for antibody treatment. Serial sacrifice studies included groups of two animals each, and viral quantification analyses included ∼ 30 tissue samples per animal. Infants were excluded if the sire or dam could not be confirmed to be absent of M. mulatta B*08 and B*17 major histocompatibility complex (MHC) class I alleles. At 1 month of age, after adaptation to formula feeding, animals were transferred to ABSL-2+ containment for study procedures. Infants were paired with another macaque of the same age for nursery care and containment housing. In all studies, infants were monitored for clinical signs of disease. Clinical evaluation included measurement of peripheral LN size and body weight, as well as evaluation of appetite, attitude and stool quality. All animals were euthanized under IACUC guidelines and consistent with the recommendations of the American Veterinary Medical Association (AVMA) Guidelines for Euthanasia 50 . IgG and NmAb preparations. Normal IgG was purified from 1 liter of pooled plasma from simian retrovirus (SRV)-negative and SIV-negative adult rhesus macaques as previously described 30 . 
VRC01, VRC07-523 and PGT121 were expressed as IgG1 antibodies by transient transfection of Expi293F cells (ThermoFisher Scientific Inc.) and purified over protein A columns 32 , 51 . The V H and V L regions of PGT121 were synthesized based on the published sequence 33 . Purified polyclonal anti-SIV smE660 IgG (SIVIG) was pooled from two SIV smE660 -infected animals from a prior experiment 28 . All antibody preparations were delivered subcutaneously at multiple sites around the dorsal cervical and thoracic regions of the animals in the doses described in the text. Virus inoculations. Infant macaques were administered a 50% animal infectious dose (AID 50 ) ( ∼ 7 × 10 8 viral RNA copies) of a macaque cell–grown stock of SHIV SF162P3 (ref. 52 ), divided into two 1-ml oral doses given ∼ 15 min apart. AID 50 was determined in a titration experiment described previously 36 . Virus detection in plasma, PBMC and tissue homogenates. Nucleic acid from plasma, cell culture supernatant or PBMC was purified using a Maxwell 16 instrument (Promega, Madison, WI) according to the manufacturer's protocol, using the LEV Viral Nucleic Acid Kit and the LEV Whole-Blood Nucleic Acid Kit, respectively. SHIV viral loads in plasma and cell culture supernatant were determined by quantitative RT-PCR using the methods developed by Piatak et al . 53 , except for a slightly modified master mix to increase sample input per reaction. SHIV viral loads in PBMC DNA were determined by quantitative PCR using Fast Advanced Mastermix on an Applied Biosystems QuantStudio 6 Flex instrument (Life Technologies, Carlsbad, CA). Reactions were performed with 2 μg nucleic acid input for 45 cycles using the FAST cycling protocol (95 °C for 1 s, 60 °C for 20 s) in a 30-μl reaction volume. Virus copy numbers were estimated by comparison to a linearized pBSII-SIV gag standard curve and calculated per cell equivalent using the input nucleic acid mass and by assuming a DNA content of 6.5 μg per million cells. 
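Quantification against the linearized pBSII-SIV gag standard curve amounts to a log-linear fit of Cq against known input copies, then inversion for each sample. The sketch below uses invented, idealized Cq values; only the 2 μg DNA input per reaction and the 6.5 μg-per-10 6 -cells conversion come from the text.

```python
import numpy as np

# Idealized standard curve: Cq values for known plasmid copy inputs,
# at ~100% PCR efficiency (about -3.3 cycles per 10-fold dilution).
std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
std_cq = np.array([36.7, 33.4, 30.1, 26.8, 23.5])

# Fit Cq = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0  # ~1.0 means doubling per cycle

def copies_per_million_cells_from_cq(cq, input_ug=2.0,
                                     ug_per_million_cells=6.5):
    """Read copies off the standard curve, then normalize the 2-ug DNA
    input to copies per 10^6 cell equivalents (6.5 ug DNA ~ 10^6 cells)."""
    copies = 10 ** ((cq - intercept) / slope)
    return copies / input_ug * ug_per_million_cells
```

A sample Cq of 30.1 on this invented curve corresponds to 1,000 copies per reaction, or 3,250 copies per 10 6 cell equivalents after normalization.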
Primers and probe used for plasma and PBMC assays were those described by Piatak et al . 51 : SGAG21 forward (GTCTGCGTCATPTGGTGCATTC), SGAG22 reverse (CACTAGKTGTCTCTGCACTATPTGTTTTG), and pSGAG23 (5′-(FAM)-CTTCPTCAGTKTGTTTCACTTTCTCTTCTGCG-(BHQ1)-3′). For viral RNA and DNA reservoir detection in tissues, a recently developed ultrasensitive nested quantitative PCR and RT-PCR approach 37 targeting a highly conserved region in SIV and SHIV gag was used. Primers used for DNA pre-amplification were SIVnestF01 (GATTTGGATTAGCAGAAAGCCTGTTG) and SIVnestR01 (GTTGGTCTACTTGTTTTTGGCATAGTTTC). The reverse-transcription step in the RNA assay used the SIVnestR01 primer instead of random hexamers, in order to facilitate priming of specific target sequences. Primers used for quantitative PCR were SGAG21 forward, SGAG22 reverse, and pSGAG23 as described above. PCR reaction conditions for both rounds were as described with minor modifications 54 . Briefly, samples were heated at 95 °C for 5 min and then put on ice. Each sample was assayed in 12 replicates (5 μg each), with two of the reactions including a spike of 10 or 20 copies of DNA or RNA, respectively, containing the SIV gag target sequence in order to assess PCR reaction efficiency. None of the tested RNA and DNA samples showed significant amplification inhibition, which was defined as a 5-cycle amplification delay as compared to the amplification kinetics of reactions containing solely 10 copies of standard. First-round amplification involved 12 cycles (95 °C for 30 s and 60 °C for 1 min) in 50-μl reactions. Then, 5 μl of each pre-amplified replicate was assayed by quantitative PCR using Fast Advanced Mastermix in a 30-μl reaction volume in the QuantStudio 6 Flex instrument. Reactions were performed for 45 cycles using the FAST cycling protocol. Virus copy numbers were derived from the frequency of positive replicates using the Poisson distribution and calculated as copies per μg of DNA. 
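The Poisson derivation of copy number from the fraction of positive replicates follows the single-hit model: a 5-μg replicate is negative with probability e^(-λ), where λ is the mean copies per replicate. A minimal sketch follows; the 12 × 5 μg replicate scheme and the 6.5 μg-per-10 6 -cells conversion are from the text, while the function names are ours.

```python
import math

def copies_per_ug(n_pos, n_reps=12, ug_per_rep=5.0):
    """Single-hit Poisson estimate: fraction of negative replicates
    equals exp(-lambda), so lambda = -ln(1 - n_pos/n_reps) copies
    per 5-ug replicate; divide by mass to get copies per ug DNA."""
    if n_pos >= n_reps:
        raise ValueError("all replicates positive: lambda unbounded")
    lam = -math.log(1.0 - n_pos / n_reps)
    return lam / ug_per_rep

def poisson_copies_per_million_cells(n_pos, n_reps=12):
    """Convert to copies per 10^6 cell equivalents, assuming a DNA
    content of 6.5 ug per million cells (as stated in the text)."""
    return copies_per_ug(n_pos, n_reps) * 6.5

# e.g. 6 of 12 replicates positive -> lambda = ln(2) copies per replicate
```

Note that the estimate is only informative when at least one replicate is negative; a fully positive panel places a lower bound on λ rather than a point estimate.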
Staff members performing the RNA and DNA assays were blinded to the plasma and tissue samples that were being tested for virus. Antibody detection in tissues and secretions. Tissue samples were sectioned and transferred into radio-immunoprecipitation assay (RIPA) buffer (PI89900, Thermo Fisher Scientific) with protease inhibitor cocktail (P8340, Sigma-Aldrich). Tissue disruption was accomplished with zirconia-silica beads (1.0 mm, Biospec Products) in a Beadbeater (Biospec Products) device with two cycles of 2-min intervals with brief incubations on ice between each cycle. Supernatants were aspirated and centrifuged for 5 min to pellet residual debris. Mucosal secretions were collected on Weck-cel spears and extracted as previously described 55 . Secretions were stored at −80 °C until assayed. Homogenates and secretions containing transudated antibody were used in ELISA and neutralization assays as described below. CD8 + T cell depletion and staining. Four animals were given the CD8-α–depleting antibody M-T807R1 (US NIH, Nonhuman Primate Reagent Resource). Peripheral blood was monitored for the presence of CD8 + T cells for 4 weeks. 1 × 10 5 PBMC were stained with anti-CD3–AlexaFluor 700, anti-CD8–Pacific Blue (BD Biosciences), and anti-CD4–PE-Cy7 (Biolegend). Intracellular cytokine staining (ICS). CD4 + and CD8 + T cell responses were measured from blood and tissues by flow cytometric ICS, as previously described 56 . Briefly, 1 × 10 6 mononuclear cells were incubated with Gag or Vif open-reading-frame pools and the co-stimulatory molecules CD28 and CD49d (BD Biosciences) for 1 h, followed by addition of brefeldin A (Sigma-Aldrich) for an additional 8 h. Co-stimulation without antigen served as a background control, while incubation with staphylococcal enterotoxin B (Toxin Technology) served as the positive control. The cells were then labeled with anti-CD4–PE-Cy7 (Biolegend) and anti-CD8–PerCP-Cy5.5 (BD Biosciences) and fixed with 2% paraformaldehyde. 
After permeabilization, the cells were stained with anti-CD3–Pacific Blue, anti–IFN-γ–APC, anti–TNF-α–FITC (BD Biosciences, all), and anti-CD6– PE-Texas Red (Beckman Coulter). The cells were fixed and flow cytometric analysis was performed on an LSR-II instrument (BD Biosciences). Analysis was done using FlowJo software (Tree Star, Ashland, OR). In some cases, cells were CD25-depleted before setting up the ICS experiment to remove T reg cells (Miltenyi Biotec). In situ hybridization (ISH). In the SIVIG transudation experiments, measurement of in-situ hybridization for SIV antigen was performed on tissues collected from the two animals that underwent oral SIV challenge. Formalin-fixed, paraffin-embedded tissues were assayed for SIV viral RNA expression by ISH as previously described 57 . Briefly, following deparaffinization, the sections were hybridized overnight at 45 °C with either a sense or an antisense SIVmac239 digoxigenin-UTP–labeled riboprobe. The hybridized sections were blocked with 3% normal sheep and horse serum in 0.1 M Tris, pH 7.4, and then incubated with sheep anti-digoxigenin–alkaline phosphatase (Roche Molecular Biochemicals) and nitroblue tetrazolium-5-bromo-4-chloro-3-indolyl-β- D -galactopyranoside (BCIP; Vector Labs). ISH-stained tissues from submandibular LNs, tonsils and the ileum were visualized and photographed with a Zeiss Axiophot microscope. Enzyme-linked immunosorbent assays (ELISAs). ELISA was used to detect total IgG and SIV gp130–specific antibodies in the SIVIG transudation experiments. Briefly, half-well EIA plates (Costar) were coated with either a goat anti–rhesus IgG (H+L) (unlabeled; Southern Biotech) (for total IgG ELISA) or recombinant SIV smE660 gp130, purified as described 28 (for gp130 ELISA), at 2 μg/ml in carbonate-bicarbonate buffer and incubated overnight. Plates were washed three times (0.1% Triton X-100 in 1× PBS) and blocked with 1% normal goat serum and 5% nonfat dried milk in PBS for 1 h at RT. 
SIVIG standards and homogenates were diluted in 1% Triton X-100, 2% bovine serum albumin, 5% FBS in PBS. After washing, a 1:4,000 dilution of goat anti–rhesus IgG (H+L)-horseradish peroxidase (HRP) (Southern Biotech) was added and incubated for 1 h at RT followed by TMB substrate (Southern Biotech). Plates were read on a SpectraMax 190 at an absorbance wavelength of 650 nm. Data were reported as the slope of absorbance over time. Concentrations of SIVIG in tissue disruption supernatants were calculated by comparing the average slope numbers to those from the SIVIG standard curve. ELISA was used to assess for the presence of gp140-specific antibodies as previously described 56 in plasma and tissue homogenates. Plasma NmAb levels were quantified using plates coated with either RSC3 (re-surfaced stabilized core gp120 protein) 48 (VRC07-523) or ST0A9 (scaffold protein displaying the V1V2-region and PGT121 epitope) (PGT121) 58 . Briefly, Nunc MaxiSorp (Thermo Fisher) plates were coated overnight with 200 ng/well of RSC3 in PBS, washed with PBST five times, and blocked with TBST with 5% milk and 2% BSA for 1 h at RT. Serial dilutions of all samples were plated in duplicate. Each NmAb (for standard curves) and positive and negative controls were included on each plate. Plasma was incubated for 1 h at RT, followed by a PBS–Tween 20 wash. Bound NmAbs were probed with a HRP-labeled goat anti–human IgG (1:5,000 dilution; Jackson Laboratories) for 30 min at RT. The plate was washed and TMB (Pierce) substrate was added. Once color was developed, stopping buffer was added and the optical density at 450 nm was read. GraphPad Prism and Microsoft Office software was used to calculate NmAb concentrations. TZM-bl neutralization assay. Plasma samples from each animal were tested at all available time points for neutralizing activity using the 96-well TZM-bl neutralization assay described previously 59 . Statistics. 
The data in Figure 4 show SIV DNA copies measured at 17 anatomical sites in two paired groups of four control animals. A two-stage sequential approach was used: first, the SIV DNA copies of the two biological replicates for each site in each group were averaged; second, a Wilcoxon signed-rank test was applied to account for the matched-pair nature of the anatomical sites.
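The two-stage approach can be sketched with SciPy in place of the SAS 9.4 software used in the paper; the per-site replicate values below are invented placeholders, not the study's data, and six sites stand in for the 17 analyzed.

```python
import numpy as np
from scipy.stats import wilcoxon

# Stage 1: average the two biological replicates per anatomical site
# (rows = sites, columns = replicates; copies/ug DNA, illustrative).
untreated = np.array([[120.0, 90.0], [60.0, 80.0], [200.0, 150.0],
                      [30.0, 50.0], [10.0, 14.0], [75.0, 95.0]])
treated = np.array([[12.0, 8.0], [5.0, 9.0], [20.0, 16.0],
                    [2.0, 6.0], [1.0, 3.0], [7.0, 11.0]])
u_site = untreated.mean(axis=1)
t_site = treated.mean(axis=1)

# Stage 2: paired Wilcoxon signed-rank test across matched sites.
stat, p = wilcoxon(u_site, t_site)
```

Because every site in this toy example favors the untreated group, the signed-rank statistic is 0 and the exact two-sided P value reaches its minimum for n = 6 pairs.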
Scientists at the Oregon National Primate Research Center today revealed that infant rhesus macaques treated with antibodies within 24 hours of being exposed to SHIV, a chimeric simian virus that bears the HIV envelope protein, were completely cleared of the virus. The study, published today in Nature Medicine, shows that antibodies given after a baby macaque has already been exposed to SHIV can clear the virus, a significant development in the HIV scientific community. SHIV-infected nonhuman primates can transmit SHIV to their offspring through milk feeding, just as humans can transmit HIV from mother to child through breastfeeding and during childbirth (and only rarely during pregnancy). In humans, a combination of measures for mothers and infants, including antiretroviral therapy (ART), Cesarean section delivery and formula feeding (rather than breastfeeding), has decreased the rate of mother-to-child HIV transmission from 25 percent to less than 2 percent since 1994. Despite this decrease, approximately 200,000 children are infected with HIV each year worldwide, primarily in developing countries where ART is not readily available. "We knew going into this study that HIV infection spreads very quickly in human infants during mother-to-child transmission," said Nancy L. Haigwood, Ph.D., senior author of the paper, and director and senior scientist, Oregon National Primate Research Center at Oregon Health & Science University. "So we knew that we had to treat the infant rhesus macaques quickly, but we were not convinced an antibody treatment could completely clear the virus after exposure. We were delighted to see this result." Haigwood and colleagues administered the anti-HIV-1 human neutralizing monoclonal antibodies (NmAbs) subcutaneously on days 1, 4, 7 and 10 after the macaques were exposed to SHIV orally. The SHIV virus was found in multiple body tissues on day 1 in macaques without antibody treatment.
Conversely, they observed an immediate impact of a single dose of antibodies at the start of the infection, with a significant difference in treated versus non-treated macaques. Early short-term administration of powerful antibodies effectively cleared the virus by day 14, with no virus detected at this time. Using highly sensitive methods, they did not detect the virus in any part of the body in 100 percent of the antibody-treated infant macaques for at least six months. Typically, HIV infection rapidly expands and spreads in humans to local draining lymph nodes before disseminating throughout the entire body one week after a person is infected. This study showed that, at least in this model system of oral SHIV exposure in newborn macaques, virus replication is detected in lymphatic tissues 24 hours after exposure and is not locally restricted, as has been suggested previously for humans, due to delays of 5 to 7 days before detection in the blood. The study showed that: 1) antibodies delivered subcutaneously are swiftly distributed to blood and tissues and maintain neutralizing activity at various sites, and, 2) that antibodies are effective at clearing the virus, a different mechanism than that of ART, which is a combination of several antiretroviral medicines used to slow the rate at which HIV makes copies of itself in the body. "Other nonhuman primate studies with antiretroviral therapy suggest that treatment as early as three days after infection is too late to prevent establishment of the HIV reservoir," said Jonah B. Sacha, Ph.D., study co-author and assistant scientist, Oregon National Primate Research Center at OHSU. "So using antibodies to clear the virus after infants have already been exposed could save thousands of lives" if the approach works in human infants. The researchers noted that treating human babies with ART during the last month of gestation, the few days after delivery, and during breastfeeding timeframes, is recommended. 
However, risks remain, including toxicities associated with long-term ART use, the development of drug-resistant viral variants, and lack of access to prenatal care prior to delivery. This discovery indicates that using new methods, such as antibodies, to limit infection after exposure in newborns could be advantageous. The study authors acknowledge that several relevant questions remain unanswered for treatment of HIV-infected newborns and children born to HIV-positive mothers. These include practical and cultural issues of treating breastfeeding mothers and babies, whether the antibodies will work in human infants exposed to HIV, and what the optimal antibody formulations will be. Clinical trials in which HIV-exposed newborns are treated with antibodies have begun in the U.S. and South Africa, following a phase I clinical trial in HIV-negative adults that showed the antibodies to be safe and well-tolerated in these individuals. The authors' findings help define the window of opportunity for effective treatment after exposure to HIV during birth. If these primate model results can be applied to human beings in a clinical setting, researchers are hopeful that treating infants within 24 hours of exposure to HIV may provide protection from viral infection, even in the absence of ART.
10.1038/nm.4063
Biology
Illinois researchers are first to count growth factors in single cells
Phuong Le et al, Counting growth factors in single cells with infrared quantum dots to measure discrete stimulation distributions, Nature Communications (2019). DOI: 10.1038/s41467-019-08754-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-08754-5
https://phys.org/news/2019-02-illinois-growth-factors-cells.html
Abstract The distribution of single-cell properties across a population of cells can be measured using diverse tools, but no technology directly quantifies the biochemical stimulation events regulating these properties. Here we report digital counting of growth factors in single cells using fluorescent quantum dots and calibrated three-dimensional deconvolution microscopy (QDC-3DM) to reveal physiologically relevant cell stimulation distributions. We calibrate the fluorescence intensities of individual compact quantum dots labeled with epidermal growth factor (EGF) and demonstrate the necessity of near-infrared emission to overcome intrinsic cellular autofluorescence at the single-molecule level. When applied to human triple-negative breast cancer cells, we observe proportionality between stimulation and both receptor internalization and inhibitor response, reflecting stimulation heterogeneity contributions to intrinsic variability. We anticipate that QDC-3DM can be applied to analyze any peptidic ligand to reveal single-cell correlations between external stimulation and phenotypic variability, cell fate, and drug response. Introduction Single-cell analytical techniques are reshaping our understanding of biology by revealing the distribution of gene expression and phenotype across a population of cells 1 , 2 . Applied together with systems biology models and information theory, it is now becoming clear that any population of genetically identical cells naturally exhibits substantial cell-to-cell variability that is integral to the emergence of ensemble biological functions 3 . This heterogeneity has important consequences, as rare cells, rather than cells near the ensemble mean, often dominate clinically meaningful pathogenic processes and drug resistance 4 , 5 , 6 .
However, a void exists in experimental techniques to measure how cellular decision-making processes underlying population variability derive from extracellular biochemical signals, such as peptide growth factors and cytokines 7 , 8 , which cannot be easily measured at the single-cell level. Biochemical stimulation, the induction of an intracellular biochemical signal (e.g., receptor activation and translocation) by binding of an exogenous biochemical factor, is usually inferred indirectly from the resulting change in gene expression or cell phenotype 8 . Moreover, input factors are typically applied at stimulation extremes (zero and near saturation) 9 , whereas physiologically relevant tissue concentrations are in intermediate regimes ( c ~ 1–100 pM) 10 , 11 over which cells exhibit sensitive and heterogeneous dose–response relationships (EC 50 ~ 1–100 pM) 12 , 13 . At these concentrations, relevant tissue microdomain volumes (~10 pL) contain just tens to hundreds of factors 14 , 15 , such that signal stimulation is temporally and spatially stochastic 16 . Accurate quantification of initiating signals is therefore very challenging 17 and requires single-molecule sensitivity. Here we describe a technology platform to digitally count growth factors in single cells using fluorescent quantum dots (QDs) and calibrated three-dimensional (3D) deconvolution microscopy (QDC-3DM). As a prototypical example, we focus on epidermal growth factor (EGF) and EGF receptor (EGFR)-positive cells. Fluorescent QDs are used as tags for EGF due to their extremely high fluorescence intensity that is homogeneous and stable at the single-QD level 18 . For maximum signal detection and comprehensive counting of EGF with rapid image acquisition, wide-field excitation is used to collect complete 3D images of cells, and deconvolution is used to reassign photons to their originating focal volumes. 
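The scale of the problem described above (tens to hundreds of factors in a ~10 pL microdomain at 1–100 pM) follows directly from Avogadro arithmetic; a quick sketch to reproduce those orders of magnitude:

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules(conc_molar, volume_liters):
    """Expected number of molecules at a given molar concentration in a given volume."""
    return conc_molar * volume_liters * AVOGADRO

# A ~10 pL tissue microdomain at physiologically relevant growth-factor concentrations:
for conc_pm in (1, 10, 100):
    n = molecules(conc_pm * 1e-12, 10e-12)
    print(f"{conc_pm:>3} pM -> ~{n:.0f} molecules")
```

At 1–100 pM this yields roughly 6 to 600 molecules per microdomain, consistent with the "tens to hundreds" figure that motivates single-molecule counting.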
We observe that this methodology is only accurate when applying QDs with infrared emission due to interfering fluorescence from cellular components across the visible spectrum. We apply QDC-3DM to analyze EGF-induced cell signaling variability in triple-negative breast cancer cells (MDA-MB-231) grown on micropatterned islands to spatially register signaling events across separate cells. Our results show proportionality between stimulation and both receptor internalization and inhibitor response, reflecting stimulation heterogeneity contributions to intrinsic variability at the single-cell level. Results Imaging and image analysis Figure 1a shows the overarching approach to measure the distribution of stimulation events of growth factors binding to cognate receptors, yielding a response distribution that plays an important role in the variability of signals and behavior between cells. Figure 1b summarizes the imaging and analysis methodology to measure absolute counts of growth factors, using two sequentially collected image stacks. A deconvolved high-resolution 3D epifluorescence image of cells is collected in three colors to distinguish QD-EGF conjugates (in red) spatially registered to the cell location by its fibronectin matrix (in green) and nucleus (in blue). The second image stack is a high temporal resolution video in the QD-EGF color channel. As described in detail in Methods, a three-step process is applied to count EGF molecules per cell: (1) Single QD-EGF spots are identified in videos by distinctive time-course intensity traces, I ( t ), for which two discrete intensities are present in two-dimensional (2D) images, \(I_{1{\mathrm{QD}}}^{2{\mathrm{D}}}\) and \(I_{\mathrm{B}}^{2{\mathrm{D}}}\) , respectively, corresponding to the intrinsic QD intensity and its background due to on-and-off intermittency of emission (i.e., blinking) 19 , 20 . 
(2) Volumetric intensities of single QDs from deconvolved 3D images are averaged to yield \(\overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}}\) , the average intensity of a single QD-EGF. (3) The number of contributing QDs to each spot in 3D images, N QD,spot , is calculated by dividing the volumetric spot intensity by the single-QD intensity. Finally, the total number of QDs is then calculated across each cell to determine the number of EGF per cell, N EGF,cell : $$N_{{\mathrm{EGF}},{\mathrm{cell}}} = \mathop {\sum }\limits_{{\mathrm{spots}}} N_{{\mathrm{QD}},{\mathrm{spot}}} = \mathop {\sum }\limits_{{\mathrm{spots}}} I_{{\mathrm{spot}}}^{3{\mathrm{DD}}} \cdot \overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}} ^{ - 1}.$$ (1) Fig. 1 Quantum dot (QD) calibrated three-dimensional (3D) deconvolution microscopy (QDC-3DM). a Schematic representation of the contribution of single-cell stimulation distribution (growth factor binding) to signaling response distribution (measured by receptor internalization). b Depiction of the QDC-3DM image analysis methodology to count growth factors in single cells. The process begins with acquisition of 3D fluorescence images of single cells to localize single QDs and spatially register their locations. A representative 3D image shows a cell stimulated with QD-epidermal growth factor (QD-EGF) (red) on an Alexa Fluor 488-labeled fibronectin substrate (green) with nucleus labeled with Hoechst (blue). Each 3D image is deconvolved and spatially correlated to two-dimensional (2D) videos in the QD color channel. In the first step shown at right, time traces of spot intensities are used to identify single QDs by their distinctive two-component intensity distributions. In the second step, the average intensity of these single QDs from 3D deconvolved images, \(\overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}}\) , is measured. 
In the third step, the 3D intensity of each spot, \(I_{{\mathrm{spot}}}^{3{\mathrm{DD}}}\), is measured and registered to the average single-QD intensity, to calculate the number of QD-EGF per spot, N QD,spot . The number of EGF per cell, N EGF,cell , is then calculated as the sum of all N QD,spot . Quantum dot probe engineering. Accuracy of Eq. (1) requires that each QD bound per cell corresponds to a single EGF, thus requiring that each QD is bound to a single EGF. Monovalency between QDs and growth factors is further important to prevent artificial cross-linking between receptors that would not reflect the intrinsic monomeric nature of EGF. We optimized QD-EGF conjugates to ensure functional monovalency using an EGF engineered with a single N-terminal biotin, which self-assembles with covalent QD conjugates of streptavidin (SAv) with near covalent bond strength 21 . We adopted a previous strategy to generate monovalent conjugates by tuning the ratio between QDs and EGF (Fig. 2a) 22 and used a functional assay to count the number of QD-EGF conjugates per cell. The discrete number of QDs bound to cells followed a linear trend with increasing EGF conjugation up to a 1:1 ratio, at which point multiple EGFs per QD no longer proportionally increased the number of QDs bound (Fig. 2b). We thus used an EGF:QD ratio of 0.3 to ensure that we were within the linear regime of functional monovalency, and that binding led to endocytosis (Supplementary Figure 1). This conjugation scheme left a substantial fraction of the QD population unbound to EGF, which is inconsequential for these studies, as QD binding events were EGF-specific based on a competition assay and the absence of bare QD binding to cells (Supplementary Figure 2). Fig. 2 Functional and optical characterization of epidermal growth factor (EGF) conjugates of dyes and quantum dots (QDs). a Schematics show assembly of QD-EGF conjugates with controlled valency through biotin-streptavidin conjugation.
b Experimental relationship between QDs bound per cell versus EGF:QD conjugation ratio. The red dashed line indicates the average number of QDs per cell for QDs alone, with red shade indicating the background standard deviation. MDA-MB-231 cells were treated with QD-EGF conjugates at 0.5 nM for 10 min on ice. N ≥ 10 cells. c Specific binding isotherm for dye-EGF and QD744-EGF to MDA-MB-231 cells measured by flow cytometry. Raw data are shown in Supplementary Figure 3 . N = 2. d Fluorescence spectra of dye and QDs used in this work and mean autofluorescence, in arbitrary units. For autofluorescence, N ≥ 13 cells. e Representative images of autofluorescence, dye-EGF, QD565-EGF, QD605-EGF, and QD744-EGF bound to cell, measured in their respective spectral bands. All QD-EGF were bound to the same cell. The black square in each brightfield (BF)/nucleus micrograph indicates the zoom-in area shown in the fluorescence images. Yellow arrows indicate autofluorescence; red arrows indicate dyes or QDs. f Two-dimensional intensities of autofluorescence, single dye and single QDs. The box indicates 25/75th percentile; red lines are the mean value; whiskers are s.d.; N ≥ 30 for dye and QDs; N > 20,000 spots for autofluorescence at each wavelength. g Receiver operating characteristic (ROC) curve showing higher detection accuracy of single QD744 (dark red) compared with dye (blue), and QD565 (green) or QD605 (orange) in the presence of autofluorescence shown in f . Numbers indicate areas under the ROC curves (AUROC). h Representative BF/nucleus micrograph with orthogonal 3D fluorescence images of dye-EGF and QD744-EGF bound to the same cell, imaged from the bottom of the cell to the top. A z-projection of summed fluorescence intensity of dye-EGF and QD-EGF is shown at right. i Same as h , but fluorescence images were acquired from the top of the cell to the bottom. In b - d , points indicate mean ± s.d. 
In e, h, and i, scale bars, 5 μm. We use compact alloyed QDs that we recently developed with hydrodynamic dimensions near 10 nm (Supplementary Figures 4–5) 23 , compared with 15–35 nm sizes for commercial variants, to be near the 8-nm spacing between adjacent EGF molecules in EGFR oligomers so as to avoid steric hindrance impacts on signaling 24 . Binding isotherms on MDA-MB-231 cells measured by flow cytometry showed nearly identical affinity for the QD-EGF conjugates (K D = 3.1 ± 0.6 nM) compared to EGF conjugated to a single tetramethylrhodamine dye (dye-EGF; K D = 3.2 ± 0.4 nM) (Fig. 2c), which has a similar binding affinity as unlabeled EGF 25 . Measured K D values were in the range of those reported previously for EGF-EGFR binding on other cell types 26 , 27 . The similar affinity is logical, as the k on for EGF-EGFR binding is orders of magnitude smaller than that of a diffusion-controlled reaction 28 , and the diffusion coefficient of QD-EGF is only 4–5 times smaller than that of dye-EGF. In addition, dye-EGF and QD-EGF conjugates resulted in similar numbers of fluorescent endosomes (Supplementary Figure 6), which is consistent with previous findings and indicates similar degrees of receptor activation 29 . Compact QDs with fluorescence in the visible spectrum have similar intensities to cellular autofluorescence in epi-illumination mode (Fig. 2d–f), making absolute quantification in 3D impossible. Measurements using fluorescent dyes are much worse; single dye-EGF conjugates were eight times dimmer than mean cellular autofluorescence (Fig. 2e, f) and more than 10 times dimmer than QD605. We thus tuned the QD emission through ternary compositional alloying of the core to the near-infrared, beyond 700 nm, where autofluorescence is substantially attenuated (Fig. 2d), resulting in facile identification of individual QDs with 40-fold higher mean intensity than autofluorescence (Fig.
2f ) for high-accuracy quantification with an area under receiver operating characteristic (AUROC) of 0.99 (Fig. 2g ). Notably, in this spectral window, the dye intensity is still comparable in intensity to autofluorescence. This unique utility of QDs to emit with high intensity in the near-infrared for low-background cellular imaging adds to the growing value of these materials for applications such as deep-tissue imaging 30 . Absolute quantification of EGF bound to a cell with a thickness of ~10 μm requires photostable labels because imaging the entire 3D cell volume requires repeated excitation to acquire sequential image planes at sufficient z -axis resolution. QDs are expected to outperform dyes for this application due to their exceptional photostability that exceeds that of dyes by orders of magnitude 31 , 32 . We labeled cells with a combination of dye-EGF and QD744-EGF and reconstructed 3D images from slices that were either acquired from the bottom to the top of a cell (Fig. 2h ) or from the top to the bottom (Fig. 2i ). The observed distribution of intensity for the dye-EGF conjugate was substantially different between the two acquisition processes, with photobleaching clearly apparent in the slices acquired at later times in both cases. In contrast, QD-EGF showed similar intensity distributions for both acquisition routines, highlighting the benefit of QD photostability. Fluorescence intensity quantization While absolute quantification of QDs and fluorescent molecules on flat surfaces (e.g., coverslips, basolateral membranes, and microbes) with sparse labeling is well established 33 , counting in 3D presents unique challenges for intensity calibration due to autofluorescence, out-of-focus signals, and random single-QD blinking. Compared with 2D images, the intensity of a single near-infrared QD in 3D overlaps substantially with background (AUROC = 0.96), even in the absence of cells (Fig. 3a ). 
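The calibrated counting that this enables (Eq. (1)) reduces to dividing each deconvolved spot intensity by the mean single-QD intensity and rounding to an integer. A schematic version with hypothetical intensity values and my own function names, not the authors' pipeline:

```python
def count_qds_per_spot(spot_intensities, single_qd_intensity):
    """Estimate the integer number of QDs in each diffraction-limited spot.

    Each detected spot contains at least one QD, so counts are floored at 1.
    """
    return [max(1, round(i / single_qd_intensity)) for i in spot_intensities]

def egf_per_cell(spot_intensities, single_qd_intensity):
    """Total EGF count for one cell: sum of per-spot QD counts (Eq. 1)."""
    return sum(count_qds_per_spot(spot_intensities, single_qd_intensity))

# Hypothetical deconvolved 3D spot intensities, with a calibrated
# mean single-QD intensity of 1000 (arbitrary units):
print(egf_per_cell([980, 2100, 3050, 1010], 1000.0))  # spots of ~1, 2, 3, 1 QDs -> 7
```

The monovalent conjugation described earlier is what licenses the final step of equating QD counts with EGF counts.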
Deconvolution reassigns out-of-focus light back to its point of origin to increase signal-to-noise ratio 34 , which we observe increases QD intensity significantly over background, with unity AUROC. Shown in Fig. 3b , deconvolved 3D intensities of single QDs verify that multiple QDs in a single diffraction-limited 3D spot can be accurately counted. This deconvolved intensity was independent of the distance across the thickness of a cell (Supplementary Figure 7 ). We analyzed 1, 2, and 3 QD spots, identified by their distinguishable 2D intensity time–trace distributions resulting from blinking, with example data shown in Fig. 3c–e . The quantized number of QDs contributing to each distribution was determined by fits validated by Akaike information criteria (AIC) 35 , with examples shown in Fig. 3e . This outcome is important because EGFR oligomers and coalesced receptors within endosomes will contain numerous EGFs within a diffraction-limited volume. Fig. 3 Intensity calibration for counting growth factors. a Representative images show QD744 imaged in two-dimensional (2D), three-dimensional (3D), and 3D after deconvolution. Histograms depict noise (gray) and quantum dot (QD) intensities (red). Voxel sizes are 3 × 3 for 2D and 3 × 3 × 11 for both 3D and deconvolved 3D. N = 48. Scale bar, 5 µm. b Intensity calibrated deconvolved 3D analysis of diffraction-limited spots containing 1, 2, or 3 QDs. Points indicate mean ± s.e.m. with N = 72, 216, and 27 spots for 1, 2, and 3 QD spot −1 , respectively. The number of QDs per diffraction-limited spot is calculated from distributions of intensity from time traces based on the number of Gaussians fitted to brightness histograms. The number of QDs per spot is the number of fitted Gaussian minus 1 (noise) . c Examples of intensity time traces of 1, 2, and 3 QD spot −1 . d Gaussian fitting for intensity histograms of 1, 2, and 3 QD spot −1 corresponding to examples shown in c . 
e Minima of Akaike information criteria (AIC) are indicated by red arrows, corresponding to the optimal number of Gaussians to fit each intensity histogram. EGF quantification. We used QDC-3DM to count the number of EGF molecules in individual MDA-MB-231 cells using a commonly applied temporal-pulse stimulation experiment 36 . Cells were exposed to QD-EGF conjugates across four log(10)-spaced concentrations for 5 min on ice, and unbound conjugates were washed away. Figure 4a shows example images of cells treated with 0.1, 1, and 10 nM concentrations of QD-EGF, across which counts per cell ranged from 0 to 4000, with mean values linearly correlated with concentration between 0.1 and 10 nM (Fig. 4b and Supplementary Figure 8a), and some binding saturation at 100 nM (Supplementary Figure 9). Importantly, mean stimulation numbers were reproducible between two independent experimental replicates (Fig. 4b and Supplementary Figure 8a) and EGF distributions fit well to gamma distributions (p ≥ 0.79 by χ 2 test) (Fig. 4c, red regressions), suggesting a correlation with the intrinsic distribution of receptor number 2 , 5 , 37 , 38 . We simulated the bound ligand distribution by applying a ligand–receptor kinetic binding model 12 , 28 , 39 to known distributions of EGFR expression in MDA-MB-231 cells 37 , 38 , 40 , with ligand binding further distributed by a Poisson to simulate ligand binding probabilities per cell (see Methods). The experimental EGF binding distribution matched simulations well both at 37 °C (Supplementary Figure 10) and at 4 °C (Fig. 4c, d and Supplementary Figure 8b), with <13% deviation in mean stimulation magnitude between theory and experiment (Supplementary Table 1).
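A cartoon of the simulation logic just described: draw each cell's receptor number from a gamma distribution, convert it to an expected number of bound ligands, and add Poisson noise per cell. The shape/scale values and the fixed occupancy factor are illustrative stand-ins for the kinetic binding model actually used in the paper:

```python
import math
import random

def simulate_bound_egf(n_cells, shape, scale, occupancy, rng):
    """Per-cell EGF counts: gamma-distributed receptor numbers with
    Poisson-distributed ligand binding on top (a cartoon of the model)."""
    counts = []
    for _ in range(n_cells):
        receptors = rng.gammavariate(shape, scale)  # cell-to-cell receptor spread
        mean_bound = receptors * occupancy          # expected ligands bound per cell
        # Poisson sample via Knuth's inversion method (keeps the sketch dependency-free)
        limit, k, p = math.exp(-mean_bound), 0, 1.0
        while p > limit:
            p *= rng.random()
            k += 1
        counts.append(k - 1)
    return counts

rng = random.Random(0)
cells = simulate_bound_egf(1000, shape=4.0, scale=25.0, occupancy=0.8, rng=rng)
print(sum(cells) / len(cells))  # should land near shape * scale * occupancy = 80
```

The gamma-distributed receptor pool dominates the spread of the resulting counts; the Poisson layer matters most at the low stimulation levels where the paper reports the largest deviations between model and experiment.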
Deviations between simulation and experiment were largest for the distribution width at 0.1 nM QD-EGF, for which the coefficient of variation was measured to be 95% but predicted to be 65%, suggesting strong merit in empirical measurements at low physiological stimulation levels, likely deriving from intrinsic noise effects such as local fluctuations in ligand and receptor concentrations 39 , 41 . Fig. 4 Counting and simulating growth factor binding. a Representative z-projections of maximum intensity (left) and three-dimensional (3D) images (right) of cells treated with indicated quantum dot-epidermal growth factor (QD-EGF) concentration for 5 min on ice. b Number of QD-EGF bound per cell at indicated QD-EGF concentrations, showing two independent replicates with N ≥ 17 cells for each condition. c Distributions show the number of EGF per cell measured experimentally at the indicated QD-EGF concentration. Maximum likelihood estimation regressions of gamma distributions are shown as red lines and simulation results are shown as blue lines. For regression, p = 0.79, 0.88, and 0.85 for 0.1, 1, and 10 nM QD-EGF concentrations, respectively. For simulation, p = 0.24, 0.49, and 0.67 for 0.1, 1, and 10 nM QD-EGF concentrations, respectively. All p values were calculated using χ 2 tests. d EGF per cell is shown as experimental results (gray) and simulation results (blue) across different concentrations. Simulation results in d were obtained by sampling cells from the EGFR number gamma distribution (see Methods). e Representative z-projections of maximum intensity of breast cancer cell lines MCF-7, MDA-MB-231, and MDA-MB-468 in order of increasing EGFR expression. Cells were treated with 1 nM QD-EGF for 5 min on ice. Yellow arrow indicates a single QD-EGF bound to an MCF-7 cell. f Number of QD-EGF bound per cell for conditions in e with N ≥ 40 cells for each condition.
In a, e, QDs are shown in red and nuclei are blue; scale bars, 10 µm. In b, d, f, the box indicates 25/75th percentile; red lines are means; whiskers are s.d. QDC-3DM also allows absolute counting of EGF binding events on cells with widely ranging EGFR expression. Figure 4e shows images of three human breast cancer cell lines after a pulse of 1 nM QD-EGF on ice. An increase in the number of EGF bound is evident as EGFR expression increases from low (MCF-7; ~10 4 cell −1 ) 42 , 43 to medium (MDA-MB-231; ~10 5 cell −1 ) 42 , 43 to high (MDA-MB-468; ~10 6 cell −1 ) 42 . There were a mean of 1, 80, and 922 EGF per MCF-7, MDA-MB-231, and MDA-MB-468 cell, respectively (Fig. 4f), and a large fraction of MCF-7 cells were bound to 0 or 1 EGF (Fig. 4e, f), highlighting the necessity of single-molecule counting to assess absolute stimulation. Importantly, EGF binding was almost exclusive to EGFR, as EGF binding was reduced to <0.2% per cell after treatment with Cetuximab, which specifically blocks the EGFR binding site for EGF (Supplementary Figure 11) 44 . EGF spatial distribution. We demonstrate an example of how QDC-3DM provides empirical correlations between input stimulus and output signaling in single cells based on the localization of EGF, reflecting receptor internalization over time. As a prototypical receptor tyrosine kinase, EGFR undergoes phosphorylation on intracellular domains upon ligand binding, as well as internalization by endocytosis and intracellular trafficking, while signals propagate to downstream kinase cascades 45 . To correlate translocation across multiple cells, micro-contact printed hydrogel matrices were used to normalize adhesive footprints for cultured cells and ensure uniform distributions of size, shape, and organelle location 36 , 46 .
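For intuition on why bound EGF tracks receptor expression across these cell lines: at equilibrium the fraction of receptors occupied follows the Langmuir form f = c/(c + K_D), which scales bound counts linearly with receptor number at fixed concentration. The sketch below uses the measured K_D ≈ 3.1 nM and the order-of-magnitude receptor numbers quoted above; note the counts observed after a 5-min pulse on ice sit far below this equilibrium ceiling, since such a short cold pulse is kinetically limited:

```python
def fraction_occupied(conc_nm, kd_nm):
    """Equilibrium Langmuir occupancy f = c / (c + K_D)."""
    return conc_nm / (conc_nm + kd_nm)

KD_NM = 3.1  # measured QD-EGF affinity from the flow-cytometry isotherm

# Order-of-magnitude EGFR numbers per cell quoted for each line:
for line, receptors in [("MCF-7", 1e4), ("MDA-MB-231", 1e5), ("MDA-MB-468", 1e6)]:
    bound_at_eq = receptors * fraction_occupied(1.0, KD_NM)
    print(f"{line:>11}: ~{bound_at_eq:.0f} EGF bound at equilibrium (1 nM)")
```

At 1 nM roughly a quarter of receptors would be occupied at equilibrium, so the ten-fold steps between cell lines in the equilibrium estimate mirror the roughly ten-fold steps in the measured means (1, 80, 922), even though the absolute measured numbers are much smaller.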
Figure 5a shows representative images of individual cells at early (10 min) and late (30 min) stages after 5 min pulsed EGF stimulation, with EGF labeled in red, showing translocation from surface regions to perinuclear regions consistent with late endosomes or lysosomes over time. Example images also show inhibition of internalization by the EGFR-blocking drug gefitinib at two concentrations 47 . These 3D images of patterned cells can be reduced to 2D heat map averages across populations of cells (Fig. 5b ) and 1D projection histograms (Fig. 5c ) to depict the ensemble average of how receptor-ligand signaling events propagate spatially within cells. Notably EGFR was substantially redistributed across the cell with EGF treatment (Supplementary Figure 12 ). Fig. 5 Single-cell epidermal growth factor (EGF) binding correlates with single-cell receptor translocation and drug response. a Representative three-dimensional (3D) images of MDA-MB-231 cells after stimulation with quantum dot-EGF (QD-EGF) in the absence or presence of EGFR inhibitor gefitinib. Times after the start of a stimulation pulse are indicated. QDs are shown in red, nuclei are blue, and Alexa Fluor 488-conjugated fibronectin micropatterns are green. b Two-dimensional (2D) z -projections on xy fibronectin micropattern planes and c one-dimensional (1D) projections on x -axes indicate the localization of single EGF averaged across cells. d Representative image of cell membrane measured through fluorescence imaging of fluorescently labeled receptors (top) and membrane reconstruction using alpha shapes (bottom). e Correlation between EGF number and fraction of EGF internalized in individual cells at 10 and 30 min after the start of a QD-EGF stimulation pulse. f Western blots and g relative pEGFR abundance in MDA-MB-231 whole-cell lysates immediately after stimulation with QD-EGF in the presence of indicated gefitinib concentrations. 
Uncropped western blots with molecular weight markers are shown in Supplementary Figure 16 . h Fraction of EGF internalized in single cells at different gefitinib concentrations, 30 min after the start of a QD-EGF pulse. The box indicates 25/75th percentile; red lines are means; whiskers are s.d. i Coefficient of variation (CV) of the fraction of EGF internalized in h . j Number of EGF bound impacts the fraction of EGF internalized 10 min after the start of a QD-EGF pulse in the presence of gefitinib at 0, 51, and 5,100 nM concentration. The gray line shown in the 51 nM (middle) and 5100 nM (right) gefitinib plots is the linear fit for the 0 nM gefitinib condition (left). Data fits are shown in Supplementary Figure 14 . N = 20 and 12 cells for 10 and 30 min after QD-EGF stimulation onset without gefitinib, respectively; N = 12, 10, 20, 23, 12, and 14 cells for 30 min after QD-EGF stimulation onset in the presence of gefitinib at 0, 0.51, 5.1, 51, 510, and 5100 nM concentrations, respectively. All stimulation pulses used 1 nM EGF-QD for 5 min. All scale bars indicate 10 µm. Stimulation magnitude correlation with signaling. With these cell imaging tools, we can extract a wealth of single-cell signaling analytics that are both absolute in molecular number and spatially resolved across any cell across the stimulation distribution. EGF translocation metrics, in particular, are intrinsic signal propagation outputs of the analysis, which can be readily correlated to the distance from the membrane using 3D membrane maps that we reconstruct as surfaces using alpha shapes 48 derived from membrane labels in a separate QD channel (Fig. 5d and Supplementary Figure 13). We observed that in the absence of inhibitors, the number of internalized EGF after a 5 min pulse was linearly proportional to the number of EGF bound, with a slope near 1 and R 2 ≥ 0.99 (Supplementary Figure 14a). Trends are enhanced when plotted as fraction internalized in Fig.
5e , showing 10 and 30 min after stimulation. The y -axis spread in values indicates that the heterogeneity of internalization is greater at shorter time periods, while the differences in x-intercepts indicate the internalization rate. Between 10 and 30 min, the number of internalized EGF increases by ~120 independently of stimulation value between ~300 and ~2700. We also observed that the absolute number of QD-EGF per cell decreased from ~1300 to ~800 between 10 and 30 min after a 5 min pulse (Supplementary Figure 15a ) likely due to dissociation of QD-EGF from the receptors after the pulse, an outcome that can be directly probed with QDC-3DM. Stimulation impact on pharmacological inhibition Receptor internalization was attenuated by pharmacological EGFR inhibition in a dose-dependent manner that was coupled to EGF stimulation magnitude. Importantly, EGFR is a widely pursued proto-oncogene drug target, and blocking activation correlates both with inhibition of phosphorylation and translocation, which blocks downstream signals driving chemotaxis and proliferation 49 . Unfortunately, drugs targeting EGFR activation such as gefitinib have limited clinical efficacy in cancers such as triple-negative breast cancer, in which up to 76% of cases overexpress EGFR 49 . Western blot analysis of MDA-MB-231 cells exposed to 1 nM QD-EGF conjugates (1267 ± 788 EGF per cell) showed substantial receptor phosphorylation and a dose-dependent decrease in phosphorylation with increasing drug concentration (Fig. 5f–g and Supplementary Figure 16 ) with half-maximal inhibitory concentration (IC 50 ) of 195 nM. Figure 5h shows the dose dependence of EGFR internalization fraction, exhibiting a population-averaged potency of inhibition (IC 50 = 380 nM) similar to that of phosphorylation inhibition and slightly higher than the average equilibrium binding constant to the receptor K D (51 nM) 50 . 
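The population-averaged potencies quoted above can be placed on a standard Hill dose-response curve. The sketch below assumes a Hill slope of 1 (an assumption for illustration, not a fitted value) and uses the IC50 values from the text:

```python
def inhibition(conc_nm, ic50_nm, hill=1.0):
    """Fractional inhibition from a standard Hill dose-response curve."""
    if conc_nm == 0:
        return 0.0
    return 1.0 / (1.0 + (ic50_nm / conc_nm) ** hill)

# IC50 values quoted in the text: phosphorylation ~195 nM, internalization ~380 nM
IC50_PHOS, IC50_INTERNAL = 195.0, 380.0
for dose_nm in (51, 510, 5100):
    print(dose_nm, "nM gefitinib:",
          round(inhibition(dose_nm, IC50_PHOS), 2), "phos.,",
          round(inhibition(dose_nm, IC50_INTERNAL), 2), "internal.")
```

By construction the curve passes through 50% inhibition at the IC50, which makes the comparison with the gefitinib-EGFR K_D (51 nM) quoted above straightforward.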
At the single-cell level, an increase in drug concentration led to higher variability in response of individual cells (Fig. 5i ), with coefficient of variation monotonically increasing from 7.0 to 43% between 0 and 5.1 μM, an effect that has been widely reported for many classes of inhibitors 51 . Note that there was no significant difference in EGF binding for drug concentrations between 0 and 5.1 μM (Supplementary Figure 15b, c ). Figure 5j shows how cell-to-cell variability of EGFR inhibitor response derives substantially from the magnitude of EGF stimulation. At a drug concentration near the K D (51 nM), EGF internalization remained similarly proportional to EGF bound across all cells, but with shifted internalization fraction that was equally diminished in magnitude across the population (see also Supplementary Figure 14b ), suggesting uniform deactivation of membrane-localized EGFR. Moreover, these correlations demonstrated that excess stimulation was sufficient to overcome the biological effect of inhibition, with only 5% drug effect measured for 1500 EGF bound, compared with a 44% drug effect for 200 EGF bound. At 100-fold higher inhibitor concentration (5100 nM), internalization was further reduced, with 25% drug effect at 1500 EGF and 100% effect for 200 EGF bound. From these correlations it is apparent how stimulation can overcome signaling depletion thresholds imposed by inhibitors, and how heterogeneity arises from the proportionality between internalization and stimulation. By mapping the stimulation distribution ( x -axes in Fig. 5j ) to the internalization fraction slope, it can be seen that the low stimulation fraction, where slope is highest, has a dominant contribution to heterogeneity. The slope decreases for higher drug concentrations, so a greater percentage of the stimulation then contributes to the spread of drug effects, as the number of active receptors become depleted to yield a more stochastic population response. 
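The population-level dose-response and heterogeneity metrics above can be sketched as follows. This is a hedged illustration, not the published analysis: a one-site inhibition curve is fit to mean internalized fractions by a simple grid search, and the coefficient of variation is computed for single-cell fractions at one dose. All numbers are synthetic stand-ins.

```python
import numpy as np

# Hedged sketch of the dose-response analysis. The inhibition model, dose
# values, and single-cell fractions below are synthetic assumptions.
def inhibition(c, top, ic50):
    return top / (1.0 + c / ic50)

doses = np.array([0.51, 5.1, 51.0, 510.0, 5100.0])       # gefitinib, nM
means = inhibition(doses, top=0.60, ic50=380.0)          # toy mean fractions

# grid-search least squares for IC50 (top held fixed for simplicity)
grid = np.geomspace(10.0, 5000.0, 400)
errors = [np.sum((inhibition(doses, 0.60, g) - means) ** 2) for g in grid]
ic50_fit = grid[int(np.argmin(errors))]

cells = np.array([0.30, 0.45, 0.20, 0.55, 0.40])         # per-cell fractions
cv = cells.std(ddof=1) / cells.mean() * 100.0            # CV in percent
print(round(ic50_fit), round(cv, 1))
```

In practice a full nonlinear fit (e.g. with both top and IC50 free) would be used; the grid search keeps the sketch dependency-free.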
Discussion Counting individual growth factors using QDC-3DM requires a combination of molecular probe properties that is uniquely provided by QDs. Tuning emission to the infrared while retaining efficient blue excitation eliminates the vast majority of cellular autofluorescence, boosting the signal-to-noise ratio and increasing the accuracy of single-molecule identification from 74 to 99% (Fig. 2g ). In comparison, fluorescent dyes are too dim and do not provide the photostability needed to withstand continuous excitation during volumetric image acquisition of 100–200 z-planes (Fig. 2h, i ). QDs further provide a convenient means to internally calibrate spot intensities to discrete ligand numbers due to distinctive binary emission signatures of single QDs derived from on-and-off single-QD blinking (Fig. 3b, c ). However, these intensities measured in 3D only correlate well with discrete QD numbers when 3D images are deconvolved to boost spot intensities and compensate for light from outside the focal plane (Fig. 3a ). Blinking can impact each QDC-3DM step depicted in Fig. 1b , so the analytical performance can depend on both the QD photophysical properties and the image acquisition conditions. The primary interference is that QDs may transition between an “on” and “off” state during the image acquisition time window, so measured intensities can be intermediate between the two states. For the first step of QD time–trace acquisition to identify single QDs, this intermediate intensity could lead to misidentification of single QDs by an automated algorithm, particularly for QDs in the population with low on-time probabilities. We apply stringent criteria 18 so that single-QD exclusion is more common than inclusion of QD multiplets. QDs with higher on-time fractions may correlate with brighter QDs in the population 52 , which could propagate to a calibrated 3D QD intensity that is skewed toward higher values in QDC-3DM step 2.
The 3D intensities also depend on blinking in step 3, as some fraction of QDs will remain off over some of the 3D slices, contributing to the 3D intensity distribution width in Fig. 3a, b . Importantly, while off-time probabilities are largely independent of image acquisition conditions and QD structure, on-time probabilities can deviate due to a number of variables, particularly excitation intensity and QD surface passivation 53 , 54 , so different integration times may yield different relative intensities of specific QDs across a population with a distribution of blinking kinetics. For this reason, we used QDs and conditions for which deviations are expected to be minimal, using a thick insulating shell (4.7 monolayer (ML) CdZnS), low laser power (photon flux <10 mW cm −2 ), and short exposure time (~100 ms). Further increasing the shell thickness would reduce the number of QDs with low on-time probabilities and truncated on-time kinetics 55 , 56 . We found that the structure applied here provides an excellent fluorescence intensity together with a balanced physical size, yielding, together with the polymeric coating, a nanoparticle that is 12.6 nm in hydrodynamic diameter, comparable to common biological macromolecules such as antibodies. An increase in shell size to further reduce blinking could be offset by using a smaller core but at the expense of a wider emission band 18 , or by using a thinner coating, which could destabilize conjugates or lead to nonspecific binding. Efforts are underway to further optimize both nanocrystals and coatings to yield still smaller, brighter probes 57 , and to further exploit wavelengths deeper in the infrared where the Stokes shift is further increased and autofluorescence is further reduced 30 . Using QDC-3DM, it is now possible to directly measure biochemical input signals in single cells where previously only inference from output metrics was possible 8 .
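The single-QD identification logic discussed above can be caricatured with a two-state toy model. This is an assumption-laden sketch, not the stringent published criteria: a lone blinking QD occupies two intensity levels (off and on), while two co-localized QDs of equal brightness add a third.

```python
import numpy as np

# Toy model of QD blinking (an illustrative assumption, not the published
# algorithm): each QD is independently on or off per frame; intensities add.
rng = np.random.default_rng(1)

def blinking_trace(n_qds, frames=500, p_on=0.7, on_level=100.0, noise=3.0):
    states = rng.random((n_qds, frames)) < p_on          # on/off per frame
    return states.sum(axis=0) * on_level + rng.normal(0.0, noise, frames)

def occupied_levels(trace, on_level=100.0):
    # quantize frame intensities to emitter counts; count distinct levels
    return len(set(np.round(trace / on_level).astype(int)))

print(occupied_levels(blinking_trace(1)), occupied_levels(blinking_trace(2)))
```

Real traces complicate this picture with intermediate intensities from mid-frame transitions, which is why conservative exclusion criteria are preferable to level counting alone.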
Single-cell studies in both prokaryotic and eukaryotic cells have shown that most protein expression number distributions are described by a gamma distribution 37 , 38 , 40 . Likewise, we observe that single-cell growth factor binding distributions to MDA-MB-231 cells fit gamma distributions across three orders of magnitude of concentration (Fig. 4c ). From simulations, distribution widths derive primarily from receptor number distributions convolved with a lesser contribution from intrinsic noise of random binding. Importantly, the contribution from intrinsic noise becomes larger at lower ligand concentrations, implying that for experiments under such conditions that simulate relevant physiological tissue states 9 , stimulation magnitudes cannot be directly inferred from receptor numbers. Simulations matched the experimentally measured mean ligand number quite well, but experimental distributions were consistently wider by a small margin (Supplementary Table 1 ), likely deriving from a combination of uncertainty in kinetic rate constants and receptor number distributions, as well as distributions of receptor states involving oligomerization and inhomogeneous localization in membrane microdomains. Notably, autocrine stimulation of cells by secreted factors will be undetectable by this technique, so it is important to determine whether such contributions may confound the analysis of the biological system under study. While MDA-MB-231 cells do not secrete EGF, they do produce noncanonical ligands for EGFR, but at levels far lower than what would contribute during the brief pulsed experiments applied here 58 . QDC-3DM allows empirical mapping between quantized single-cell stimulation and single-cell signaling, enabling extraction of single-cell signaling metrics that are otherwise unobtainable.
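A gamma description of per-cell binding counts like the one above can be fit by the method of moments. The sketch below is illustrative, with synthetic counts: for a gamma distribution with shape k and scale θ, mean = kθ and variance = kθ², so θ = variance/mean and k = mean/θ.

```python
import numpy as np

# Hedged sketch: method-of-moments gamma fit to per-cell ligand counts.
# The shape/scale values and sample below are synthetic assumptions.
rng = np.random.default_rng(2)
counts = rng.gamma(shape=4.0, scale=300.0, size=5000)   # toy EGF per cell

mean, var = counts.mean(), counts.var()
theta_hat = var / mean          # scale estimate
k_hat = mean / theta_hat        # shape estimate
print(round(k_hat, 2), round(theta_hat, 1))
```

Maximum-likelihood fitting (e.g. `scipy.stats.gamma.fit`) gives tighter estimates on small samples, but the moment estimators make the mean-variance structure of the gamma model explicit.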
Because localization and translocation are core contributors to signaling 59 , we used receptor internalization as an easily measured physical corollary of ligand-induced signal propagation downstream of receptor activation by phosphorylation. We find that the correlation between internalized EGF and the number of EGF bound shifts uniformly with time across the cell population (Fig. 5e and Supplementary Figure 14a ). The stimulation distribution further modulates the response to the EGFR inhibitor gefitinib (Fig. 5j and Supplementary Figure 14b ), diminishing drug effects at high EGF stimulation, and mediating heterogeneity at low EGF stimulation. These outcomes suggest that the concentration of growth factors in cell culture medium and local concentrations within tissue microenvironments will dictate drug–response sensitivity and heterogeneity based on how stimulation distributions map to sensitivity curves. These observations are most relevant to human cancers that develop diverse mechanisms to dysregulate EGFR signaling, including overexpression of receptors and overproduction of ligands, resulting in a resistance to signaling inhibition by targeted drugs 60 , 61 , 62 . In conclusion, we developed a functional near-infrared QD suitable for single-molecule counting in autofluorescent cells, as well as a detailed methodology for absolute quantification of growth factors on single cells using 3D fluorescence microscopy. We applied this approach to count growth factors under physiologically relevant stimulation conditions spanning three log(10)-spaced stimulation magnitudes. As a microscopy-based assay, this technology is well suited for pairing with downstream analyses of signaling and phenotype through live-cell fluorescent protein imaging, immunofluorescence, fluorescence in situ hybridization, and high-content microfluidics, and can further be adapted to long-term tracking and steady-state stimulation experiments beyond the acute pulsed experiments used here. 
The combined capabilities of spatially registered signaling events through cellular micropatterning and highly multiplexed fluorescent color-coding using QDs can form the components of a toolbox for elucidation of signaling biology to connect individual molecular events to comprehensive cell response and population distributions. We expect that this toolbox can be applied to any peptide ligand and used broadly to provide a more comprehensive understanding of the origin of cell heterogeneity and drug effect variability. Methods Chemicals and reagents Cadmium acetate hydrate (Cd(Ac) 2 ·H 2 O, 99.99+%), mercury acetate (Hg(Ac) 2 , 99.999%), selenium dioxide (SeO 2 , ≥99.9%), selenium powder (Se, ~100 mesh, 99.99%), sulfur powder (S, 99.98%), octanethiol (OT, ≥98.5%), behenic acid (BAc, 99%), 1,2-hexadecanediol (HDD, 97%), tetramethylammonium hydroxide solution (TMAH, 25wt.% in methanol), N -methylformamide (NMF, 99%), N , N , N ′, N ′-tetramethylethylenediamine (TEMED, 99%), (3-aminopropyl)triethoxysilane (APTES, 99%), glutaraldehyde, sodium periodate (99%), and 2-azidoacetic acid (97%) were purchased from Sigma-Aldrich. Anhydrous cadmium chloride (CdCl 2 , 99.99%) and zinc acetate (Zn(CH 3 COO) 2 , 99.98%) were obtained from Alfa Aesar. 1-Octadecene (ODE, 90% tech.), oleylamine (OLA, 80–90% C18 content), oleic acid (OAc, 90% tech.), and hydrazine hydrate (55%) were purchased from Acros Organics. DBCO-sulfo-NHS ester was purchased from Click Chemistry Tools. Sodium bicarbonate and glycine were purchased from Thermo Fisher Scientific. Polydimethylsiloxane (PDMS) was purchased from Polysciences. Acrylamide and bisacrylamide were purchased from Bio-Rad. Glacial acetic acid (99.7%) was purchased from JT Baker. Solvents including chloroform, hexane, toluene, methanol, acetone, and diethyl ether were purchased from a variety of sources, including Acros Organics, Thermo Fisher Scientific, and Macron Fine Chemicals. All chemicals above were used as purchased. 
Dulbecco’s modified Eagle’s Medium (DMEM), fetal bovine serum (FBS), Hank’s balanced salt solution (HBSS), and cell culture-grade bovine serum albumin (BSA) were purchased from VWR. SAv was purchased from ProSpec. Biotinylated EGF, dye-EGF, Hoechst, Alexa Fluor 488 NHS Ester, Alexa Fluor 647-conjugated goat anti-mouse antibodies, and goat serum were purchased from Thermo Fisher Scientific. Paraformaldehyde (PFA, 32% v/v in water) was purchased from Electron Microscopy Sciences. Dimethyl sulfoxide (DMSO), fibronectin from human plasma, Accutase cell detachment solution, and Tris hydrochloride (Tris-HCl, 1 M) were purchased from Sigma. Biotinylated DNA was prepared by Integrated DNA Technologies. MemBrite Fix 640/660 Cell Surface Staining Kit was purchased from Biotium. Phosphate-buffered saline (PBS) was purchased from Corning. His-tag protein A and Cetuximab were purchased from BioVision. Mouse monoclonal immunoglobulin G (IgG) antibody against human EGFR (EGFR.1 clone) was purchased from BD Biosciences. EGF and rabbit monoclonal IgG antibody against EGFR used in western blotting were purchased from Abcam. Mouse monoclonal IgG antibody against phosphorylated EGFR was purchased from R&D Systems. Horseradish peroxidase-conjugated antibodies against mouse and rabbit IgG were ordered from Jackson ImmunoResearch Laboratories. Gefitinib (>99%) was purchased from LC Laboratories. Western blotting reagents including Tris, sodium chloride (NaCl), ethylenediaminetetraacetic acid (EDTA), Triton X-100, sodium dodecyl sulfate (SDS), deoxycholate, sodium fluoride (NaF), sodium metavanadate (NaVO 3 ), Tween-20, glycerol, bromophenol blue, and tris(2-carboxyethyl)phosphine (TCEP) were purchased from various sources including Sigma, Thermo Fisher Scientific, and Bio-Rad.
Synthesis of quantum dots QD cores composed of core/shell CdSe/Cd y Zn 1− y S (QD565 and QD605) or Hg x Cd 1 −x Se/Cd y Zn 1 − y S (QD744) were synthesized in-house 18 and coated with the multidentate polymer polyacrylamido(histamine- co -triethyleneglycol) (P-IM) or polyacrylamido(histamine- co -triethyleneglycol- co -azido-triethylene-glycol) (P-IM-N 3 ). These polymers yield particles with compact hydrodynamic diameter (7–12 nm) with nearly monomeric size distributions by gel permeation chromatography (>98%). The QDs are functionalized with azides for P-IM-N 3 coatings 23 . The QD565 and QD605 cores were reported in our previously published manuscript 23 , while QD744 was synthesized using the process described below. CdSe QDs with 3.2 nm diameter were prepared by a heat-up synthesis method and then exchanged with mercury to yield an alloyed Hg x Cd 1− x Se core. Cd(BAc) 2 (0.2 mmol), SeO 2 (0.2 mmol), HDD (0.2 mmol), and ODE (4 mL) were mixed in a 50-mL round bottom flask and dried under vacuum at ~100 °C for 1 h. The temperature was raised to 230 °C at a rate of ~20 °C min −1 under nitrogen gas and maintained at 230 °C for 15 min. The solution was then cooled to ~110 °C by removing the heating mantle, and the QDs were purified by dilution with chloroform (10 mL) containing OAc (1 mL) and OLA (0.6 mL), and precipitation with a mixed solvent of methanol (15 mL) and acetone (15 mL). The QDs were redispersed in hexane and extracted twice with methanol followed by precipitation with excess methanol. Finally, the QDs were dispersed in a chloroform solution containing OAc and OLA (20 mL, chloroform:OAc:OLA = 20:1:1 by volume). Mercury exchange was initiated by injecting a mercury stock solution (Hg(Ac) 2 in OLA, 0.1 M) into the CdSe solution at room temperature with vigorous stirring. The ratio between total Cd atoms in the CdSe QDs and the injected Hg cations was 1:2. The reaction was allowed to continue for 5 min and then quenched by adding excess OT (~20 eq.
to Hg 2+ ). Aliquots (0.2 mL) were collected before mercury addition and 3 min after OT addition, and absorption spectra were measured to analyze spectral shifts and extinction coefficient changes. The resulting Hg x Cd 1 − x Se QDs were purified by precipitation with a methanol/acetone mixture (50% v/v, ~30 mL) containing OAc (~0.2 mL) and OLA (~0.2 mL). The QDs were redispersed in chloroform (~15 mL) containing OAc (~0.2 mL) and OLA (~0.2 mL) and precipitated again by the addition of methanol/acetone (~30 mL). This dissolution–precipitation process was repeated three times to completely remove unreacted Hg(Ac) 2 and any reaction byproducts. Finally, the pure Hg x Cd 1 − x Se QDs with band edge absorption at ~640 nm were dispersed in hexane. A Cd x Zn 1 − x S shell was deposited epitaxially over the Hg x Cd 1 − x Se QD cores 23 . Purified QDs in hexane (~100nmol) were transferred to a 50-mL round bottom flask and the solvent was evaporated under nitrogen flow at 40–50 °C. The dried QDs were immediately redispersed in a mixed solvent of ODE (2 mL) and OLA (1 mL) containing sulfur precursor (S in ODE, 0.1 M) for the first 0.8 MLs of shell. The temperature was raised to ~120 °C under nitrogen and maintained at this temperature for 10 min. Then Cd x Zn 1 − x precursor ( x :1 − x mixture of Cd and Zn precursors, Cd(Ac) 2 and Zn(Ac) 2 in OLA, 0.1 M) in an equivalent mole quantity to the previous sulfur precursor was added dropwise while raising the temperature to ~130 °C. The reaction was allowed to proceed for 10 min at this temperature. This 0.8-ML shell growth cycle was repeated while controlling the composition ( x ) and raising the reaction temperature. Detailed reaction parameters for QD744 are summarized in Table 1 for a nanocrystal with band edge absorption wavelength of 702 nm, peak fluorescence emission of 744 nm with a full width at half maximum of 75 nm. 
Electron microscopy characterization as well as absorption and fluorescence emission spectra are shown in Supplementary Figure 5 . Table 1 Shell growth conditions for QD744 Full size table Polymer coating of QDs QD565, QD605, and QD744 (~18 µM) were coated with either P-IM or P-IM-N 3 using a two-step process 23 . First, QDs in hexane (0.5 mL) were purified by precipitation by mixing with chloroform (1.5 mL) and acetone (4 mL). The QD pellet was redispersed in hexane (4 mL) and extracted three times with methanol. The purified QDs (~2.5 µM, 3 mL) were mixed with NMF (2 mL) and TMAH solution (195 µL) in a glass vial and vigorously stirred for 1 h until all QDs transferred to the NMF phase. The transparent QD dispersion in NMF (1 nmol, ~280 µL) was diluted with DMSO (750 µL) in a glass vial equipped with a magnetic stir bar. P-IM or P-IM-N 3 dissolved in DMSO (11.3 mg mL −1 , ~159 µL) was added dropwise to the QDs while stirring. This mixture was then bubbled with nitrogen for 2 min and stirred for 2 h at 110 °C. The solution was then cooled to room temperature and the QDs were precipitated with the addition of ether (5 mL) and chloroform (2 mL). The QD pellet was dispersed in sodium borate buffer (50 mM, pH 8.5). Excess polymer was removed by filtration with a 50 kDa molecular weight cutoff (MWCO) Amicon ultra-centrifugal filter (Millipore), and finally dispersed in sodium borate buffer. Homogeneity and hydrodynamic size were analyzed through gel permeation chromatography, shown in Supplementary Figure 4a . Conjugation of QDs to EGF P-IM-N 3 -coated QD565, QD605, and QD744 were conjugated to DBCO-functionalized SAv by click-mediated triazole formation, and then conjugated to EGF through a single N-terminal biotin. QDs were conjugated to SAv using the following protocol 23 . SAv (180 µL, 0.5 mg mL −1 ) was first mixed with a 5-fold molar excess of DBCO-sulfo-NHS ester (1.6 µL, 5 mM in DMSO) and incubated on ice for 2 h. 
The reaction was quenched by dilution with a Tris-HCl (9 µL, 1 M) solution. Unreacted DBCO-sulfo-NHS ester was removed by filtration using a 0.5 mL Amicon centrifuge filter with 30 kDa MWCO. It was previously verified that these reaction conditions yielded nearly 1:1 conjugates between QDs and SAv 23 , further confirmed by nearly complete shifts of agarose gel electrophoresis bands of the QDs after SAv conjugation; bands further shifted after addition of biotinylated 90-mer single-stranded DNA, shown in Supplementary Figure 4b . The DNA sequence was 5′-Biotin/(T) 68 TAG CCA GTG TAT CGC AAT GAC G-3′. DBCO-SAv was then mixed with P-IM-N 3 -coated QDs at a 1:1 molar ratio (0.5 µM) at 4 °C for 12 h. Then, a 50-fold molar excess of 2-azidoacetic acid was added and unreacted reagents were removed by filtration with a 0.5 mL Amicon centrifuge filter with 100 kDa MWCO. QD-SAv was then conjugated to EGF-biotin by mixing EGF-biotin with QD-SAv at specific ratios to a final QD concentration of 0.2 µM in PBS at 4 °C for 4 h. Gel electrophoresis with a hybrid polyacrylamide (PA)-agarose gel (2% PA and 0.5% agarose) was used to characterize the conjugates 23 , 63 . To ensure that the conjugation between the QD-SAv and biotin-EGF was functionally monovalent, we varied the ratio of biotin-EGF:QD-SAv and observed that a dose response in cells followed a linear trend with increasing conjugation ratio until saturation (Fig. 2b ). Thus by choosing a biotin-EGF:QD-SAv of 0.33:1, well within the linear regime, we could ensure that the QD-EGF complex was largely monovalent. We have also verified that these QD-EGF conjugates are highly specific and functional (Supplementary Figures 1 – 2 ). Conjugation of QDs to IgG P-IM-coated QD605 was conjugated to a monoclonal IgG antibody against EGFR (EGFR.1 clone) through a protein A linker. Protein A contained a single his-tag, allowing rapid, efficient, and functional conjugation to QDs with P-IM coatings by metal chelation of the QD surface 23 . 
First, the QDs were mixed with a 4-fold molar excess of his-tag protein A in PBS at a QD concentration of 1 µM at room temperature for 2 h. Then, anti-EGFR IgG was added at a molar ratio of 4:1 IgG:QD in PBS to reach a QD concentration of 0.8 µM. The mixture was incubated at room temperature for 3 h and then stored at 4 °C until use. Thirty minutes prior to use, the IgG conjugates were diluted in serum-free, phenol red-free DMEM supplemented with 0.8% BSA. Fibronectin labeling Alexa Fluor 488-labeled fibronectin was prepared by mixing Alexa Fluor 488 NHS ester and fibronectin from human plasma (1 mg mL −1 , 1 mL) at a 10:1 molar ratio in 0.1 M sodium bicarbonate buffer (pH 8.3) at room temperature for 1 h in the dark. Unreacted dye was quenched by the addition of glycine (20 mM), followed by 10 min of incubation and purification using a MiniTrap Sephadex G-25 column (GE Healthcare) with PBS mobile phase. After purification, there was a mean 4.5 Alexa Fluor 488 molecules per fibronectin based on ultraviolet–visible absorption spectrophotometry. Immediately before use, Alexa Fluor488-labeled fibronectin (25 µg mL −1 , 1 mL) was oxidized with sodium periodate (3.5 mg mL −1 ) for 45 min at room temperature to form ketones. The oxidized protein solution was then filtered through a 0.2 µm syringe filter. Hydrogel substrate preparation PA hydrogels were fabricated on glass coverslips (18 mm, Thermo Fisher Scientific) 64 , 65 . First, coverslips were washed with ethanol and deionized water. Each coverslip was placed in a well of a 12-well plate and amine-functionalized with 1 mL APTES (0.5% v/v in deionized water) at room temperature for 3 min. Coverslips were then washed three times with deionized water, followed by 1 mL glutaraldehyde (0.5% v/v in deionized water) at room temperature for 30 min to generate aldehydes. 
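The dyes-per-fibronectin value reported above (a mean of 4.5 by ultraviolet-visible spectrophotometry) is conventionally obtained from a degree-of-labeling calculation. The sketch below uses the standard formula, but every number in it (absorbances, extinction coefficients, and the 280 nm correction factor) is an illustrative assumption, not a measured value from this work.

```python
# Hedged sketch of a standard degree-of-labeling (DOL) calculation.
# All constants and absorbances below are illustrative assumptions.
EPS_DYE_495 = 71000.0       # M^-1 cm^-1, Alexa Fluor 488 near its peak
EPS_PROTEIN_280 = 563000.0  # M^-1 cm^-1, assumed for the fibronectin dimer
CF_280 = 0.11               # dye's fractional absorbance at 280 nm

a495, a280 = 0.32, 0.70     # illustrative absorbances (1 cm path length)

dye_conc = a495 / EPS_DYE_495
protein_conc = (a280 - CF_280 * a495) / EPS_PROTEIN_280  # dye-corrected
dol = dye_conc / protein_conc
print(round(dol, 1))
```

The correction term subtracts the dye's own contribution at 280 nm before computing the protein concentration, which otherwise inflates the apparent protein amount and understates the DOL.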
A stock PA solution prepared by mixing acrylamide (25 mL, 20%) and bisacrylamide (4.9 mL, 2%) was passed through a 0.2 µm cutoff filter and degassed by bubbling with nitrogen. For each sample, ammonium persulfate (0.1%) and TEMED (0.1%) were added to the PA solution to initiate cross-linking. The PA solution (20 µL) was then sandwiched between the functionalized glass coverslip and a glass slide with a hydrophobic surface for 20 min. The hydrogel-coated glass coverslips were then detached from the glass slide and placed in wells of a 12-well plate. The hydrogel surfaces were treated with hydrazine hydrate for 2 h, rinsed with 5% glacial acetic acid for 1 h, and finally incubated in deionized water overnight. Immediately before use, PA hydrogels were dried at room temperature for 1.5 h and sterilized under ultraviolet light for 15 min. Micro-contact printing of fibronectin on hydrogels Microislands of fluorescent fibronectin were deposited by stamping onto PA hydrogels as 500 µm 2 rectangles with specific aspect ratios (5, 1.5, and 1). A PDMS stamp was fabricated by polymerization on a patterned master of photoresist (SU-8, MicroChem) coated on a silicon wafer by photolithography. PDMS stamps were cleaned with ethanol and sterile water immediately before use. Oxidized Alexa Fluor 488-labeled fibronectin (25 µg mL −1 , 150 µL) was then added to the top of the patterned PDMS stamp and allowed to adsorb for 30 min. Excess fibronectin solution was quickly removed under a nitrogen stream and the fibronectin-coated PDMS surface was immediately transferred to the dried PA hydrogel by stamping. The fibronectin-printed PA hydrogel was then submerged in PBS in a 12-well plate and was ready for cell seeding. Isolated QDs on glass coverslips P-IM-N 3 -coated QD744 in PBS (1 nM) were spin coated (2500 rpm, 30 s) onto #1.5 glass coverslips that were cleaned with ethanol, methanol, and acetone.
Unpatterned cells without QD treatment MCF-7 cells (ATCC, HTB-22), MDA-MB-231 cells (ATCC, HTB-26), or MDA-MB-468 cells (ATCC, HTB-132) (50,000 cells mL −1 , 0.5 mL) were cultured on Lab-Tek II eight-well chamber slides (Nunc) in phenol red-free DMEM supplemented with 10% FBS. After 8 h, the cells were starved overnight in serum-free, phenol red-free DMEM containing 0.8% BSA. The cells were then fixed with 4% PFA in PBS on ice for 15 min, washed three times with ice-cold PBS, and permeabilized with methanol on ice for 6 min. Cells were then stained with 1 µg mL −1 Hoechst at room temperature for 10 min and washed three times with PBS. Unpatterned cells treated with QD-EGF or dye-EGF Samples were prepared similarly to unpatterned cells without QD treatment with the following changes: after overnight starvation, the medium was removed and replaced with ice-cold serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing different concentrations of QD-EGF and/or dye-EGF. Cells were incubated on ice for 5 or 10 min, and then washed three times with ice-cold PBS and then fixed, permeabilized, and stained with Hoechst as described above. Unpatterned cells for EGF internalization assay Samples were prepared similarly to unpatterned cells without QD treatment with the following changes: after overnight starvation, the medium was removed and replaced with pre-warmed serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing different concentrations of QD-EGF, dye-EGF, or QD-SAv. Cells were incubated at 37 °C for 5 min, and then washed three times with pre-warmed serum-free, phenol red-free DMEM supplemented with 0.8% BSA. Cells were further incubated at 37 °C for 25 min, then fixed, permeabilized, and stained with Hoechst as described above. 
Patterned cells treated with QD-EGF To visualize the spatial localization of QD-EGF across multiple cells, cells were shaped to specific geometries by growth on islands using the micro-contact printing methodology described above. MDA-MB-231 cells (30,000 cells mL −1 , 1 mL) in phenol red-free DMEM supplemented with 10% FBS were seeded into each well of a 12-well plate containing coverslips with fibronectin patterned PA hydrogels. After 2.5 h, cells were starved in serum-free, phenol red-free DMEM supplemented with 0.8% BSA for 5 h. Cells were then treated with QD744-EGF (1 nM) in the same medium for 5 min. Cells were then washed three times with serum-free medium and maintained for specific time periods in serum-free medium. The cells were washed three times with ice-cold serum-free medium and incubated with QD605-IgG (20 nM) on ice for 6 min. Cells were washed three times with ice-cold PBS, fixed, and stained with Hoechst as described above. Patterned cells treated with QD-EGF and gefitinib Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were treated with different concentrations of gefitinib as indicated for 40 min in serum-free DMEM. The medium was removed and replaced with ice-cold serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing QD744-EGF and the same concentration of gefitinib for 5 min. Cells were then washed three times with serum-free medium and maintained in serum-free medium with the same concentration of gefitinib for the indicated time. The cells were then treated with the QD605-IgG membrane stain as described above for patterned cells treated with QD-EGF, and the remainder of that protocol was followed. Patterned cells treated with QD-EGF and Cetuximab Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were treated with Cetuximab (20 nM) as indicated for 1.5 h in serum-free DMEM.
The medium was removed and replaced with pre-warmed serum-free, phenol red-free DMEM supplemented with 0.8% BSA containing 1 nM QD744-EGF and the same concentration of Cetuximab for 5 min. Cells were then washed three times with serum-free medium and maintained in serum-free medium with the same concentration of Cetuximab for the indicated time. Cells were then fixed, permeabilized, and stained with Hoechst as described above. Patterned cells with membrane stain Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were stained with the MemBrite Fix Cell Surface Staining Kit following the manufacturer’s protocol. Briefly, cells were treated with pre-staining solution in HBSS at 37 °C for 5 min. Cells were then treated with staining solution diluted in HBSS (1:1000 dilution) at 37 °C for 5 min. The cells were washed three times with ice-cold serum-free medium, and treated with the QD605-IgG membrane stain as described above for patterned cells treated with QD-EGF. The remainder of that protocol was followed. Patterned cells with EGFR stain Samples were prepared similarly to patterned cells treated with QD-EGF with the following changes: after starvation, cells were washed three times with PBS, fixed, and permeabilized as described above. Cells were then blocked with 1% BSA in PBS at room temperature for 15 min and stained with mouse anti-EGFR antibody (1 μg mL −1 ) in 1% BSA at 4 °C overnight. After incubation, cells were washed three times with PBS and blocked with 1% BSA and 2% goat serum in PBS at room temperature for 15 min. Cells were then stained with Alexa Fluor 647-conjugated goat anti-mouse secondary antibody (1:300 stock dilution) and Hoechst (1 μg mL −1 ) at room temperature for 1 h. Western blot MDA-MB-231 cells (300,000 cells) were seeded in each well of a 6-well plate for 72 h in DMEM supplemented with 10% FBS. Cells were then starved in serum-free DMEM supplemented with 0.8% BSA for 5 h.
Serum-starved cells were then treated with gefitinib at the indicated concentrations for 40 min in serum-free DMEM containing 0.8% BSA. Cells were then stimulated with QD744-EGF (1 nM) in the presence of different concentrations of gefitinib for 5 min and washed three times with ice-cold Tris-buffered saline (TBS; 50 mM Tris, 150 mM NaCl, pH 7.5). Cells were lysed by treatment with radioimmunoprecipitation assay buffer (50 mM Tris, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, 0.1% SDS, 0.5% deoxycholate) supplemented with Halt Protease Inhibitor Cocktail (Thermo Fisher Scientific) and phosphatase inhibitors (50 mM NaF, 1 mM NaVO 3 ) on ice for 15 min. Cell lysates were collected after centrifugation for 15 min at 14,000 g at 4 °C; a small fraction was aliquoted for protein concentration measurement using the bicinchoninic acid assay. Protein concentrations for each sample were adjusted to ~0.9 mg mL −1 . Cell lysates were then mixed with 5× sample buffer (1 M Tris, pH 9, 10 g SDS, 12.5 mL glycerol, 100 µL 0.5 M EDTA, 50 mg bromophenol blue, 100 mM TCEP) to a final concentration of 1×, heated at 75 °C for 20 min, aliquoted, and stored at −80 °C until use. Samples were loaded into wells of an SDS-polyacrylamide gel; electrophoresis was performed, and gels were transferred to a polyvinylidene difluoride membrane (Immobilon-P membrane, Millipore). The membrane was washed three times with deionized water followed by Tween-20 (0.1%) in TBS for 5 min each. The membrane was then blocked with 5% milk and 0.1% Tween-20 in TBS for 1 h. The membrane was treated overnight at 4 °C with a solution of primary antibodies in 1% milk and 0.1% Tween-20 in TBS. Primary antibodies used were rabbit anti-EGFR (1:500 dilution), mouse anti-human pEGFR (1:250 dilution), and rabbit anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (1:1000 dilution; Cell Signaling).
Membranes were washed with 1% milk and 0.1% Tween-20 in TBS five times before incubation with horseradish peroxidase-conjugated secondary antibodies (anti-mouse or anti-rabbit, 1:5000 dilution) for 1 h. Membranes were again washed five times with 1% milk and 0.1% Tween-20 in TBS, and once with 0.1% Tween-20 in TBS, before bands were developed with enhanced chemiluminescence substrate (ECL, Thermo Fisher Scientific) and imaged on autoradiography film (Denville Scientific). Images were analyzed using ImageJ software (National Institutes of Health). The band intensities for pEGFR and EGFR were each divided by that of GAPDH; then, the pEGFR/GAPDH band intensity was divided by EGFR/GAPDH. The intensities were normalized to the sample treated with 1 nM QD-EGF without gefitinib to calculate the ratio of pEGFR to total EGFR under the different experimental conditions. Flow cytometry MDA-MB-231 cells were seeded in a T-75 cell culture flask in DMEM supplemented with 10% FBS and cultured until 90% confluence. Cells were washed once with PBS and treated with 5 mL Accutase at room temperature until fully detached from the surface. Accutase was removed by centrifugation for 5 min at 200 g and cells were washed once with ice-cold PBS containing 0.5% BSA and resuspended in the same medium at 3 × 10 6 cells mL −1 . Cell suspensions were then mixed in equal volume (25 μL) with ice-cold solutions of QD-EGF (0.06–120 nM; EGF:QD = 0.33) or dye-EGF (0.02–40 nM). Control samples to measure nonspecific binding were prepared identically but with 2 μM unlabeled EGF. The cells were incubated at 4 °C for 4 h with rocking, washed three times with ice-cold PBS containing 0.5% BSA, and resuspended in PBS. Fluorescence intensities of cells were measured with 488 nm laser excitation, a 685 LP dichroic mirror, and a 695/40 nm BP emission filter for QD-EGF, or 561 nm laser excitation and a 582/15 nm BP emission filter for dye-EGF.
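The band-ratio arithmetic described above for the western blots (each band normalized to the GAPDH loading control, pEGFR then divided by total EGFR, and all lanes expressed relative to the 1 nM QD-EGF lane without gefitinib) reduces to a few lines. This is a minimal Python sketch of the calculation only, not the ImageJ workflow; the intensities in the example are hypothetical.

```python
import numpy as np

def phospho_fraction(pegfr, egfr, gapdh, ref_index=0):
    """Normalized pEGFR/EGFR ratio per lane.

    Each band intensity is first divided by the GAPDH loading control,
    pEGFR is then divided by total EGFR, and all lanes are normalized to
    a reference lane (default: the first lane, taken here as the 1 nM
    QD-EGF sample without gefitinib). GAPDH cancels algebraically in the
    ratio but is kept to mirror the described steps.
    """
    pegfr = np.asarray(pegfr, dtype=float)
    egfr = np.asarray(egfr, dtype=float)
    gapdh = np.asarray(gapdh, dtype=float)
    ratio = (pegfr / gapdh) / (egfr / gapdh)
    return ratio / ratio[ref_index]
```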
Single cells were selected using a forward scatter width gate and a minimum of 10,000 single cells were measured for each condition. The percent of maximum EGF bound for each condition, P ( c ), was calculated using the following equation: $$P(c) = \frac{{\overline {I_{c,{\mathrm{tot}}}} - \overline {I_{c,{\mathrm{ns}}}} }}{{\overline {I_{c_{{\mathrm{max}}},{\mathrm{tot}}}} - \overline {I_{c_{{\mathrm{max}}},{\mathrm{ns}}}} }} \times 100,$$ (2) where \(\overline {I_{c,{\mathrm{tot}}}}\) and \(\overline {I_{c,{\mathrm{ns}}}}\) are the mean fluorescence intensities of cells treated with c concentration of QD-EGF or dye-EGF in the absence and presence of unlabeled EGF, respectively, and \(\overline {I_{c_{{\mathrm{max}}},{\mathrm{tot}}}}\) and \(\overline {I_{c_{{\mathrm{max}}},{\mathrm{ns}}}}\) are the mean fluorescence intensities of cells treated with the maximum concentration ( c max ) of QD-EGF or dye-EGF in the absence and presence of unlabeled EGF, respectively. The dissociation constant, K D , was calculated by fitting the QD-EGF or dye-EGF binding curve to the following equation using Prism (GraphPad Software): $$P(c) = \frac{{B_{{\mathrm{max}}} \cdot c}}{{K_{\mathrm{D}} + c}},$$ (3) where B max is the maximum percent of specific binding. 2D and 3D microscopy Fluorescence microscopy of isolated QDs and cells was performed using wide-field illumination on a Zeiss Axio Observer Z1 inverted microscope with a ×100 1.45 NA alpha Plan-Fluar oil immersion objective, 100 W halogen lamp illumination, 488 nm/100 mW OPSL laser, and 561 nm/40 mW diode laser units. Images were acquired using a Photometrics eXcelon Evolve 512 EMCCD camera through the Zeiss ZEN software. Excitation light was filtered using Semrock and Zeiss filters (G 365, BP 470 nm/40 nm, BP 482/18, BP 561/14 nm). Emission signals were filtered using Semrock bandpass filters (445/50, 525/50, 562/40, 600/37, and 732/68 nm).
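The saturation-binding analysis of Eqs. (2) and (3) above can be reproduced outside Prism; below is a Python sketch using scipy. The concentrations and intensities in the example are synthetic stand-ins for the flow cytometry data.

```python
import numpy as np
from scipy.optimize import curve_fit

def percent_bound(i_tot, i_ns):
    """Eq. (2): percent of maximum EGF bound. i_tot and i_ns are mean cell
    intensities without and with excess unlabeled EGF, ordered by
    concentration so the last entry corresponds to c_max."""
    i_tot, i_ns = np.asarray(i_tot, float), np.asarray(i_ns, float)
    return (i_tot - i_ns) / (i_tot[-1] - i_ns[-1]) * 100.0

def fit_kd(conc, p, p0=(100.0, 1.0)):
    """Eq. (3): one-site binding fit P(c) = Bmax*c/(KD + c).
    Returns (Bmax, KD), with KD in the units of conc."""
    hyperbola = lambda c, bmax, kd: bmax * c / (kd + c)
    (bmax, kd), _ = curve_fit(hyperbola, conc, p, p0=p0)
    return bmax, kd
```

Fitting synthetic data generated with a known K D recovers that value, which is a useful sanity check before applying the fit to measured percent-bound curves.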
Brightfield images were acquired using transmitted-light illumination (12 V, 100 W halogen lamp) with DIC prism III/0.55. Cellular autofluorescence spectrum measurement Cellular autofluorescence spectra were acquired with 488 nm excitation using two different instruments. For wavelengths between 530 and 727 nm, a Zeiss 710 confocal scanner on an Axio Observer Z1 inverted confocal microscope with a ×63 1.4 NA oil immersion objective and a tunable Mai-Tai Ti-Sapphire laser (Spectra Physics) with 488 nm laser excitation was used. Intensities were acquired using a QUASAR 34-channel spectral detector with 9.7 nm wavelength increments. For wavelengths above 727 nm, measurements were performed using the Zeiss Axio Observer Z1 inverted microscope described above using bandpass filters with one wavelength redundant with the confocal scanner to allow normalization of the data between the two instruments. Individual cells from samples prepared using Protocol 2 were imaged to collect autofluorescence intensity measurements at a specific emission wavelength, I AF ( λ em ), normalized to the detector sensitivity using the equation below: $$I_{{\mathrm{AF}}}(\lambda _{{\mathrm{em}}}) = \frac{{\overline {I_{{\mathrm{px}},{\mathrm{cell}}}(\lambda _{{\mathrm{em}}})} - \overline {I_{{\mathrm{px}},{\mathrm{b}}}(\lambda _{{\mathrm{em}}})} }}{{\mathop {\int }\nolimits_{\lambda _1}^{\lambda _2} {\mathrm{\Phi }}(\lambda ){\mathrm{d}}\lambda }},$$ (4) where \(\overline {I_{{\mathrm{px}},{\mathrm{cell}}}(\lambda _{\mathrm{em}})}\) is the mean pixel intensity on a cell at wavelength λ em , \(\overline {I_{{\mathrm{px}},{\mathrm{b}}}(\lambda _{{\mathrm{em}}})}\) is the mean pixel intensity of background (non-cell regions) at wavelength λ em , \(\mathop {\int }\nolimits_{\lambda _1}^{\lambda _2} {\mathrm{\Phi }}(\lambda ){\mathrm{d}}\lambda\) is the integrated quantum efficiency of the camera spanning the spectral channel bandwidth centered at wavelength λ em , and λ 1 and λ 2 are the lower
and upper cutoff of the emission bandwidth. Autofluorescence at each wavelength was normalized by dividing by I AF (562nm). Autofluorescence and single fluorophore intensities Unpatterned cell samples were prepared as described above and stained with EGF conjugates of three different QDs emitting at 565, 605, and 744 nm or with a dye. The cells were then imaged at three emission wavelengths (562, 600, and 732 nm) for QDs under otherwise identical conditions and instrument settings or imaged with 561 nm laser excitation and 600 nm emission for the dye. Single QDs/dye were identified using methods 18 in which videos of QD/dye spots were saved as TIFF stacks and imported into Matlab for QD/dye spot detection and single-QD/dye identification. QD/dye spot centroids ( x 0 , y 0 ) were obtained from images using the detection/estimation/deflation algorithm from the multiple-target tracing (MTT) algorithm of Sergé et al. 66 . Centroid locations were rounded to the closest integral pixel values, ([ x 0 ],[ y 0 ]), and an intensity histogram of a 3 × 3 pixel array centered at this position for the video was then fit to a sum of two functions, a Gaussian background (mean [ μ 1 ], standard deviation [ σ 1 ], and area [ a 1 ]) and a skewed Gaussian QD/dye signal (mean [ μ 2 ], standard deviation [ σ 2 ], area [ a 2 ], and skew factor [ r ]). Curve fits that satisfied previous criteria to distinguish single-QD/dye photophysical dynamics were used to identify single QDs/dyes, for which the intensity, I QD/dye ( λ em ), was determined as: $$I_{{\mathrm{QD}}/{\mathrm{dye}}}\left( {\lambda _{{\mathrm{em}}}} \right) = \frac{{\mu _2 - \mu _1}}{{{\mathrm{\Phi }}(\lambda _{{\mathrm{em}}})}},$$ (5) where Φ( λ em ) is the quantum efficiency of the camera at wavelength λ em . 
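Eq. (5) above recovers the on-state brightness of a single emitter from its blinking-intensity histogram. Below is a simplified Python sketch that fits a sum of two symmetric Gaussians (the paper fits a Gaussian background plus a skewed Gaussian signal) and returns the sensitivity-normalized separation of the two means; the trace in the example is synthetic rather than real video data.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_emitter_intensity(trace, phi=1.0, bins=100):
    """Simplified Eq. (5): on-state intensity of a blinking emitter.

    The intensity-trace histogram is fit to two Gaussians (off/background
    state and on state) and the background-subtracted mean separation,
    (mu2 - mu1)/phi, is returned, where phi is the camera quantum
    efficiency at the emission wavelength.
    """
    counts, edges = np.histogram(trace, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def two_gauss(x, a1, m1, s1, a2, m2, s2):
        g = lambda x, a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
        return g(x, a1, m1, s1) + g(x, a2, m2, s2)

    # Initial guesses: peak positions from the trace percentiles,
    # widths from a fraction of the overall spread.
    p0 = (counts.max(), np.percentile(trace, 10), np.std(trace) / 4,
          counts.max() / 2, np.percentile(trace, 90), np.std(trace) / 4)
    (a1, m1, s1, a2, m2, s2), _ = curve_fit(two_gauss, centers, counts,
                                            p0=p0, maxfev=20000)
    lo, hi = sorted([m1, m2])
    return (hi - lo) / phi
```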
Autofluorescence at a specific wavelength was calculated on the cell area for which there were no QDs, using the following equation: $$I_{{\mathrm{AF}}}(\lambda _{{\mathrm{em}}}) = \frac{{\mathop {\sum }\nolimits_{x = [x_0] - 1}^{[x_0] + 1} \mathop {\sum }\nolimits_{y = [y_0] - 1}^{[y_0] + 1} I(x,y,\lambda _{{\mathrm{em}}}) - \overline {I_{3 \times 3,{\mathrm{b}}}(\lambda _{{\mathrm{em}}})} }}{{{\mathrm{\Phi }}(\lambda _{{\mathrm{em}}})}},$$ (6) where I ( x , y , λ em ) is the intensity for pixel ( x , y ), \(\overline {I_{3 \times 3,{\mathrm{b}}}(\lambda _{{\mathrm{em}}})}\) is the mean 3 × 3 pixel intensity sum of background regions, and ([ x 0 ], [ y 0 ]) is the centroid of each 3 × 3 pixel array of autofluorescence. Deconvolution 3D volumetric stacks (250 nm z-spacing, 80–200 images) of QDs were deconvolved using AutoQuantX3 (Media Cybernetics). All stacks were deconvolved using the following settings: fixed point spread function (PSF), 60 iterations, and noise level low as recommended by Media Cybernetics. PSF images were experimentally acquired using fluorescent TetraSpeck microspheres (0.1 μm diameter; Thermo Fisher Scientific), and calculated using the PSF image processing tool in Zeiss ZEN software. Isolated QD intensity calibration Two stacks of images of isolated QDs on glass coverslips were collected in wide-field excitation mode: a time stack at a single z-focal plane (4000 images; 100 ms exposure time) and a 3D volumetric stack (250 nm z-spacing, 80 images; 100 ms exposure time). 3D z-stacks were deconvolved using AutoQuantX3.
Using custom Matlab codes, the deconvolved 3D intensity of each spot \(( {I_{{\mathrm{spot}}}^{3{\mathrm{DD}}}})\) was then calculated as the integrated intensity of a 3 × 3 × 11 voxel centered at the centroid position according to the following equation: $$I_{{\mathrm{spot}}}^{3{\mathrm{DD}}} = \mathop {\sum }\limits_{x = [x_0] - 1}^{[x_0] + 1} \mathop {\sum }\limits_{y = [y_0] - 1}^{[y_0] + 1} \mathop {\sum }\limits_{z = [z_0] - 5}^{[z_0] + 5} I(x,y,z) - \overline {I_{3\times3\times11,{\mathrm{b}}}},$$ (7) where [ x 0 ], [ y 0 ], and [ z 0 ] are the centroid positions rounded to the nearest pixel integer, I ( x , y , z ) is the intensity of a single pixel, and \({\overline {I_{3\times3\times11,{\mathrm{b}}}}}\) is the mean 3 × 3 × 11 voxel intensity sum of the background region. Using the same 2D spot ([ x 0 ], [ y 0 ]) centroid positions, 3 × 3 time-course intensities \(( {I_{{\mathrm{spot}}}^{2{\mathrm{D}}}})\) were calculated according to the following equation: $$I_{{\mathrm{spot}}}^{2{\mathrm{D}}}(t) = \mathop {\sum }\limits_{x = [x_0] - 1}^{[x_0] + 1} \mathop {\sum }\limits_{y = [y_0] - 1}^{[y_0] + 1} I(x,y,t).$$ (8) Using Matlab, all intensities for a spot were binned into a histogram composed of 100 bins. The intensity histogram was fitted by least-squares estimation to a Gaussian mixture model with 2–5 Gaussians, for which one was the background noise function corresponding to the off-state of QD blinking. To maximize the accuracy in fitting, we imposed the following fitting criteria: (1) correlation coefficient greater than or equal to 0.98 between the fit and data, (2) each Gaussian area contributes at least 8% of the total area, (3) maximum 75% overlap between any two Gaussians, and (4) maximum 20% difference in area between each Gaussian and its corresponding data region. For each spot, the number of Gaussians that yields the minimum AIC value was identified as optimal.
AIC was calculated according to the following equation: $${\mathrm{AIC}} = n_{{\mathrm{bin}}}{\mathrm{ln}}\left( {\frac{{\mathrm{RSS}}}{{n_{{\mathrm{bin}}}}}} \right) + 2(3n_{{\mathrm{Gauss}}} - 1),$$ (9) where n bin is the number of bins used to construct the intensity histogram, RSS is the residual sum of squares, and n Gauss is the number of Gaussians used to fit the intensity histogram. QDC-3DM methodology Two stacks of images of the QDs were collected in wide-field excitation mode: a time-stack at a single z-focal plane (600 images; 50 ms exposure time) and a 3D volumetric stack (250 nm z-spacing, 100–200 images; 50 ms exposure time). 3D z-stacks were deconvolved using AutoQuantX3. Deconvolved 3D images were then imported into Imaris (Bitplane), which has an automatic 3D detection algorithm (surface mode) to determine the centroid positions ( x 0 , y 0 , z 0 ) and intensity \(( {I_{{\mathrm{spot}}}^{3{\mathrm{DD}}}})\) of spots with a range of sizes. These spot data, the time-stack images, and the deconvolved 3D images were imported into Matlab and a custom script was used to calculate the number of QD-EGF per cell. [1] Single-QD identification : Spot positions ( x 0 , y 0 , z 0 ) were rounded to the nearest integer pixel values, ([ x 0 ], [ y 0 ], [ z 0 ]), and time-course intensities of the corresponding 2D spots, \(I_{{\mathrm{spot}}}^{2{\mathrm{D}}}(t)\) , were summed over a 3 × 3 voxel centered about the centroid positions ([ x 0 ], [ y 0 ]) at each time point using equation 8. Temporal intensities \(I_{{\mathrm{spot}}}^{2{\mathrm{D}}}(t)\) for each spot were binned into histograms and fit to a sum of two functions, a Gaussian background and a skewed Gaussian signal. Single QDs were identified from distribution fits that satisfied previous criteria to distinguish single-QD photophysical dynamics 18 .
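The model-selection rule of Eq. (9) above can be restated compactly in code. A minimal Python sketch, in which the parameter count 3 n Gauss − 1 follows the paper's convention; the histogram and fitted values used in the example are hypothetical:

```python
import numpy as np

def aic_histogram_fit(counts, fitted, n_gauss):
    """Eq. (9): AIC for an intensity-histogram fit with n_gauss Gaussians.

    counts: observed histogram bin counts; fitted: model values at the
    same bins. Each Gaussian carries 3 parameters (mean, width, area),
    with 3*n_gauss - 1 counted as free, as in Eq. (9).
    """
    counts = np.asarray(counts, float)
    fitted = np.asarray(fitted, float)
    n_bin = counts.size
    rss = np.sum((counts - fitted) ** 2)  # residual sum of squares
    return n_bin * np.log(rss / n_bin) + 2 * (3 * n_gauss - 1)
```

Fits with 2–5 Gaussians would each be scored this way, and the component count with the minimum AIC selected.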
[2] Single-QD intensity calibration : Deconvolved 3D spot intensities \(( {I_{{\mathrm{spot}}}^{3{\mathrm{DD}}}})\) for which spots correspond to single QDs \(( {I_{{\mathrm{spot}}}^{3{\mathrm{DD}}} = I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}})\) were averaged to calculate the mean single-QD intensity, $$\overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}} = \frac{1}{n}\mathop {\sum }\limits_{i = 1}^n I_{1{\mathrm{QD}}_i}^{3{\mathrm{DD}}},$$ (10) where n is the number of QDs identified as single. [3] Spot intensity calibration : The number of QDs within each deconvolved 3D spot, N QD,spot , for images collected under the same conditions and experimental set was then calculated as: $$N_{{\mathrm{QD}},{\mathrm{spot}}} = I_{{\mathrm{spot}}}^{3{\mathrm{DD}}} \cdot \left( {\overline {I_{1{\mathrm{QD}}}^{3{\mathrm{DD}}}} } \right)^{ - 1}$$ (11) For any 3D field of view, such as a single cell ( N QD,cell ) containing m spots, the total number of QDs can be calculated as the sum of QDs in each spot as: $$N_{{\mathrm{QD}},\,{\mathrm{cell}}} = \mathop {\sum }\limits_{i = 1}^m N_{{\mathrm{QD}},\,{\mathrm{spot}}_i}.$$ (12) Internalization fraction calculation Cell membranes were mapped from 3D images of QD605-IgG membrane stains using the Matlab alphaShape function by importing ([ x ], [ y ], [ z ]) coordinates of QD605-IgG with an alpha radius of 50. Spatial coordinates for QD605-IgG spots were obtained using the MTT detection/estimation/deflation algorithm 66 for each 2D image of a 3D z-stack spanning the entire cell thickness. For spots detected in the same ([ x ], [ y ]) positions across adjacent z-planes, [ z ] values were averaged. Nucleus ([ x nuc ], [ y nuc ], [ z nuc ]) coordinates were determined using Imaris. In Matlab, a vector was constructed connecting the nucleus and surface through each QD744-EGF spot centroid position ([ x QD ], [ y QD ], [ z QD ]) derived from the above deconvolved 3D images, with surface intersection coordinates ([ x surf ], [ y surf ], [ z surf ]).
An EGF spot was identified as internalized if it satisfied the following condition of relative distance from the surface: $$\left[ {\frac{{\left( {x_{{\mathrm{QD}}} - x_{{\mathrm{nuc}}}} \right)^2 + \left( {y_{{\mathrm{QD}}} - y_{{\mathrm{nuc}}}} \right)^2 + \left( {z_{{\mathrm{QD}}} - z_{{\mathrm{nuc}}}} \right)^2}}{{\left( {x_{{\mathrm{surf}}} - x_{{\mathrm{nuc}}}} \right)^2 + \left( {y_{{\mathrm{surf}}} - y_{{\mathrm{nuc}}}} \right)^2 + \left( {z_{{\mathrm{surf}}} - z_{{\mathrm{nuc}}}} \right)^2}}} \right]^{1/2} \le 0.8.$$ (13) The fraction of EGF internalized ( f ) was then calculated using the following equation for a cell in which there are n spots internalized. $$f = \frac{1}{{N_{{\mathrm{QD}},{\mathrm{cell}}}}}\mathop{\sum }\limits_{i = 1}^n N_{{\mathrm{QD}},{\mathrm{spot}}_i}.$$ (14) Membrane stain analysis To evaluate the accuracy of the QD membrane stain, cells were co-stained with MemBrite according to Protocol 7. Volumetric images of membranes were collected using a Zeiss 710 confocal scanner on an Axio Observer Z1 inverted microscope with a ×63 1.4 NA oil immersion objective with 250 nm z-spacing and 640/660 nm excitation/emission bands. The cell membranes at each z-plane of the confocal images were then manually segmented to serve as the membrane standard against which the accuracy of membrane maps obtained from QD605-IgG membrane stains and alpha-shape analysis of epifluorescence images, as described above for the internalization fraction, was calculated. Differences in distances between the two cell membrane maps were calculated for each pixel of the confocal-derived membrane and plotted in 3D using Matlab. 2D and 1D projections of EGF localization Cells grown on micro-contact printed surfaces have the same adhesion shapes, which can be observed using Alexa Fluor 488-labeled fibronectin. The fluorescent adhesion patterns were aligned using a custom Matlab code. The EGF locations were transformed similarly and projected either onto a 2D surface or 1D line.
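Eqs. (10)–(14) above amount to a short numerical recipe: calibrate spot intensities against the mean single-QD intensity, convert every spot to a copy number, and weight the internalization criterion by those copy numbers. A Python sketch of that recipe follows (the paper's implementation is in Matlab; the geometry and intensities in the example are synthetic):

```python
import numpy as np

def qd_counts(spot_intensities, single_qd_intensities):
    """Eqs. (10)-(12): per-spot QD numbers and the per-cell total.

    single_qd_intensities are deconvolved 3D intensities of spots whose
    blinking statistics identified them as single QDs; their mean is the
    calibration factor for all spots in the same experimental set.
    """
    mean_single = np.mean(single_qd_intensities)                    # Eq. (10)
    n_per_spot = np.asarray(spot_intensities, float) / mean_single  # Eq. (11)
    return n_per_spot, float(n_per_spot.sum())                      # Eq. (12)

def internalized_fraction(qd_pos, surf_pos, nuc_pos, n_per_spot, cutoff=0.8):
    """Eqs. (13)-(14): copy-number-weighted fraction of internalized EGF.

    qd_pos: (m, 3) spot centroids; surf_pos: (m, 3) membrane intersections
    of the nucleus-to-spot rays; nuc_pos: (3,) nucleus centroid. A spot is
    internalized when its distance from the nucleus is <= cutoff times the
    nucleus-to-membrane distance along the same ray.
    """
    nuc = np.asarray(nuc_pos, float)
    d_qd = np.linalg.norm(np.asarray(qd_pos, float) - nuc, axis=1)
    d_surf = np.linalg.norm(np.asarray(surf_pos, float) - nuc, axis=1)
    n_per_spot = np.asarray(n_per_spot, float)
    inside = (d_qd / d_surf) <= cutoff                              # Eq. (13)
    return float(n_per_spot[inside].sum() / n_per_spot.sum())       # Eq. (14)
```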
EGF-binding simulation EGF-EGFR binding kinetics on a population of cells with heterogeneous EGFR expression was modeled using a Matlab code. The EGF-EGFR kinetic model involves three processes: association, dissociation, and internalization. Three differential equations were used to solve for the concentration of free receptor [EGFR]( t ), ligand–receptor complexes [EGF|EGFR]( t ), and internalized complexes [EGF|EGFR] int ( t ): $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGFR}}} \right]\left( t \right) = - k_{{\mathrm{on}}} \cdot \left[ {{\mathrm{EGF}}} \right]\left( t \right) \cdot \left[ {{\mathrm{EGFR}}} \right]\left( t \right) + k_{{\mathrm{off}}} \cdot \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right](t),$$ (15) $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right]\left( t \right) = k_{{\mathrm{on}}} \cdot \left[ {\mathrm{EGF}} \right]\left( t \right) \cdot \left[ {\mathrm{EGFR}} \right]\left( t \right) - (k_{{\mathrm{off}}} + k_{{\mathrm{int}}}) \cdot \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right](t),$$ (16) $$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right]_{{\mathrm{int}}}\left( t \right) = k_{{\mathrm{int}}} \cdot \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right](t),$$ (17) where k on , k off , and k int are kinetic rate constants for ligand–receptor association, ligand–receptor dissociation, and ligand–receptor internalization, respectively, provided in Tables 2 and 3 . Table 2 EGF-EGFR kinetic rate parameters at 37 °C Table 3 EGF-EGFR kinetic rate parameters at 4 °C Because experiments were performed in a large medium volume ( V cell ~16.7 nL extracellular volume per cell compared to ~1.7 pL intracellular volume for ~15 µm spherical cells), EGF concentration is approximately constant and equal to the initial value [EGF] 0 , which was 0.03, 0.3, 3, or 30 nM, corresponding to 0.1, 1, 10, or 100 nM of QD with QD:EGF = 3:1.
$$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}\left[ {{\mathrm{EGF}}} \right]\left( t \right) = 0;\left[ {{\mathrm{EGF}}} \right]\left( t \right) = \left[ {{\mathrm{EGF}}} \right]_{\mathrm{0}}.$$ (18) The discrete steady-state population distribution of active EGFR copy number per cell ( N R ) is approximated as a gamma distribution 37 , 38 , 40 , for which: $$p\left( {N_{\mathrm{R}}} \right) = \frac{{N_{\mathrm{R}}^{a - 1}e^{ - N_{\mathrm{R}}/b}}}{{\Gamma (a)b^a}}.$$ (19) Here Γ is the gamma function, a is the inverse of noise \(( {\overline {N_{\mathrm{R}}} ^2 \cdot \sigma ^{ - 2}})\) that defines the distribution shape, and b is the Fano factor \(( {\sigma ^2 \cdot \overline {N_{\mathrm{R}}} ^{ - 1}})\) that defines the scale, or translational burst size. \(\overline {N_{\mathrm{R}}}\) and σ are the mean and standard deviation of the protein number distribution, respectively. The average number of active receptors per cell is \(\overline {N_{\mathrm{R}}} = 100,000\,{\mathrm{cell}}^{ - 1}\) based on the average EGFR number per MDA-MB-231 cell (200,000), of which ~50% are on the membrane 42 , 69 . Based on previous quantification of EGFR on MDA-MB-231 cells by flow cytometry using antibody fragments, we use a = 3.34 70 . The rate equations were then solved for $$N_{{\mathrm{EGF}}}(N_{\mathrm{R}}) = \left( {\left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right] + \left[ {{\mathrm{EGF}}|{\mathrm{EGFR}}} \right]_{{\mathrm{int}}}} \right) \cdot V_{{\mathrm{cell}}} \cdot N_{\mathrm{A}},$$ (20) where N A is Avogadro’s number. N EGF is solved for each discrete N R to yield the average number of EGF ligands for each discrete cell, \(\overline {N_{{\mathrm{EGF}},{\mathrm{R}}}}\) . Each average is then spread by a Poisson distribution to account for intrinsic noise 39 as $$p\left( x \right) = {\mathrm{e}}^{ - \bar x}\frac{{\bar x^x}}{{x!}},$$ (21) where \(\bar x = \overline {N_{{\mathrm{EGF}},{\mathrm{R}}}}\) and x = N EGF,R .
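Eqs. (15)–(21) can be condensed into a short simulation: integrate the three rate equations at constant [EGF] (Eq. (18)), convert the bound concentrations to copy numbers (Eq. (20)), and draw receptor numbers from the gamma distribution of Eq. (19). The Python sketch below follows that structure; the rate constants used in the example are placeholders, not the values of Tables 2 and 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

N_A = 6.022e23      # Avogadro's number (mol^-1)
V_CELL = 16.7e-9    # extracellular volume per cell (L), as in the text

def egf_bound(n_r, egf0, k_on, k_off, k_int, t_end):
    """Eqs. (15)-(18) and (20): EGF copies bound (surface + internalized)
    per cell with n_r receptors after t_end seconds at constant
    [EGF] = egf0 (M). Rate constants in M^-1 s^-1 (k_on) and s^-1."""
    def rhs(t, y):
        r, c, c_int = y  # molar concentrations referred to V_CELL
        return [-k_on * egf0 * r + k_off * c,           # Eq. (15)
                k_on * egf0 * r - (k_off + k_int) * c,  # Eq. (16)
                k_int * c]                              # Eq. (17)
    r0 = n_r / (V_CELL * N_A)
    sol = solve_ivp(rhs, (0.0, t_end), [r0, 0.0, 0.0],
                    rtol=1e-9, atol=1e-18)
    _, c, c_int = sol.y[:, -1]
    return (c + c_int) * V_CELL * N_A                   # Eq. (20)

def sample_receptors(n_cells, mean_r=100_000, a=3.34, rng=None):
    """Eq. (19): receptor numbers from a gamma distribution with shape a
    (inverse noise) and scale b = mean_r / a (the Fano factor)."""
    rng = np.random.default_rng(rng)
    return rng.gamma(a, mean_r / a, size=n_cells)
```

The Poisson spread of Eq. (21) can be layered on top with `numpy.random.Generator.poisson` applied to each cell's mean bound number.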
Then, each p ( x ) = p ( N EGF,R ) is scaled by p ( N R ) and summed across N R to generate the N EGF distribution. For Fig. 4c , the complete cell population was simulated. For Fig. 4d , individual cells were sampled from the N R distribution in the same number as those in the experimental data, and then used to calculate the number of EGF bound by sampling the Poisson distribution spread of kinetic binding. The statistical difference between the N EGF distributions of experiment and simulation was calculated using the Mann–Whitney U test. Instrumentation Cell and QD imaging was performed using a Zeiss Axio Observer Z1 inverted microscope for wide-field illumination in the Smith Lab, or a Zeiss 710 confocal scanner on an Axio Observer Z1 inverted microscope in the Carl R. Woese Institute for Genomic Biology core facility at the University of Illinois. Gel electrophoresis for QDs and QD conjugates was performed using an EPS-300X system (C.B.S. Scientific Company, Inc.). Gel images were collected using a Bio-Rad Molecular Imager Gel Doc XR system. Gel electrophoresis for western blot was performed using a Bio-Rad Mini-PROTEAN Tetra cell. Western blotting was carried out using a Bio-Rad Criterion Blotter and films were imaged using a Konica SRX-101A film processor. Flow cytometry data were acquired using a BD Biosciences LSR Fortessa Cytometry Analyzer equipped with 488 and 561 nm lasers in the Roy J. Carver Biotechnology Center at the University of Illinois. Absorption spectra of QDs were acquired using an Agilent Cary 5000 UV–Vis–NIR spectrometer. All measurements were carried out within the dynamic range of the instrument (absorbance < 4) in the entire spectral range. Fluorescence spectra of QDs using 491 nm excitation were acquired using a Horiba NanoLog spectrofluorometer. Raw fluorescence signal was adjusted for the wavelength-dependent detector sensitivity and excitation power fluctuations.
Electron microscopy images were acquired using a JEOL 2010 LaB6 high-resolution microscope in the Frederick Seitz Materials Research Laboratory Central Research Facilities at the University of Illinois. Hydrodynamic sizes of QDs were measured via an ÄKTApurifier UPC10 (GE Healthcare) with a Superose™ 6 10/300 GL column (GE Healthcare), controlled using the UNICORN 5.31 Workstation software. Photolithography was performed using a Karl Suss MJB3 Mask Aligner in the Micro and Nanotechnology Laboratory at the University of Illinois. Statistical information Except where otherwise noted, values are reported as mean ± standard deviation (s.d.). Statistical significance analyses were calculated using the two-tailed Mann–Whitney test in Origin Pro 9.1. A statistically significant value was denoted with an asterisk (*) for p < 0.05. χ 2 goodness-of-fit tests were performed using a built-in function in Matlab. Code availability All codes used in this study are available from the corresponding author upon reasonable request. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
Whether healthy or diseased, human cells exhibit behaviors and processes that are largely dictated by growth factor molecules, which bind to receptors on the cells. For example, growth factors tell cells to divide or move, and when to die (a process known as apoptosis). When growth factor levels are too high or too low, or when cells respond irregularly to their directions, many diseases can result, including cancer. "It is believed that cells respond to growth factors at extreme levels of sensitivity," said University of Illinois at Urbana-Champaign Bioengineering Associate Professor Andrew Smith. "For example, a single molecule will result in a major change in cell behavior." In a recent paper published in Nature Communications, Smith reported the invention of a new technology platform that digitally counts, for the first time, the number of growth factor molecules entering an individual cell. Prior to this, researchers inferred growth factor binding based on how the receiving cells responded when the growth factor molecules were introduced. "We showed the first direct cause-and-effect relationships of growth factors in single cells," he said. "We expect the outcomes to lead to a new understanding of cell signaling, how cells respond to drugs, and why cell populations become resistant to drugs, particularly toward improved treatments for cancer." Smith's technology platform tags each growth factor with a single engineered (10 nanometer) infrared fluorescent quantum dot, which can then be viewed using a three-dimensional microscope. In their study, they counted how many epidermal growth factor (EGF) molecules bound to human triple-negative breast cancer cells that were pre-patterned on island-like surfaces. EGF molecules typically signal cell division and lead to tissue growth. Numerous cancers have mutations in their EGF receptors.
"We used quantum dots as the fluorescent probe because they emit a lot more light compared to other conventional fluorescent probes such as organic dyes, and we can tune their wavelengths by changing their chemical composition," said Bioengineering doctoral student Phuong Le, the lead author of the paper. "In our study, we demonstrated that quantum dots emitting light in the near-infrared wavelength allowed the most accurate counting of growth factors binding to cells." According to Le, the team also treated the breast cancer cells with quantum dot-tagged EGF in the absence and presence of pharmaceutical drugs that inhibit EGF signaling in cells. "We found that the amount of EGF binding is inversely proportional to drug efficacy," Le said. "This finding is significant as it means that signaling molecules present in the cancer cells' tumor—a place where signaling molecules are often misregulated—can enhance the cancer cells' resistance to pharmaceutical agents."
10.1038/s41467-019-08754-5
Nano
Iron oxide nanoparticles for medical applications: Study clarifies effect of microstructure on magnetic properties
Stefan Neumann et al, Influence of the hierarchical architecture of multi-core iron oxide nanoflowers on their magnetic properties, Scientific Reports (2023). DOI: 10.1038/s41598-023-31294-4 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-023-31294-4
https://phys.org/news/2023-04-iron-oxide-nanoparticles-medical-applications.html
Abstract Magnetic properties of superparamagnetic iron oxide nanoparticles are controlled mainly by their particle size and by their particle size distribution. Magnetic properties of multi-core iron oxide nanoparticles, often called iron oxide nanoflowers (IONFs), are additionally affected by the interaction of magnetic moments between neighboring cores. The knowledge about the hierarchical structure of IONFs is therefore essential for understanding the magnetic properties of IONFs. In this contribution, the architecture of multi-core IONFs was investigated using correlative multiscale transmission electron microscopy (TEM), X-ray diffraction and dynamic light scattering. The multiscale TEM measurements comprised low-resolution and high-resolution imaging as well as geometric phase analysis. The IONFs contained maghemite with the average chemical composition \(\gamma\) -Fe \(_{2.72\pm 0.02}\) O \(_4\) . The metallic vacancies located on the octahedral lattice sites of the spinel ferrite structure were partially ordered. Individual IONFs consisted of several cores showing frequently a specific crystallographic orientation relationship between direct neighbors. This oriented attachment may facilitate the magnetic alignment within the cores. Individual cores were composed of partially coherent nanocrystals having almost the same crystallographic orientation. The sizes of individual constituents revealed by the microstructure analysis were correlated with the magnetic particle sizes that were obtained from fitting the measured magnetization curve by the Langevin function. Introduction In recent decades, magnetic iron oxide nanoparticles (IONPs) have emerged as one of the most promising nanomaterials for biomedical applications, for example as heat mediator for hyperthermia cancer treatment 1 , as carrier for drug delivery 2 or as contrast agent in magnetic resonance imaging 3 . 
The manifold applications of IONPs arise from a combination of excellent properties including superparamagnetic behavior, high saturation magnetization, good biocompatibility and the possibility to functionalize IONPs by attaching various bioactive molecules. IONPs usually consist of magnetite (Fe \(_3\) O \(_4\) ) and/or maghemite ( \(\gamma\) -Fe \(_2\) O \(_3\) ), which crystallize in a spinel-like structure with tetrahedrally and octahedrally coordinated iron cations. Magnetite (space group \(Fd{\bar{3}}m\) ) accommodates Fe \(^{2+}\) and Fe \(^{3+}\) cations on the Wyckoff positions 8 b and 16 c , respectively 4 . This distribution of the cations guarantees charge neutrality. However, in contrast to magnetite, some octahedral iron sites in maghemite must stay vacant to preserve the chemical composition Fe \(_2\) O \(_3\) that corresponds to Fe \(_{2.67}\) O \(_4\) in the spinel-like crystal structure. The oxygen sublattice is still fully occupied. It has been shown that the Fe vacancies tend to order, which leads to the formation of different crystal structures of \(\gamma\) -Fe \(_2\) O \(_3\) . The crystal structure of \(\gamma\) -Fe \(_2\) O \(_3\) with randomly distributed vacancies can still be described as a simple cubic spinel with the space group \(Fd{\bar{3}}m\) 5 . \(\gamma\) -Fe \(_2\) O \(_3\) with vacancies partially ordered only on one of two distinct octahedral sites was described in the space group \(P4_332\) 6 , \(\gamma\) -Fe \(_2\) O \(_3\) with vacancies partially ordered on one of three distinct octahedral sites in the tetragonal space group \(P4_32_12\) but with almost identical lattice parameters a and c 7 . \(\gamma\) -Fe \(_2\) O \(_3\) with fully ordered vacancies was described as a tetragonal superstructure in the space group \(P4_12_12\) with \(c\approx 3a\) 8 . Vacancy ordering and the tetragonal distortion of the cubic spinel unit cell were originally reported for ‘microcrystalline’ \(\gamma\) -Fe \(_2\) O \(_3\) . 
However, the same phenomena were also observed in IONPs 9 , 10 , 11 . The chemical composition (the [Fe]/[O] ratio) and related ordering of vacancies influence the magnetic properties of IONPs. These magnetic properties depend strongly on the fractions of Fe \(_3\) O \(_4\) and \(\gamma\) -Fe \(_2\) O \(_3\) 12 , 13 , 14 , because Fe \(_3\) O \(_4\) shows a higher saturation magnetization than \(\gamma\) -Fe \(_2\) O \(_3\) 15 . The size of IONPs is another important factor affecting their magnetic properties. When it decreases below a certain threshold value, IONPs become superparamagnetic 16 as required for many biomedical applications 17 , 18 , 19 , 20 . The size threshold value is around 25 nm for Fe \(_3\) O \(_4\) and 30 nm for \(\gamma\) -Fe \(_2\) O \(_3\) 21 . Therefore, the size of IONPs needs to be tailored for the respective application in order to obtain the best possible combination of properties. However, when IONPs are significantly smaller, their saturation magnetization is reduced by a disorder of the spins either in the interior of the IONPs or in their surface layer. The spin disorder in the interior of the IONPs was explained by inhomogeneous ordering of the cation vacancies 22 . The spin disorder in the surface layer of IONPs is usually explained by the incomplete coordination of superficial iron ions and the likely occurrence of structural defects at the IONP rim 23 , 24 , 25 . At 300 K, a thickness of the disordered spin layer of 0.54 nm was reported by Sharifi Dehsari et al. 26 , whereas a thickness of 1 nm was reported by Millan et al. 25 (for IONPs larger than 3 nm). Furthermore, the different [Fe]/[O] ratio in magnetite and maghemite is a reason for their different oxidation stability. Under aerobic conditions, maghemite is much more stable than magnetite 27 . Thus, the exact phase composition and distribution of Fe \(_3\) O \(_4\) and \(\gamma\) -Fe \(_2\) O \(_3\) can vary, in particular, if IONPs are in contact with oxygen.
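The [Fe]/[O] bookkeeping above is easy to verify numerically. The sketch below treats an intermediate composition Fe \(_x\) O \(_4\) as a linear mixture of the magnetite (x = 3) and maghemite (x = 8/3) end members; this two-phase interpolation is a common illustration, not necessarily the quantification used in the paper.

```python
# Spinel formula unit Fe_x O_4: one tetrahedral and two octahedral cation sites.
MAGHEMITE_X = 8.0 / 3.0   # gamma-Fe2O3 rewritten on an O4 basis: Fe_{2.667}O4
MAGNETITE_X = 3.0

def octahedral_vacancy_fraction(x):
    """Fraction of the two octahedral sites left vacant for Fe_x O_4,
    assuming (as in maghemite) all vacancies sit on octahedral sites."""
    return (MAGNETITE_X - x) / 2.0

def magnetite_fraction(x):
    """Magnetite mole fraction under a linear magnetite/maghemite
    two-phase mixing assumption (an illustration only)."""
    return (x - MAGHEMITE_X) / (MAGNETITE_X - MAGHEMITE_X)
```

For the composition \(\gamma\) -Fe \(_{2.72}\) O \(_4\) reported in the abstract, this interpolation gives a magnetite fraction of about 0.16 and an octahedral vacancy fraction of 0.14.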
While a full oxidization of the iron oxide to \(\gamma\) -Fe \(_2\) O \(_3\) was observed for smaller particles 28 , IONPs with intermediate sizes were found to contain non-stoichiometric Fe \(_{\langle 3-\delta \rangle }\) O \(_4\) with \(2.667<\langle 3-\delta \rangle <3\) 12 , 28 . Large IONPs are generally assumed to have a core/shell structure with a Fe \(_3\) O \(_4\) core and an oxidized shell 12 , 13 , 28 , 29 , 30 , 31 . Recently, multi-core IONPs, often referred to as iron oxide nanoflowers (IONFs) 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , have attracted the attention of many research groups, as they show superior properties with respect to their mono-core counterparts, for instance a significantly enhanced specific loss power in magnetic hyperthermia 32 , 33 , 34 , but also an increased cytotoxicity to cancer cells when an alternating magnetic field is applied 36 . Lartigue et al. 33 showed that the oriented attachment of the individual cores building up the IONFs, and the resulting continuity of their crystallographic orientation with a misalignment of the cores of only a few degrees 32 , 33 , 38 , 39 , 40 , favor magnetic ordering across the interfaces and consequently a cooperative magnetic behavior. As a result, IONFs show an enhanced magnetic susceptibility and a smaller surface disorder 33 , while their superparamagnetic behavior is preserved 40 . Still, the magnetic performance of IONFs depends on many different characteristics, such as the size of the cores, the size of the entire particles 33 , 38 , 39 , 40 , 41 , 42 , the number of cores within the particles 39 and their alignment 43 .
While a lot of research has been dedicated to the optimization of the synthesis process of IONFs 35 , 39 and to the understanding of the magnetic interaction between individual cores within IONFs 44 , a profound description of the hierarchical structure of IONFs on the atomic scale, which is expected to influence the magnetic properties of IONFs significantly, has not been provided so far. In the present study, we describe the architecture and structure of IONFs on the nanoscopic and atomic scale, including crystallographic orientation relationships and structural coherence of the individual cores, and correlate these characteristics with the magnetic properties obtained from alternating gradient magnetometry (AGM) measurements. This contribution illustrates the capability of transmission electron microscopy (TEM) applied in high-resolution and low-resolution modes and networked by a correlative multiscale approach 45 , complemented by X-ray diffraction (XRD) and dynamic light scattering (DLS), to reveal detailed and statistically relevant information about the structure of IONFs on different length scales.

Materials and methods

IONFs investigated in this study are commercially available dextran-coated maghemite IONFs (synomag-D, micromod Partikeltechnologie GmbH, Rostock, Germany) with a nominal hydrodynamic diameter of 50 nm, which were synthesized by a polyol method adapted from Lartigue et al. 33 . Details on the synthesis of the IONFs can be found in the paper from Gavilán et al. 35 . For the TEM analysis, IONFs originally suspended in water were nebulized on a standard copper TEM grid covered with an amorphous carbon film.
TEM experiments were carried out in a JEOL JEM-2200FS transmission electron microscope, which was equipped with a field emission gun operating at 200 kV, with a CESCOR probe aberration corrector (CEOS GmbH, Germany), with an ultra-high resolution objective lens ( \(C_S = 0.5\) mm), with an in-column energy filter ( \(\Omega\) -filter) and with a highly sensitive 2k \(\times\) 2k CCD camera (Gatan, Inc., USA). The \(\Omega\) -filter was used to remove inelastically scattered electrons from the beam and thus to improve the quality of the TEM images. The IONFs were characterized by high-resolution transmission electron microscopy (HRTEM), by scanning transmission electron microscopy (STEM) using an upper high-angle annular dark-field (HAADF) detector (EM-24630 UHADF, JEOL Ltd., Japan) and by selected area electron diffraction (SAED). Local diffraction patterns were obtained from HRTEM images using fast Fourier transform (FFT). For XRD experiments, IONFs were dried in a fume hood and then spread on a ‘zero-background’ sample holder, which was a \(\langle 5\,1\,0\rangle\) -oriented Si single crystal. XRD measurements were carried out in symmetrical Bragg-Brentano geometry on a Seifert-FPM URD6 diffractometer (Freiberger Praezisionsmechanik, Germany) that was equipped with a sealed X-ray tube with a Cu anode, with a Soller collimator in the primary beam and with a graphite monochromator in the diffracted beam. The Soller collimator reduced the axial divergence of the primary beam. The graphite monochromator eliminated diffraction lines stemming from the spectral line CuK \(_\beta\) and the fluorescence radiation of the sample. Measured XRD patterns were subjected to Rietveld refinement 46 , 47 as implemented in the MAUD software 48 . DLS experiments were carried out in backscatter mode using a ZetaSizer Nano ZS (Malvern Panalytical, UK). The laser wavelength was set to 632.8 nm, the detected scattering angle to \(173^{\circ }\) . 
In the DLS experiments, 100 \(\upmu\) L of IONF sample material ( \(c_{\text {IONF}}\) = 0.1 g/L) was injected into the capillary cell. The temperature (25 \(^{\circ }\) C) was controlled by the device. Due to the low IONF concentration, the viscosity of pure water at 25 \(^{\circ }\) C ( \(\eta _{\text {L}}\) = 0.89 mPa·s) was assumed when the results of the DLS experiments were evaluated. AGM measurements were performed at room temperature in a gradient magnetic field that was generated by two magnetic coils. The maximum intensity of the external magnetic field ranged between \(-4\cdot\) 10 \(^5\) A/m and \(+4\cdot\) 10 \(^5\) A/m. The magnetic force induced by the external magnetic field was measured by a piezoelectric sensor. As the magnetic properties of the cores were of interest, the dextran shell of the IONFs was removed prior to the AGM measurements. In this preparation step, 300 \(\upmu\) L of a 25 g/L IONF suspension was mixed with 700 \(\upmu\) L pure ethanol and subsequently evaporated under stirring for 60 min at 95 \(^{\circ }\) C and at 300 min \(^{-1}\) . After evaporation, 1 mL pure ethanol was added in order to resuspend the dry IONFs. The suspension was stirred again at 300 min \(^{-1}\) and 95 \(^{\circ }\) C for 60 min. After the second ethanol evaporation step, a dry, grey IONF powder was obtained. Approximately 1.5 to 3.0 mg of the powder was fixed between two adhesive films to produce a sample suitable for the AGM measurements. This sample was attached to a pendulum connected with the piezoelectric sensor. The measured magnetization curve was normalized to the sample mass and volume in order to determine characteristic magnetic values, i.e., the specific remanence and the specific saturation magnetization.

Results

Phase composition and vacancy ordering

Figure 1 ( a ) XRD pattern of the IONFs under study. Rietveld refinement was carried out using space group \(Fd{\bar{3}}m\) .
( b ) Dependence of the cubic lattice parameter of IONPs on their stoichiometry. The horizontal dashed lines mark the lattice parameters of Fe \(_3\) O \(_4\) and \(\gamma\) -Fe \(_2\) O \(_3\) . The Vegard dependence (ascending gray dashed line) was calculated for large crystallites ( \(D\rightarrow \infty\) in Eq. ( 1 )). The black crosses 49 , blue circles 29 , green triangles 13 and orange squares 14 represent values taken from literature; the red pentagon with error bars marks the lattice parameter from the present work.

As mentioned in the Introduction, the transition between Fe \(_3\) O \(_4\) and \(\gamma\) -Fe \(_2\) O \(_3\) is accompanied by a change in the oxidation state of the iron cations, which induces the formation and ordering of vacancies on the iron positions. Although the ordering of vacancies has to be described by different space groups ( \(Fd{\bar{3}}m\) , \(P4_332\) , \(P4_32_12\) ) from the crystallographic point of view 5 , 6 , 7 , the impact of vacancy ordering on the powder XRD pattern is rather weak 11 , 50 . The possible tetragonal distortion of the spinel-like cubic cell is small and thus hardly visible in powder XRD patterns, in particular in XRD patterns of NPs, which produce strongly broadened diffraction lines. Still, it has been demonstrated by many authors that the lattice parameter of IONPs with a cubic or pseudo-cubic spinel structure depends linearly on the mole fraction of magnetite in maghemite 13 , 14 , 29 , 49 . Cervellino et al. 30 extended this Vegard-like dependence to account for the effect of the crystallite size on the lattice parameter: $$\begin{aligned} a = \bigl [(1-x_{\textrm{Fe}_3\textrm{O}_4})\cdot a_{\gamma \text{- }\textrm{Fe}_2\textrm{O}_3}+x_{\textrm{Fe}_3\textrm{O}_4}\cdot a_{\textrm{Fe}_3\textrm{O}_4}\bigr ](1-\Omega /D) \end{aligned}$$ (1) In Eq.
( 1 ), \(a_{\gamma \text{- }\textrm{Fe}_2\textrm{O}_3} = 0.83474\) nm 6 and \(a_{\textrm{Fe}_3\textrm{O}_4} = 0.83941\) nm 51 are the terminal lattice parameters of maghemite and magnetite, respectively, \(x_{\textrm{Fe}_3\textrm{O}_4}\) is the mole fraction of magnetite in maghemite, \(\Omega\) is an empiric constant and D is the NP size. The ‘correction factor’ \((1-\Omega /D)\) describes the expansion of the lattice parameter in very small NPs, which results from surface relaxation effects 52 , 53 , 54 . Cervellino et al. 30 determined \(\Omega\) to be about \(-2.05\times 10^{-3}\) nm. However, the effect of the NP size is apparent only for very small particles. Rietveld analysis of the XRD pattern of the IONFs under study (Fig. 1 a), which was carried out assuming a single-phase nature of the Fe \(_{\langle 3-\delta \rangle }\) O \(_4\) sample and the space group \(Fd{\bar{3}}m\) , revealed the lattice parameter 0.8353(3) nm and a crystallite size of ( \(22\pm 3\) ) nm. In the Vegard-like dependence from Cervellino et al. 30 (Fig. 1 b), the refined lattice parameter (0.8353 nm) corresponds to the mole fraction \(x_{\textrm{Fe}_3\textrm{O}_4}\) = 0.12(6) and to the stoichiometric coefficient \(\langle 3-\delta \rangle\) = 2.71(2) of Fe \(_{\langle 3-\delta \rangle }\) O \(_4\) . Rietveld refinement of the site occupancy factors (SOFs) of the iron cations indicated that the majority of the vacancies occurs on the octahedral sites 16 c [SOF = 0.867(8)], while the tetrahedral sites 8 b are almost fully occupied [SOF = 0.992(8)]. The oxygen anion sites 32 e were assumed to be fully occupied [SOF = 1]. These SOFs correspond to the mole fraction \(x_{\textrm{Fe}_3\textrm{O}_4}\) = 0.18(1) and to the stoichiometry \(\langle 3-\delta \rangle =2.726(2)\) of Fe \(_{\langle 3-\delta \rangle }\) O \(_4\) . It should be mentioned that although iron vacancies are in general expected to occur exclusively on the octahedral sites 6 , 7 , 11 , Cooper et al.
50 showed that in IONPs the number of tetrahedrally coordinated cation vacancies increases when the particle size decreases below 8 nm.

Figure 2 ( a ) SAED pattern of a large ensemble of IONFs. ( b ) HRTEM image of a single IONF core. ( c ) FFT of the HRTEM image shown in ( b ). ( d ) Amplitude image of the reflection 102. Diffraction patterns in ( a ) and ( c ) are indexed using space group \(P4_332\) . Reflections associated with vacancy ordering are marked in yellow.

The SAED pattern (Fig. 2 a) and the FFT (Fig. 2 c) of the HRTEM image (Fig. 2 b) show superstructure reflections (marked in yellow). Their presence indicates that the vacancies in the IONFs are ordered to a certain extent, as would correspond, e.g., to the space group \(P4_332\) . In order to rule out a tetragonal distortion of the cubic unit cell, which was reported by Jørgensen et al. 9 and Andersen et al. 11 for IONPs with ordered vacancies, the XRD pattern from Fig. 1 a was alternatively refined using the tetragonal space group \(P4_32_12\) . However, this Rietveld refinement revealed the same lattice parameters \(a = c = 0.8353(5)\) nm, i.e., no noticeable tetragonal distortion was observed. In order to find out whether the vacancies are ordered throughout the whole particle or just locally, the amplitude images of the lattice fringes \(\{1\,0\,2\}\) obtained from geometric phase analysis (GPA) 55 , 56 were taken into consideration. As the lattice fringes \(\{1\,0\,2\}\) only appear in crystal structures with ordered vacancies (space group \(P4_332\) or \(P4_32_12\) ), the magnitude of the local amplitudes obtained from GPA is a measure of the amount of ordered vacant octahedral positions. In the amplitude image (Fig. 2 d), bright colors correspond to a higher amount of ordered vacancies, dark colors to a lower amount. A highly non-homogeneous distribution of ordered vacancies is apparent.
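The composition estimates quoted above can be reproduced with a short script: inverting the Vegard-like relation of Eq. ( 1 ) for the refined lattice parameter, and computing the stoichiometry implied by the refined SOFs (a minimal sketch; all numerical inputs are the values reported in the text):

```python
# Vegard-like relation (Eq. 1) between lattice parameter and magnetite fraction.
A_MAGHEMITE = 0.83474  # nm, terminal lattice parameter of gamma-Fe2O3
A_MAGNETITE = 0.83941  # nm, terminal lattice parameter of Fe3O4

def magnetite_fraction(a_nm, omega=0.0, d_nm=float("inf")):
    """Invert Eq. (1) for the mole fraction x of Fe3O4 in maghemite.

    The size correction (1 - omega/d_nm) is negligible for large crystallites.
    """
    a_corr = a_nm / (1.0 - omega / d_nm)
    return (a_corr - A_MAGHEMITE) / (A_MAGNETITE - A_MAGHEMITE)

def stoichiometry_from_x(x):
    """Stoichiometric coefficient <3-delta> of Fe_<3-delta>O4 (from 8/3 to 3)."""
    return 8.0 / 3.0 + x / 3.0

def stoichiometry_from_sofs(sof_oct, sof_tet):
    """Fe per 4 O from the refined SOFs (16 octahedral + 8 tetrahedral cation
    sites per spinel cell with 32 O)."""
    return (16.0 * sof_oct + 8.0 * sof_tet) / 8.0

x = magnetite_fraction(0.8353)  # refined lattice parameter from the Rietveld fit
print(round(x, 2), round(stoichiometry_from_x(x), 2))   # ~0.12, ~2.71
print(round(stoichiometry_from_sofs(0.867, 0.992), 3))  # ~2.726
```

Both routes reproduce the reported values, which also confirms that the SOF of 0.867 must belong to the 16-fold (octahedral) site.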
Complementarily to the results of XRD, which proved that the IONFs under study are almost entirely oxidized to maghemite (cf. Fig. 1 b), the amplitude image from Fig. 2 d shows that the vacancies are ordered only in a few regions, which form subdomains with a size of a few nanometers.

Arrangement and coherence of individual cores in the IONFs

Although separated cores were found occasionally for the IONFs under study (Fig. 2 b), the majority of IONFs consists of agglomerated cores (Fig. 3 ). Several authors reported that individual cores within IONFs tend to have the same crystallographic orientation 32 , 33 , 35 , 43 . The cores in the IONFs under study possess distinct crystallographic orientation relationships, but the majority of them are mutually twisted. The IONF in Fig. 3 a is composed of two cores, which are attached along their lattice planes \((2\,2\,0)\) and mutually twisted by about \(35.3^{\circ }\) around the crystallographic direction \([1\,1\,0]\) . The twist angle was determined from the angle between the crystallographic directions \([{\bar{1}}\,1\,1]\) and \([{\bar{1}}\,1\,4]\) , which were assigned to the direction of the primary electron beam for the cores A and B, respectively (Fig. 3 b and c). Note that the angle of \(35.3^{\circ }\) corresponds to the smallest angle between the crystallographic directions \(\langle 1\,1\,1\rangle\) and \(\langle 1\,1\,4\rangle\) . The filtered inverse FFT image showing strongly magnified \((2\,2\,0)\) lattice fringes (Fig. 3 d) reveals some discontinuities at the interface of the cores, which resemble dislocations. The presence of these crystal structure defects is confirmed by the strain field component perpendicular to the \((2\,2\,0)\) lattice planes of the cores (Fig. 3 e), which corresponds to the strain distribution that is typically observed around the cores of edge dislocations 56 , 57 .

Figure 3 ( a ) HRTEM image of a double-core IONF.
The outer boundaries of the individual cores and their interface are indicated by a solid line and by a dashed line, respectively. Panels ( b ) and ( c ) show local FFTs of the cores labeled A and B in ( a ), respectively. In panel ( b ), reflections associated with the ordering of vacancies are marked by arrows. ( d ) Filtered inverse FFT showing the \((2\,2\,0)\) lattice fringes from the region in the middle of panel ( a ) that is marked by a square. ( e ) Strain field component perpendicular to the \((2\,2\,0)\) lattice planes of the cores as determined by GPA.

Figure 4 ( a ) HRTEM image of a double-core IONF. The outline of the IONF, the interface between the two cores, and the interface between individual nanocrystals within the larger core are indicated by a solid, dashed and dotted line, respectively. Panels ( b ) and ( c ) show local FFTs of the cores labeled A and B in ( a ), respectively. The spots marked by yellow circles were used for GPA. Reflections associated with the ordering of vacancies are marked by arrows in ( b ). The strain field components \(\varepsilon _{xx}\) and \(\varepsilon _{yy}\) and the rigid rotation field \(\omega _{xy}\) determined by GPA are shown in panels ( d ), ( e ) and ( f ), respectively. The coordinate system is provided in the lower left corner of panel ( a ).

Another double-core IONF is depicted in Fig. 4 a. Also in this case, the individual cores possess a specific orientation relationship. They share the lattice planes \(\{3\,1\,1\}\) and are mutually twisted by about \(19.2^{\circ }\) , which is the angle between the crystallographic directions \([{\bar{1}}\,1\,2]\) and \([{\bar{2}}\,1\,7]\) (cf. Fig. 4 b,c).
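The quoted twist angles follow directly from the angle between the corresponding lattice directions of a cubic crystal, which can be verified with a few lines of numpy:

```python
import numpy as np

def angle_between_directions(u, v):
    """Angle (deg) between two lattice directions [u v w] in a cubic crystal,
    where the metric is Euclidean and the angle follows from the dot product."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

print(round(angle_between_directions([-1, 1, 1], [-1, 1, 4]), 1))  # 35.3
print(round(angle_between_directions([-1, 1, 2], [-2, 1, 7]), 1))  # 19.2
```

The two results reproduce the twist angles reported for the IONFs in Figs. 3 and 4, respectively.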
Moreover, these cores share additional lattice planes, e.g., \((0\,4\,{\bar{2}})_{\text {A}} \parallel (2\,4\,0)_{\text {B}}\) , \((0\,{\bar{4}}\,2)_{\text {A}} \parallel ({\bar{2}}\,{\bar{4}}\,0)_{\text {B}}\) , \((3\,{\bar{3}}\,3)_{\text {A}} \parallel (1\,{\bar{5}}\,1)_{\text {B}}\) or \(({\bar{3}}\,3\,{\bar{3}})_{\text {A}} \parallel ({\bar{1}}\,5\,{\bar{1}})_{\text {B}}\) . Note that the lattice planes \(\{3\,3\,3\}\) and \(\{5\,1\,1\}\) have the same interplanar spacing in cubic structures. The coincidence of several lattice planes is a possible reason for the shape of the interface between the individual cores. In contrast to the straight interface between the cores from Fig. 3 , which is more or less perpendicular to the shared lattice planes \((2\,2\,0)\) , the interface between the cores in Fig. 4 is rather curved, because its direction is not restricted by a single coinciding family of lattice planes. More detailed information about the local misorientations of the cores was obtained from GPA 55 , 56 that was performed on the ‘non-colinear’ reflection spots \(3\,1\,1_\text {A}\parallel 3\,{\bar{1}}\,1_\text {B}\) and \(3\,{\bar{3}}\,3_\text {A}\parallel 1\,{\bar{5}}\,1_\text {B}\) . The strain field components \(\varepsilon _{xx}\) and \(\varepsilon _{yy}\) shown in Fig. 4 d,e, which represent the strain parallel and perpendicular to the \(\{3\,1\,1\}\) lattice planes of the cores, reveal that the lattice strain is primarily concentrated at the interface of the cores, whereas no apparent strain seems to be present within the cores. The rigid rotation field \(\omega _{xy}\) shown in Fig. 4 f disclosed that the cores A and B are additionally twisted along the viewing direction by about \(2^{\circ }\) . Moreover, Fig. 4 f suggests that core B is further fragmented into smaller nanocrystals (NCs) that are slightly twisted with respect to each other along the viewing direction by about \(0.3^{\circ }\) .
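Full GPA as cited above 55 , 56 also yields phase, strain and rotation maps; the amplitude-image step alone, however, is simple to sketch: mask one reflection in the Fourier transform of the (HRTEM) image and take the modulus of the inverse transform. A minimal numpy illustration on synthetic lattice fringes (all parameters are illustrative, not taken from the measurement):

```python
import numpy as np

def gpa_amplitude(image, spot, radius):
    """Amplitude image of one reflection: mask a single spot in the centered FFT
    of the image and take the modulus of the inverse transform."""
    f = np.fft.fftshift(np.fft.fft2(image))
    ky, kx = np.indices(image.shape)
    cy, cx = np.array(image.shape) // 2
    mask = (ky - cy - spot[0]) ** 2 + (kx - cx - spot[1]) ** 2 <= radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Synthetic fringes with a period of 8 px along x; the masked +g spot carries
# half of the cosine amplitude, so the amplitude image is ~0.5 wherever the
# fringes exist (and would drop where the fringes vanish, e.g. where the
# vacancy ordering producing the {102} fringes is absent).
n = 128
x = np.arange(n)
img = np.cos(2.0 * np.pi * x[None, :] / 8.0) * np.ones((n, 1))
amp = gpa_amplitude(img, spot=(0, n // 8), radius=3)
print(amp.mean())  # ~0.5
```

In the real analysis the selected spot is the 102 superstructure reflection, so bright regions of the amplitude map flag locally ordered vacancies, as described for Fig. 2 d.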
Thus, the size of the primary building blocks within the IONFs is actually smaller than 10 nm.

Figure 5 ( a ) Dependence of the XRD line broadening expressed in the reciprocal space units, \({\text {FWHM}}({\text {rad}}) \cdot \cos \theta /\lambda\) , on the magnitude of the diffraction vector, \(|\textbf{q}| \equiv q = 4\pi \sin \theta / \lambda\) . Black circles represent experimental data, the black solid line shows the dependence of the line broadening on \(|\textbf{q}|\) calculated for partially coherent NCs according to Rafaja et al. 58 . ( b ) Schematic representation of the effect of the mutual misorientation of crystallites by the angle \(\omega\) in direct space on the rotation of their reciprocal lattices, adapted from Rafaja et al. 59 . The reciprocal lattice points of two different crystallites are shown by filled and empty circles, respectively. The overlap of the reciprocal lattice points (hatched areas) represents the degree of partial coherence of the crystallites that decreases with their increasing distance from the origin of the reciprocal lattice 58 , 59 . Solid ellipses mark two examples of overlapping pairs of reciprocal lattice points. The dashed ellipse marks separated (non-coherent) reciprocal lattice points.

The fragmentation of the IONF cores was confirmed by XRD. The XRD line broadening that was obtained by fitting individual XRD lines with Pearson VII functions 60 , 61 increased steeply at \(|\textbf{q}| \approx 75\,\textrm{nm}^{-1}\) (Fig. 5 a), which is an indicator of the partial crystallographic coherence of adjacent NCs 58 , 59 . In previous reports 58 , 59 , it was shown that adjacent crystallites can be partially coherent for XRD if they are sufficiently small and if they possess very similar crystallographic orientations. Such crystallites cannot be distinguished from each other by XRD and appear larger.
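The reciprocal-space units used in Fig. 5 a follow directly from the Bragg angle; the following helper functions (a sketch, assuming the Cu anode mentioned in the methods, i.e. the Cu K \(\alpha _1\) wavelength) convert a measured \(2\theta\) and FWHM accordingly:

```python
import numpy as np

WAVELENGTH = 0.15406  # nm, Cu K-alpha1 (the diffractometer uses a Cu anode)

def q_magnitude(two_theta_deg, wavelength=WAVELENGTH):
    """|q| = 4*pi*sin(theta)/lambda in 1/nm (abscissa of Fig. 5a)."""
    return 4.0 * np.pi * np.sin(np.radians(two_theta_deg / 2.0)) / wavelength

def broadening_reciprocal(fwhm_deg, two_theta_deg, wavelength=WAVELENGTH):
    """FWHM(rad)*cos(theta)/lambda in 1/nm (ordinate of Fig. 5a)."""
    theta = np.radians(two_theta_deg / 2.0)
    return np.radians(fwhm_deg) * np.cos(theta) / wavelength

# Consistency check: for a lattice spacing d, the Bragg condition gives |q| = 2*pi/d.
d_311 = 0.8353 / np.sqrt(11.0)  # spinel (311) spacing for a = 0.8353 nm
two_theta = 2.0 * np.degrees(np.arcsin(WAVELENGTH / (2.0 * d_311)))
print(round(q_magnitude(two_theta), 2), round(2.0 * np.pi / d_311, 2))
```

In this representation, the reciprocal of the broadening extrapolated to \(q \rightarrow 0\) is the apparent cluster size; the \(\approx\) 16 nm quoted below corresponds to an extrapolated broadening of \(\approx\) 0.0625 nm \(^{-1}\) .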
The degree of the partial coherence corresponds to the volume of the overlapping parts of the reciprocal lattice points (Fig. 5 b), which depends on the size of the reciprocal lattice points (approximately the reciprocal value of the size of individual NCs), on the misorientation of neighboring NCs ( \(\omega\) ) and on the magnitude of the diffraction vector. A consequence of the partial coherence of NCs is a ‘narrowing’ of the XRD lines that appears at short diffraction vectors. The dependence from Fig. 5 a was described by a model from Rafaja et al. 58 . The refinable parameters of the model were the size of the crystallites and their local misorientation. The cluster size corresponds to the reciprocal value of the XRD line broadening extrapolated to \(|\textbf{q}| = 0\) . The refinement revealed a cluster size of 16 nm, a primary crystallite size of 7 nm and a crystallite misorientation of \(0.25^{\circ }\) . The cluster size, the crystallite size and the misorientation of the crystallites agree very well with the parameters determined from HRTEM and GPA (cf. Fig. 4 ).

Statistical determination of particle, core and shell size

The results of the HRTEM and XRD experiments discussed above confirmed that the majority of IONFs under study consists of agglomerates of nanocrystalline cores having specific mutual crystallographic orientations. However, these techniques cannot reveal statistically reliable information about the size distribution of the respective objects. HRTEM is typically applied to image only a few particles, thus its statistical reliability is low. XRD probes a significantly larger volume of the sample. However, the crystallite size distribution is usually obtained from the shape of the XRD lines assuming a certain shape of the distribution function 62 .
This approach is not easily applicable for partially coherent NCs, because the partial coherence of adjacent NCs affects the shape of the XRD lines in addition to the size distribution and microstrain (variation of the interplanar spacing) 63 .

Figure 6 Schematic representation of the multi-stage segmentation routine used for the determination of the particle size and core size distribution. ( a ) Original low-magnification HAADF-STEM image of the IONFs. ( b ) HAADF-STEM image segmented into individual particles by the semi-automatic segmentation routine from Neumann et al. 45 . ( c ) Single IONF segmented into several cores by a shape-based segmentation routine. ( d ) Binary image of a single segmented IONF. ( e ) Shape of the IONF and its individual cores approximated by ellipses based on the DTECMA algorithm 64 . ( f ) Shape markers determined on the basis of the ellipses from ( e ). ( g ) Outer Euclidean distance transform of the shape markers from ( f ) used as the marking function for the watershed segmentation of the IONF into its cores.

In order to gain statistical insights into the size distribution of the entire IONFs and the individual cores, low-magnification HAADF-STEM imaging was employed. This technique makes it possible to visualize 50–100 particles in a single low-magnification HAADF-STEM image. The HAADF-STEM images were evaluated using a multi-stage segmentation routine based on the watershed algorithm 65 . In the first stage of the routine, accumulated IONFs (Fig. 6 a) were segmented into individual particles (Fig. 6 b) by a semi-automatic segmentation routine 45 , 66 . For this segmentation step, the image intensity was adjusted, the noise was reduced using a Gaussian filter, and the pre-processed images were binarized and morphologically smoothed 45 . Finally, individual particles were segmented using a marker-based watershed transformation.
The markers were determined based on the extended minima transform of the inverted inner Euclidean distance transform of the pre-processed binary image 67 . The result of the segmentation routine was inspected and critical regions of the image were segmented manually. From the segmented images (Fig. 6 b), the area-equivalent diameter \(d_A\) of individual IONFs was determined using $$\begin{aligned} d_A = \sqrt{\frac{4A}{\pi }} \end{aligned}$$ (2) where A is the area of the IONFs. In the second step of the multi-stage segmentation routine, every individual IONF was segmented into its cores by a segmentation routine that considers mainly the IONF shape (Fig. 6 c). When an IONF consists of coalesced cores, its contour shows concave points (Fig. 6 d). Individual cores were localized using the Distance Transform-based Ellipse Contour Matching Algorithm (DTECMA) 64 that was applied to binary images of individual IONFs (Fig. 6 e). This algorithm identifies overlapping objects—in this case individual cores of an IONF—by approximating their two-dimensional projections with ellipses. Afterwards, shape markers were determined based on the extended minima transform of the inverted inner Euclidean distance transform 67 of the binary images of the individual ellipses determined by the DTECMA algorithm (Fig. 6 f). Finally, the outer Euclidean distance transform of the shape markers (Fig. 6 g) was determined and used as the marking function for the watershed segmentation of the IONFs into their cores. The segmentation of the IONFs into their cores was controlled by adjusting the parameters of the DTECMA algorithm, i.e., the distance threshold influencing the extraction of concave points and the regularization parameter balancing the number of ellipses, as well as by adjusting the threshold value of the extended minima transform that was used to determine the shape markers. The size of the individual cores was then determined analogously to the size of the IONFs (Eq. 2 ). 
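The size measure of Eq. ( 2 ) can be sketched in a few lines; the synthetic disk used to exercise it is, of course, only a stand-in for a segmented particle mask:

```python
import numpy as np

def area_equivalent_diameter(mask, px_size=1.0):
    """Eq. (2): d_A = sqrt(4A/pi), where A is the area of a binary particle mask
    (pixel count scaled by the pixel size)."""
    area = mask.sum() * px_size ** 2
    return np.sqrt(4.0 * area / np.pi)

# Synthetic check: a filled disk of radius 20 px has an area-equivalent
# diameter close to 40 px (exact up to pixel discretization).
yy, xx = np.indices((101, 101))
disk = (yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2
print(round(area_equivalent_diameter(disk), 1))  # close to 40
```

In the actual routine the same formula is applied, per segment, to the watershed-labeled IONFs and to their cores.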
The size distributions of the IONFs and the individual cores determined from HAADF-STEM images using the multi-stage segmentation routine are depicted in Fig. 7 together with the size distribution of the hydrodynamic diameter of the IONFs that was determined using DLS. In order to be able to compare the size distribution determined using DLS with the size distributions derived from HAADF-STEM images, the intensity distribution density \(q_{6}(D_h)\) that is primarily provided by DLS must be converted to the number distribution density \(q_{0}(D_h)\) using 68 $$\begin{aligned} q_{0}(D_h) = \frac{D_h^{-6}q_{6}(D_h)}{\int _{D_{h,\text {min}}}^{D_{h,\text {max}}}D_h^{-6}q_{6}(D_h)\text {d}D_h} \end{aligned}$$ (3) Note that the hydrodynamic diameter of the IONFs corresponds to their size including the dextran shell. As HAADF-STEM imaging uses electrons scattered by atomic nuclei to high angles, it is highly sensitive to the atomic number of the scattering atoms 69 . For this reason, HAADF-STEM imaging visualizes IONFs almost without their light dextran shell. Moreover, the dextran shell degrades quickly under the impact of the high-energy electron beam. Consequently, the size of IONFs determined using HAADF-STEM ( \(D_P^{\text {STEM}}\) ) is smaller than the hydrodynamic diameter ( \(D_h^{\text {DLS}}\) ) determined using DLS 37 .

Figure 7 Number distribution density ( \(q_0\) ) of the size of the IONFs ( \(D_P^{\text {STEM}}\) ), their cores ( \(D_C^{\text {STEM}}\) ) and the hydrodynamic diameter ( \(D_h^{\text {DLS}}\) ) as determined using HAADF-STEM and DLS, respectively.

The mean sizes \(\langle D^{\textrm{DLS}}_h\rangle\) and \(\langle D^{\textrm{STEM}}_P\rangle\) and their standard deviations ( \(\sigma\) ), which are summarized in Table 1 , were determined from the obtained size distributions (Fig.
7 ) using $$\begin{aligned} \langle D\rangle = \int _{D_{\text {min}}}^{D_{\text {max}}}Dq_0(D)\text {d}D \end{aligned}$$ (4) and $$\begin{aligned} \sigma =\sqrt{\int _{D_{\text {min}}}^{D_{\text {max}}}\left[ (D-\langle D\rangle )^2 q_0(D)\right] \text {d}D} \end{aligned}$$ (5) The difference between the mean hydrodynamic diameter, \(\langle D^{\textrm{DLS}}_h\rangle = (29\pm 8)\) nm, and the mean diameter of the IONFs determined by HAADF-STEM, \(\langle D^{\textrm{STEM}}_P\rangle = (20\pm 4)\) nm, reveals an estimate of the mean thickness of the dextran shell ( \(\approx 5\) nm). The mean IONF size obtained from HAADF-STEM, \(\langle D^{\textrm{STEM}}_P\rangle = (20\pm 4)\) nm, agrees very well with the mean IONF size obtained from HRTEM, \(\langle D^{\textrm{HRTEM}}_P\rangle = (19\pm 4)\) nm. A good agreement was also achieved for the mean size of the cores, \(\langle D_C\rangle\) , determined using HAADF-STEM and HRTEM. Additionally, HRTEM revealed the size of the slightly twisted core fragments, \(\langle D_F\rangle\) , which was visible by XRD as the mean size of individual crystallites (Fig. 5 ). Note that \(\langle D^{\textrm{XRD}}_F\rangle\) is slightly smaller than \(\langle D^{\text {HRTEM}}_F\rangle\) , because XRD recognizes mainly the undisturbed interior of the NCs, while their possibly defect-rich rim contributes rather to diffuse scattering than to the diffraction lines. Thus, the difference between \(\langle D^{\text {HRTEM}}_F\rangle\) and \(\langle D^{\text {XRD}}_F\rangle\) can be understood as a first estimate of the thickness of the disordered surface layer of the core fragments, which is approximately 1 nm. The ‘cluster size’ of approx. 16 nm obtained from XRD corresponds to the size of agglomerates of partially coherent twisted domains. 
Its value is between the size of the cores \(\langle D_C\rangle\) and the size of the IONFs \(\langle D_P\rangle\) (Table 1 ), which illustrates once more the crystallographic partial coherence of the cores within IONFs discussed above.

Table 1 Hydrodynamic diameter \(\langle D_h\rangle\) , particle diameter \(\langle D_P\rangle\) , core diameter \(\langle D_C\rangle\) and diameter of the core fragments \(\langle D_F\rangle\) as determined by DLS, low-magnification HAADF-STEM, HRTEM and XRD.

Influence of the structure of the IONFs on their magnetic properties

The magnetization curve of the IONFs measured by AGM and normalized to the sample density is depicted in Fig. 8 a. The IONFs show superparamagnetic behavior that is characterized by a negligible remanent magnetization and coercive field. The normalized (mass) saturation magnetization was ( \(50\pm 1\) ) Am \(^2\) /kg, which is lower than the saturation magnetization of bulk maghemite (74.3 Am \(^2\) /kg) 15 . Assuming that the saturation magnetization is reduced by the spin disorder in the surface layer of the magnetic particles, the ratio between the thickness of the disordered spin layer ( t ) and the particle size ( D ) can be calculated using the relation 24 , 25 , 26 $$\begin{aligned} M_S = M_S^{\textrm{bulk}} \left( 1 - \frac{6t}{D}\right) \end{aligned}$$ (6) For \(M_S = (50\pm 1)\) Am \(^2\) /kg and \(M_S^{\textrm{bulk}} = 74.3\) Am \(^2\) /kg, t / D is \((0.055 \pm 0.001)\) . A disordered spin layer having a thickness of 1 nm 25 would be consistent with a particle size of 18 nm, which agrees best with \(\langle D_P\rangle\) from Table 1 . A disordered spin layer having a thickness of 0.54 nm 26 would correspond to a particle size of 10 nm, which is between \(\langle D_F\rangle\) and \(\langle D_C\rangle\) .
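The arithmetic behind Eq. ( 6 ) and the two literature scenarios can be checked in a few lines (inputs are the values stated above):

```python
def dead_layer_ratio(m_s, m_s_bulk=74.3):
    """Eq. (6) solved for t/D from the measured and bulk saturation magnetization
    (both in Am^2/kg; only their ratio matters)."""
    return (1.0 - m_s / m_s_bulk) / 6.0

def particle_size_for_layer(t_nm, m_s, m_s_bulk=74.3):
    """Particle size (nm) consistent with a disordered surface layer of thickness t_nm."""
    return t_nm / dead_layer_ratio(m_s, m_s_bulk)

print(round(dead_layer_ratio(50.0), 3))            # ~0.055
print(round(particle_size_for_layer(1.0, 50.0)))   # ~18 nm (literature t = 1 nm)
print(round(particle_size_for_layer(0.54, 50.0)))  # ~10 nm (literature t = 0.54 nm)
```

Both particle sizes reproduce the comparison with Table 1 made in the text.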
Figure 8 ( a ) Magnetization curve of the IONFs as measured by AGM (crosses) and the Langevin fits using three log-normal functions (solid blue line) and using the Kaczmarz method 70 , 71 (dashed red line). ( b ) Distributions of the magnetic particle size corresponding to the fits in panel ( a ).

For the modelling of the measured magnetization curve, two approaches were used. Both are based on the approximation of the M ( H ) dependence by the Langevin function: $$\begin{aligned} M(H) = M_S{\mathcal {L}}(\xi ) \end{aligned}$$ (7) where \(M_S\) is the saturation magnetization and \({\mathcal {L}}(\xi ) = \coth (\xi ) - 1/\xi\) . The parameter \(\xi\) is related to the (volume) saturation magnetization ( \(M_S\) ), to the strength of the external magnetic field ( H ), to the permeability of vacuum ( \(\mu _0\) ), to the Boltzmann constant ( \(k_B\) ) and to the sample temperature ( T ) 71 , 72 : $$\begin{aligned} \xi (H) = \frac{M_S\pi d_c^3H\mu _0}{6k_BT} \end{aligned}$$ (8) Note that in Eq. ( 8 ), \(M_S\) has the unit A/m, like H . As the recorded signal is a superposition of the magnetizations of all particles in the sample, the size distribution of the magnetic particles must be taken into account. In the first modelling approach, it was assumed, in analogy to previous reports 33 , 35 , 72 , 73 , 74 , 75 , that the size distribution can be described by log-normal functions. As the microstructure analyses revealed the existence of three different types of magnetic ‘objects’ (Table 1 ), a sum of three log-normal functions was employed for the Langevin fit: $$\begin{aligned} P(d_c) = \sum _{i=1}^3w_i\frac{1}{\sqrt{2\pi }\sigma _i d_c}\exp \left[ -\frac{\left( \ln d_c - \mu _i\right) ^2}{2\sigma _i^2}\right] \end{aligned}$$ (9) The refinable parameters were the weights of the log-normal functions ( \(w_i\) ), the medians of the magnetic particle sizes ( \(\mu _i\) ) and the widths of the log-normal functions ( \(\sigma _i\) ).
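Eqs. ( 7 )–( 9 ) can be assembled into a forward model in a few lines. The sketch below uses a discrete weight vector in place of the continuous trimodal log-normal (sufficient to reproduce the qualitative superparamagnetic curve); the volume saturation magnetization of \(3.6\times 10^{5}\) A/m and the three sizes are illustrative, maghemite-like assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of vacuum, Vs/(Am)
KB = 1.380649e-23   # Boltzmann constant, J/K

def langevin(xi):
    """L(xi) = coth(xi) - 1/xi with the numerically safe limit L(xi) -> xi/3."""
    xi = np.asarray(xi, float)
    out = np.empty_like(xi)
    small = np.abs(xi) < 1e-6
    out[small] = xi[small] / 3.0
    out[~small] = 1.0 / np.tanh(xi[~small]) - 1.0 / xi[~small]
    return out

def xi_of_h(h, d_c, m_s_vol, temp):
    """Eq. (8): xi = Ms*pi*d^3*H*mu0 / (6*kB*T); Ms and H in A/m, d_c in m."""
    return m_s_vol * np.pi * d_c ** 3 * h * MU0 / (6.0 * KB * temp)

def magnetization(h, sizes_m, weights, m_s_vol=3.6e5, temp=295.0):
    """Discretized Eq. (10): weighted superposition of Langevin terms. The weights
    stand in for the trimodal log-normal P(d_c) of Eq. (9)."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    d = np.asarray(sizes_m, float)
    return m_s_vol * np.sum(w[:, None] * langevin(xi_of_h(h, d[:, None], m_s_vol, temp)), axis=0)

h = np.linspace(-4e5, 4e5, 401)  # field range as in the AGM measurement
m = magnetization(h, sizes_m=[6e-9, 12e-9, 20e-9], weights=[0.2, 0.5, 0.3])
# odd, hysteresis-free curve saturating toward Ms: superparamagnetic behavior
```

The three assumed sizes mirror the three magnetic 'objects' of Table 1 (fragments, cores, whole IONFs); in the actual fit their medians, widths and weights are refined.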
The fitting function based on Eq. ( 7 ) had the form: $$\begin{aligned} M(H) = M_S\int _0^\infty P(d_c){\mathcal {L}}(\xi )\text {d}d_c \end{aligned}$$ (10) The best fit of the magnetization function (Fig. 8 a) was obtained for the sizes of magnetic particles of ( \(6\pm 4\) ) nm, ( \(12\pm 1\) ) nm and ( \(20\pm 5\) ) nm, which agree well with the size of the fragments ( \(D^{\textrm{HRTEM}}_F\) and \(D^{\textrm{XRD}}_F\) ), with the size of the cores ( \(D^{\textrm{STEM}}_C\) and \(D^{\textrm{HRTEM}}_C\) ) and with the size of the IONFs ( \(D^{\textrm{STEM}}_P\) and \(D^{\textrm{HRTEM}}_P\) ) from Table 1 , respectively. The resulting size distribution function obtained from the Langevin fit is depicted in Fig. 8 b. The sizes of very small particles (fragments of the cores and the cores themselves) determined from the magnetization curve are slightly smaller than the corresponding sizes \(D_F\) and \(D_C\) determined using HAADF-STEM, HRTEM and XRD as expected, because the magnetization of small particles is reduced by a disordered spin layer at their surface 24 , 25 , 26 . In the second approach, the particle size distribution was substantially less constrained, as the shape of the distribution was determined using Kaczmarz’ iterative method 70 , 71 without any a priori assumption (except keeping the values of the distribution function non-negative). Within this method, a matrix \({\textbf{A}}_{ji}\) is composed, which contains magnetization values calculated according to Eqs. ( 7 ) and ( 8 ) for individual values of the magnetic particle size ( \(d_{c,i}\) ) and for individual values of the external magnetic field ( \(H_j\) ). This matrix is used for iterative calculation of the ‘weighting factors’ W : $$\begin{aligned} W^{k+1} = W^k +\frac{M_j-{\textbf{A}}_j W^k}{\Vert {\textbf{A}}_j\Vert ^2}{\textbf{A}}_j^\intercal \end{aligned}$$ (11) that describe the particle size distribution. In Eq. ( 11 ), k is the iteration number. 
The starting set of the ‘weighting factors’ ( \(W^0\) ) is a zero vector with the same length as the vector \(d_{c,i}\) . \(M_j\) are the magnetization values measured at different intensities of the external magnetic field \(H_j\) , and \({\textbf{A}}_j\) the corresponding row vectors of the \({\textbf{A}}_{ji}\) matrix (calculated for the same magnetic field \(H_j\) but for different particle sizes \(d_{c,i}\) ). After each iteration, negative values of W are reset to zero. Following previous reports 70 , 71 , 10,000 iterations were employed. The final fit of the magnetization curve obtained from $$\begin{aligned} M_j^{\mathrm{(calc)}} = \sum _i W_i^{10,000}{\textbf{A}}_{ji} \end{aligned}$$ (12) is depicted in Fig. 8 a, the size distribution ( \(P(d_c)\,\widehat{=}\,W^{10,000}\) ) in Fig. 8 b. Figure 8 a shows that both approaches, i.e., the Langevin fit with three log-normal functions corresponding to the size distributions of the whole particles (IONFs), their cores and their fragments, and the Langevin fit using Kaczmarz’ method, reveal almost the same magnetization curve despite the relatively large differences in the corresponding size distributions. This indicates a relatively low sensitivity of the magnetization curve to the exact particle size distribution and suggests that additional information obtained from structure analysis, e.g., information about the number of different magnetic objects, can help to improve the reliability of the size distribution. Discussion In analogy with the work of Gavilán et al. 35 , where a hierarchical structure of similarly synthesized IONFs was characterized and described by a multimodal size distribution, the IONFs under study were found to be composed of agglomerated maghemite NCs (Fig. 9 ). Our XRD and HRTEM analyses identified the NCs as elementary blocks forming the magnetic cores and IONFs.
The mean sizes of the NCs were \(\langle D^{\text {XRD}}_F \rangle = (7 \pm 1)\) nm and \(\langle D^{\text {HRTEM}}_F \rangle = (9 \pm 3)\) nm, cf. Table 1 . The difference in the size of the core fragments obtained from XRD and HRTEM is related to the different sensitivities of the two techniques to the structural disorder at the surface of the NCs. XRD recognizes only the coherent part of the NCs as the core fragments. Therefore, it reveals the size of their undisturbed interior, while HRTEM sees the core fragments including their rim, in particular for isolated NCs. The NCs were also recognized by the Langevin fit of the magnetization curve. Their ‘magnetic’ size was \((6 \pm 4)\) nm. The fraction of NCs determined from the magnetic measurement was relatively low (Fig. 8 b), because the majority of neighboring NCs possessed almost the same crystallographic orientation, as revealed by HRTEM (Fig. 2 b) and as concluded from the coherence phenomena affecting the XRD line broadening (Fig. 5 ). The misorientation of the NCs within the cores was below \(1^{\circ }\) , as revealed by GPA of the HRTEM images (Fig. 4 ) and by XRD (Fig. 5 ). This kind of crystallographic coherence facilitates coupling of magnetic moments in individual NCs forming the cores 33 , 42 . Thus, the magnetic measurement recognized many more cores than isolated NCs (Fig. 8 ). The size of the cores can be determined most reliably using HRTEM in combination with local orientation analysis (FFT/HRTEM or GPA). HAADF-STEM may overestimate the size of the cores, because it uses a shape-based segmentation routine to identify individual cores in the IONFs (Fig. 6 ). However, this routine cannot distinguish parts of the IONFs with different crystallographic orientations from each other in the way that HRTEM complemented by FFT or GPA can. XRD can only estimate the size of the cores from the size of the clusters composed of partially coherent NCs (core fragments).
The ‘magnetic’ size of the cores, \(\langle D^{\text {AGM}}_C \rangle = (12 \pm 1)\) nm, refers to the size of magnetic domains with uniform orientation of spin moments. Thus, half of the difference between \(\langle D^{\text {HRTEM}}_C \rangle = (13 \pm 3)\) nm and \(\langle D^{\text {AGM}}_C \rangle\) can be understood as the thickness of the disordered spin layer of the cores. According to Eq. ( 6 ), a disordered spin layer having a thickness of \(\approx 0.5\) nm would reduce the saturation magnetization of the cores from 74.3 15 to 57.1 Am \(^2\) /kg, which approaches the saturation magnetization of 50 Am \(^2\) /kg obtained from the Langevin fit of the magnetization curve (Fig. 8 ). Note that Sharifi Dehsari et al. 26 reported a disordered spin layer with a thickness of 0.54 nm. As reported by Morales et al. 22 , an additional reason for the reduction of the saturation magnetization might be a certain degree of spin disorder even in the volume of the IONFs as a result of an inhomogeneous ordering of the cation vacancies in the IONFs (Fig. 2 d). A large part of the cores in the IONFs possessed distinct mutual crystallographic orientation relationships (Figs. 3 and 4 ), which resulted from their attachment along lattice planes with matching interplanar spacings. The attachment of the cores along lattice planes with the same interplanar spacing is a phenomenon that has been observed even in dual-phase systems with different crystal structures of the counterparts 76 . Such cores are not mutually coherent for XRD, and can be easily distinguished by FFT/HRTEM because of their different crystallographic orientations. In contrast to XRD and HRTEM, low-magnification HAADF-STEM cannot distinguish these two kinds of cores from each other directly, but identifies them merely as convex parts of the IONFs.
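The arithmetic behind the quoted reduction from 74.3 to about 57 Am\(^2\)/kg follows directly from Eq. (6); a quick check with our own helper name:

```python
# Eq. (6) with a 0.5 nm disordered layer on a 13 nm core (helper name ours).
def m_s(d_nm, t_nm, m_bulk=74.3):
    """Saturation magnetization (Am^2/kg) of a particle of size d_nm with a
    disordered surface spin layer of thickness t_nm, after Eq. (6)."""
    return m_bulk * (1.0 - 6.0 * t_nm / d_nm)

print(round(m_s(13.0, 0.5), 1))  # 57.2 Am^2/kg, close to the quoted 57.1
```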
Furthermore, it should be mentioned that the determination of the size of the cores from low-magnification HAADF-STEM images fails when the cores overlap in the projection direction. However, this was rarely the case in our IONFs. Figure 9 Schematic illustration of the hierarchical structure of a dextran-coated IONF, adapted from Gavilán et al. 35 and modified. Hydrodynamic diameter \(D_h\) , particle diameter \(D_P\) , core diameter \(D_C\) and diameter of the core fragments \(D_F\) are indicated. Red and purple arrows mark neighboring cores with lattice planes with matching interplanar spacings and fragmented cores with nearly identical crystallographic orientation, respectively. Full size image The IONFs under study are agglomerates of cores consisting of individual NCs. The size of the IONFs was quantified using both HRTEM and HAADF-STEM (Table 1 ). Still, low-magnification HAADF-STEM is more reliable than HRTEM from the statistical point of view, because it allows more IONFs to be analyzed (Fig. 6 ). The accuracy of low-magnification HAADF-STEM for the determination of the size of the IONFs is sufficient, as only one segmentation step, i.e., the semi-automatic segmentation based on a marker-based watershed algorithm, is required 45 , 66 . From the point of view of the magnetic properties, the IONFs can behave as magnetic particles with uniform orientation of magnetic moments, even if their cores are crystallographically non-coherent. Still, adjacent cores should be attached along specific lattice planes like in Figs. 3 and 4 , and the angle between the easy magnetization axes of the individual cores should be small. Therefore, a cooperative magnetic behavior is expected also within the multi-core IONFs. A magnetic coupling was confirmed by the presence of magnetic particles having a size of \((20 \pm 5)\) nm as concluded from the Langevin fit of the magnetization curve (Fig. 8 ).
This particle size agrees very well with the size of the IONFs, which was \((20 \pm 4)\) nm and \((19 \pm 4)\) nm according to HAADF-STEM and HRTEM, respectively. The structure of the IONFs under study can be summarized as follows. The IONFs with the size \(D_P\) are composed of several cores having the size \(D_C\) (Fig. 9 ). The cores consist of several NCs having the size \(D_F\) . Individual NCs contain maghemite with the average chemical composition \(\gamma\) -Fe \(_{2.72\pm 0.02}\) O \(_4\) and with partially ordered vacancies on metallic positions (Fig. 2 d). The main driving force for the clustering of NCs and for the formation of the cores and IONFs is the minimization of the surface energy via oriented attachment of primary NCs along certain crystallographic facets 33 , 40 , 41 , 42 , 43 . This mechanism generally involves rotations of the NCs in three-dimensional space, until they share the same facets 77 . However, this process depends strongly on the reaction conditions. It has been shown previously that the internal structure of IONFs is influenced by many different parameters of the synthesis process, e.g., by the nature of the polyol solvent 41 , 43 , by the heating temperature, heating time and heating rate 38 , 39 , 78 , by the stoichiometry of the iron precursor 10 , 39 and by the presence and concentration of a reducing agent 32 , 41 , 78 . The arrangement of the cores in IONFs is controlled primarily by the kinetics of the nucleation and aggregation of the primary NCs, which in turn depends on the type of polyol used for the synthesis 43 . Higher formation and growth rates of the NCs cause faster aggregation, resulting in a higher misalignment of the NCs within the IONFs. As we observed not only a fully epitaxial alignment but also specific orientation relationships between individual NCs building up the IONFs, we can conclude that the nucleation and aggregation of the NCs in our IONFs were slightly too fast.
Consequently, not all NCs had enough time to rotate into the same crystallographic orientation. Some NCs were merely aligned along specific lattice planes that were parallel to each other. This kind of alignment of NCs might partially reduce the surface energy but also inhibit a full alignment of the NCs. Moreover, this alignment of NCs produces local strain fields, which are compensated by crystal structure defects, possibly dislocations (Fig. 3 ). Conclusions A combination of TEM, XRD and DLS disclosed the hierarchical architecture of dextran-coated multi-core IONFs prepared by a polyol method. The TEM measurements combined high-resolution (HRTEM with FFT and GPA) and low-resolution (HAADF-STEM) modes in a correlative multiscale approach in order to describe the internal structure of the IONFs on the atomic scale including the orientation relationships between individual NCs and cores, and to determine the size distribution of the constituents in a statistically relevant manner. It was shown that the basic units of the IONFs are maghemite NCs with partially ordered vacancies on the iron sites. NCs with distinct crystallographic orientation relationships form magnetic cores, which agglomerate and build up the IONFs. Neighboring cores were typically attached by sharing lattice planes with the same interplanar distance. The presence of these objects was confirmed by the Langevin fit of the magnetization curve measured using AGM. As the magnetic sizes of the NCs, of the cores and of the IONFs were very close to the corresponding sizes obtained from the microstructure analysis, it was concluded that the magnetic moments of individual NCs interact mutually. It was shown that the magnetic interaction between individual NCs and cores is strongly affected by their mutual crystallographic orientation.
The strongest coupling of magnetic moments was observed between neighboring NCs that had almost the same crystallographic orientation and that formed the magnetic cores. A weaker but still existing magnetic interaction was detected between the magnetic cores within individual IONFs, which had a distinct orientation relationship but no full crystallographic coherence. From the difference between the particle sizes obtained from the microstructure analysis and from the magnetic measurement, it was concluded that the magnetic cores have a disordered spin layer at the rim. This layer, which has a thickness of approximately 0.5 nm, reduces the saturation magnetization of the IONFs together with the inhomogeneous ordering of the vacancies on the iron sites in \(\gamma\) -Fe \(_{2.72\pm 0.02}\) O \(_4\) . Data availability The datasets analyzed in the current study are available from the corresponding author on request.
Iron oxide nanoparticles are often used in medical technology as contrast agents for magnetic resonance imaging or as transport agents for drugs in the bloodstream, for example in tumor therapy. For these applications, the nanoparticles have to be biocompatible and superparamagnetic. That is, they must be strongly magnetizable in a magnetic field but lose their magnetization when the field is switched off. Using analytical high-resolution transmission electron microscopy, a team at TU Bergakademie Freiberg investigated how the magnetic properties of the nanoparticles can be further improved via microstructure design. The researchers published their results in the current issue of Scientific Reports. Knowledge of the exact structure of the iron oxide nanoparticles, sized between 20 and 30 nanometers, helps to optimize the manufacturing process and to improve the magnetic properties of the particles systematically. Each particle consists of at least two superparamagnetic nanocrystalline cores and a shell that does not contribute to the magnetic properties. The maximum magnetization of the nanoparticles depends on the mutual orientation of the individual cores. How well are the cores oriented to each other? "The current state of research assumed that a strong alignment of magnetic moments in multi-core iron oxide nanoparticles is enabled by the same crystallographic orientation of individual cores. However, our analyses showed that this is not necessarily true," says Stefan Neumann, research associate at TU Bergakademie Freiberg and the first author of the publication. "Other, yet still specific, crystallographic orientation relationships of the cores can also promote their magnetic interaction. Nevertheless, a fully random alignment of the cores deteriorates the magnetic properties of the nanoparticles," says Neumann.
"In order to be able to produce highly superparamagnetic iron oxide nanoparticles for future applications in medicine on demand, we need knowledge of their internal structure," says co-author Prof. David Rafaja, head of the Institute of Materials Science at TU Bergakademie Freiberg. "During the production of the nanoparticles, individual cores are formed first. When the cores get more time to align in the right way, then the magnetic properties of the nanoparticles can further be improved." Background: Analyzing ultra-fine particles The results were obtained within the priority program "MehrDimPart—Highly Specific and Multidimensional Fractionation of Fine Particle Systems with Technical Relevance." The aim of the research is to develop technological approaches that enable a controlled production of highly specific and technologically relevant particle systems with desired properties. In addition to the team from TU Bergakademie Freiberg, scientists from the Karlsruhe Institute of Technology have also contributed to the current publication. The basic research behind this work was focused on the structure of the nanoparticles to be able to optimize the production of particles with specific magnetic properties. A toxicological study was not carried out.
10.1038/s41598-023-31294-4
Medicine
AI predicts if—and when—someone will have cardiac arrest
Natalia Trayanova, Arrhythmic sudden death survival prediction using deep learning analysis of scarring in the heart, Nature Cardiovascular Research (2022). DOI: 10.1038/s44161-022-00041-9. www.nature.com/articles/s44161-022-00041-9
https://dx.doi.org/10.1038/s44161-022-00041-9
https://medicalxpress.com/news/2022-04-ai-ifand-whensomeone-cardiac.html
Abstract Sudden cardiac death from arrhythmia is a major cause of mortality worldwide. In this study, we developed a novel deep learning (DL) approach that blends neural networks and survival analysis to predict patient-specific survival curves from contrast-enhanced cardiac magnetic resonance images and clinical covariates for patients with ischemic heart disease. The DL-predicted survival curves offer accurate predictions at times up to 10 years and allow for estimation of uncertainty in predictions. The performance of this learning architecture was evaluated on multi-center internal validation data and tested on an independent test set, achieving concordance indexes of 0.83 and 0.74 and 10-year integrated Brier scores of 0.12 and 0.14. We demonstrate that our DL approach, with only raw cardiac images as input, outperforms standard survival models constructed using clinical covariates. This technology has the potential to transform clinical decision-making by offering accurate and generalizable predictions of patient-specific survival probabilities of arrhythmic death over time. Main Sudden cardiac death (SCD) continues to be a leading cause of mortality worldwide, with an incidence of 50–100 per 100,000 in the general population in Europe and North America 1 , and accounts for 15–20% of all deaths 2 . Patients with coronary artery disease are at the highest risk of arrhythmic sudden cardiac death (SCDA) 3 , 4 . Although implantable cardioverter-defibrillators (ICDs) effectively prevent SCD due to ventricular arrhythmias, current clinical criteria for ICD candidacy—that is, left ventricular ejection fraction (LVEF) <30–35% 5 —capture a mere 20% of all SCDAs 6 , highlighting the critical need to develop personalized, accurate and cost-effective arrhythmia risk assessment tools to mitigate this enormous public health and economic burden.
Several studies have identified risk factors for SCDA, and many risk stratification approaches have attempted to transcend LVEF 7 , 8 . However, limitations in these approaches have been barriers to their clinical implementation. Previous attempts have broadly stratified populations based on subgroup risk, failing to customize predictions to patients’ unique clinical features 9 . SCDA risk has been typically assessed at pre-defined finite time points, ignoring the likely patient-specific time evolution of the disease 10 . Additionally, in previous work, confidence estimates for predictions have been ‘one-size-fits-all’, varying only by risk subgroup, thus preventing the identification of low-confidence, potentially highly erroneous prediction outliers 11 . Moreover, few previous studies have validated their results externally or comprehensively compared model performance to standard approaches. A robust, generalizable SCDA risk stratifier with the ability to predict individualized, patient-specific risk trajectories and confidence estimates could considerably enhance clinical decision-making. Finally, although arrhythmia arises, mechanistically, from the heterogeneous scar distribution in the disease-remodeled heart, machine learning the features of that distribution has not been explored for risk analysis. Image-derived mechanistic computational models of cardiac electrical function that incorporate scar distribution have proven successful in predicting arrhythmia risk 12 ; however, they remain exceedingly computationally intensive. Therefore, computational models are impractical as a first-stage screening tool in a broad population. Using raw contrast-enhanced (late gadolinium enhancement (LGE)) cardiac images that visualize scar distribution in a DL framework, which additionally draws on standard clinical covariates, could overcome these limitations and lead to accurate patient-specific SCDA probabilities in fractions of a second. 
Here we present a DL technology for prediction of SCDA risk in patients with ischemic heart disease. Our approach, which we term Survival Study of Cardiac Arrhythmia Risk (SSCAR), embeds, within a survival model, neural networks to estimate individual patient times to SCDA ( \(T_{\mathrm{SCDA}}\) ). The neural networks learn from raw clinical imaging data, which visualize heart-disease-induced scar distribution, as well as from clinical covariates. The predicted patient-specific survival curves offer accurate SCDA probabilities at all times up to 10 years. The performance and high generalizability of the approach are demonstrated by testing on an external cohort, after internal cross-validation. Our technology represents a fundamental change in the approach to arrhythmia risk assessment, as SSCAR uses the data to directly estimate uncertainty in its predictions. Therefore, SSCAR has the potential to considerably shape clinical decision-making regarding arrhythmia risk, offering not a simple ‘at-risk/not-at-risk’ prediction but, instead, an estimate of the time to SCDA together with a sense of ‘how certain’ the model is about each predicted \(T_{\mathrm{SCDA}}\) . Results SSCAR overview The arrhythmia risk assessment algorithm in SSCAR is a DL framework that incorporates multiple custom neural networks (which fuse different data types), combined with statistical survival analysis, to predict patient-specific probabilities of SCDA at future time points. Figure 1 presents an overview of SSCAR. On the left and right, cardiac magnetic resonance (CMR) images and clinical covariates (yellow panel) are used as inputs to the two corresponding branches of the model. The goal of each of the branches is to predict the patient-specific survival curve. In the left branch, cardiac CMR images—visualizing the patients’ three-dimensional (3D) ventricle geometry and contrast-enhanced remodeled tissue—are used as input by a custom-designed encoder–decoder convolutional neural sub-network (red panel, left).
This CMR sub-network is trained to reduce the dimension of the input (that is, encode) and to discover and extract imaging features associated with SCDA risk directly from the CMR images by learning and applying filters (that is, convolving). The encoder–decoder design of the sub-network ensures that resulting imaging features retain sufficient information to be able to reconstruct the original images (red panel, left, decoder path). In the right branch, the 22 clinical covariates in Table 1 are provided to a dense sub-network (green panel, right), which discovers and extracts non-linear relationships between the input variables. The outputs of the sub-networks are combined (ensembled) in a way that best fits the observed SCDA event training data (center path, dot-dashed) to estimate the most probable \(T_{\mathrm{SCDA}}\) and the uncertainty in the prediction. The output of the model is a per-patient cause-specific survival curve (bottom, blue). Fig. 1: Schematic overview of SSCAR. Top panel (yellow) shows patient data used in this study. SSCAR uses contrast-enhanced (LGE)-CMR images with the left ventricle automatically segmented (left inset) and clinical covariates (right inset; see Methods and Table 1 for a complete list) as inputs to the two sub-networks (left and right pathways). Labels associated with each patient (SCDA events, middle inset, dot-dashed contour)—consisting of the observed times to event and indicators whether the events were SCDA or non-SCDA—are used as targets during training only. LGE-CMR data is taken as input by a 3D convolutional neural network constructed using an encoder–decoder architecture (red panel, left). Clinical covariates are fed to a dense neural network (green panel, right).
The sub-networks are trained to estimate two parameters (location μ and scale σ ) specific to each patient, which fully characterize the probability distribution of the patient-specific time to SCDA (top blue panel; the time to SCDA is modeled as probabilistic, assumed to follow a log-logistic distribution). During training (dot-dashed arrows and white middle panel), the neural network weights are optimized via a maximum likelihood process, in which a probability distribution is sought (blue double-headed arrow in the middle white panel) to best match the observed survival data (yellow ‘x’s in the middle white panel). Finally, the optimized probability function is used on test LGE-CMR images and covariates to predict patient individualized survival curves (blue bottom panel). Full size image Table 1 Clinical covariate data. Full size table SSCAR overall risk prediction performance SSCAR was developed and internally validated using data from 156 patients with ischemic cardiomyopathy (ICM) enrolled in the Left Ventricle Structural Predictors of Sudden Cardiac Death (LVSPSCD) prospective observational study 11 , 13 . SSCAR performance was evaluated comprehensively on this internal set using Harrell’s concordance index (c-index) 14 —range is [0, 1], higher scores are better—and the integrated Brier score ( \(\overline{\,{{\mbox{Bs}}}\,}\) ) 15 —range is [0, 1], lower scores are better. SSCAR has excellent concordance on the internal set (0.82–0.89) for all times up to 10 years (Fig. 2a ). Additionally, the \(\overline{\,{{\mbox{Bs}}}\,}\) ranges from 0.04 to 0.12, suggesting strong calibration, given the high concordance. The model maintains its risk discrimination abilities at all times, as further evidenced by the high areas under the receiver operator characteristic (ROC) curves evaluated at years 2–9 (Extended Data Fig. 1 ). All events up to 10 years are used to construct the cross-validated ROC and precision-recall (PR) curves for the internal validation set (Fig.
2b,c ). The area under the ROC curve is 0.87 (95% confidence interval (CI): 0.84–0.90), whereas the area under the PR curve is 0.93 (95% CI: 0.91–0.95). Fig. 2: SSCAR overall performance. a , C-index (top, blue) measuring model risk discrimination—higher is better—and integrated Brier score (bottom, red) showing overall fit—lower is better—for various time points. b , ROC curve at 10 years for the internal validation and external test cohorts, with the respective areas under the curve (AUROC). c , PR curve at 10 years for the internal validation and external test cohorts, with the respective areas under the curve (AUPR). For all panels, shaded areas represent approximate 95% CIs; solid and dashed lines indicate averages for the internal and external cohorts, respectively; and random chance performance thresholds are shown using dotted lines (the dot-dashed line is used to differentiate the internal random chance performance from the external). The chosen time of 10 years was used to capture all SCDA events in the population. Source data Full size image To demonstrate the model’s performance, an external test was performed using an independent case–control set of 113 patients with coronary heart disease selected from participants with available CMR images and the same list of covariates enrolled in the PRE-DETERMINE study 16 . These patients had less severe left ventricular systolic dysfunction but otherwise had similar inclusion/exclusion criteria to patients in the LVSPSCD study ( Methods ). Despite the dissimilarities between cohorts, SSCAR performance carries over well to the external cohort, resulting in a c-index of 0.71–0.77 and \(\overline{\,{{\mbox{Bs}}}\,}\) of 0.03–0.14 (Fig. 2a ; dashed lines). The area under the ROC curve is 0.72 (95% CI: 0.67–0.77), and the area under the PR curve is 0.73 (95% CI: 0.68–0.78), on the external set (Fig. 2b,c ).
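Harrell's c-index reported above counts, over all comparable patient pairs under right-censoring, the fraction that the model orders correctly. A minimal sketch with hypothetical numbers (not study data):

```python
def c_index(times, events, predicted):
    """Harrell's concordance index for right-censored data.

    times     : observed times to event or censoring
    events    : 1 if the event (here SCDA) occurred, 0 if censored
    predicted : model scores; larger = longer predicted survival
    A pair (i, j) is comparable when the earlier time is an observed event;
    it is concordant when the model orders the pair the same way (ties = 0.5).
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if predicted[i] < predicted[j]:
                    concordant += 1.0
                elif predicted[i] == predicted[j]:
                    concordant += 0.5
    return concordant / comparable

# Tiny worked example (hypothetical patients):
t = [2.0, 4.0, 5.0, 8.0, 9.0]        # years
e = [1, 1, 0, 1, 0]                  # 1 = SCDA, 0 = censored
pred = [2.5, 3.0, 7.0, 7.5, 9.5]     # predicted most probable times to SCDA
print(c_index(t, e, pred))           # 1.0: every comparable pair ordered correctly
```

Censored patients (e = 0) contribute only as the later member of a pair, since their true event time is unknown.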
Patient-specific survival curves predicted by SSCAR The SSCAR survival model presented here predicts cause-specific survival curves for each patient through two individualized parameters—the location μ and scale σ —characterizing the probability distribution of \(T_{\mathrm{SCDA}}\) ( Methods ). Using deep neural networks to directly learn these parameters from CMR images and from clinical covariates in a way that best models the survival data produces highly individualized survival probability predictions. Extended Data Fig. 2a illustrates individualized cause-specific survival curves (solid, blue) for a patient with \(T_{\mathrm{SCDA}}\) around 6 years (left panel) and a patient censored (non-SCDA event) at around 7 years (right panel). In both cases, the survival curves estimated by SSCAR accurately predict the event probabilities: in the first case, the estimated survival probability crosses the 50% threshold close to the event time; in the censored case, SSCAR predicts more than 80% probability of survival at the time of the (non-SCDA) event. For reference, two commonly used survival curves are depicted: the Kaplan–Meier estimate (purple, dot-dashed) and the Breslow estimate based on a Cox proportional hazards model using the clinical covariates (green, dashed), demonstrating worse performance by underestimating the risk for the patient with SCDA and overestimating the risk for the censored patient. Further details on SSCAR’s internal performance compared to the Cox proportional hazards model are presented in Fig. 3 . Fig. 3: Model comparison.
In blue (left y axis), c-index (dark blue), balanced accuracy (BA, mid-blue), F-score (light blue) and in red (right y axis) integrated Brier score ( \(\overline{\,{{\mbox{Bs}}}\,}\) ) are shown for a standard Cox proportional hazards model fit on the clinical covariates (linear Cox PH), the covariate sub-network of the SSCAR approach using clinical covariates with a Cox survival model (covariate only, Cox), the covariate sub-network of SSCAR using clinical covariates and the log-logistic survival model (covariate only), the CMR sub-network using images only (CMR only) and the full arrhythmia prediction neural network model (SSCAR). Random chance performance thresholds are shown using dotted lines. All performance measures are calculated using data up to τ = 10 years. All model comparison values are based on averages over 100 cross-validation train/test splits of the internal validation dataset. The error bars represent approximate 95% CIs. Source data Full size image The predicted location parameter estimates the most probable \(T_{\mathrm{SCDA}}\) , and the predicted scale parameter provides a measure of confidence for the location. The inclusion of both a location and a scale parameter in the model offers the advantage of building in uncertainty directly into the \(T_{\mathrm{SCDA}}\) prediction. Notably, this uncertainty is patient-specific and learned from data. Extended Data Fig. 2b presents examples of predicted \(T_{\mathrm{SCDA}}\) probability distributions for two patients (P1 and P2) with different scale parameters, visualized as the widths of the distributions. Shown are the actual (dotted) and predicted (solid) \(T_{\mathrm{SCDA}}\) as well as the probability distributions (shaded). For P1, the prediction error is small (solid versus dashed vertical lines), and the model is certain, as seen by the narrower probability distribution of P1’s \(T_{\mathrm{SCDA}}\) or, equivalently, a smaller predicted scale parameter.
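Under the standard log-logistic parameterization (ln of the event time logistic with location μ and scale σ, which is consistent with the location/scale description above; the function names and patient numbers below are our own illustrative choices), the patient-specific survival curve is S(t) = 1/(1 + exp((ln t - μ)/σ)), and a larger σ widens the distribution:

```python
import math

def survival(t, mu, sigma):
    """Cause-specific survival S(t) when ln(time to event) is logistic with
    location mu and scale sigma, i.e. the time itself is log-logistic."""
    return 1.0 / (1.0 + math.exp((math.log(t) - mu) / sigma))

def most_probable_time(mu):
    """The median (S = 0.5) falls at t = exp(mu)."""
    return math.exp(mu)

# Two hypothetical patients sharing a most probable event time of 6 years
# but with different predicted confidence (scale):
mu = math.log(6.0)
for label, sigma in (("confident (narrow)", 0.1), ("uncertain (wide)", 0.5)):
    row = [round(survival(t, mu, sigma), 3) for t in (2.0, 6.0, 10.0)]
    print(label, row)   # S(2), S(6), S(10); S(6) = 0.5 for both
```

Both curves cross 50% at 6 years, but the wide-scale patient keeps substantial probability mass far from the median, mirroring how the model "lowers the confidence" in uncertain predictions.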
In the case of P2, the prediction error is larger, and the model predicts a wider distribution or, equivalently, a larger scale parameter, indicating higher uncertainty. Remarkably, using the entire internal cohort to quantify this direct relationship between prediction error—calculated as the relative mean absolute difference of actual and predicted times—and scale parameter reveals a significant positive correlation (Pearson’s r = 0.42, P < 0.001), demonstrating that SSCAR recognizes which predictions of T S C D A will turn out inaccurate and ‘lowers the confidence’ in them through a larger scale parameter. Image-based risk prediction The CMR sub-network (see Extended Data Fig. 3 for architecture details) in SSCAR integrates neural network DL on images within an overall statistical survival model. This branch of SSCAR uses LGE-CMR—a modality uniquely suited for visualizing ventricle geometry and portions of the myocardium with contrast-enhanced remodelling—to learn the image features most useful in predicting a patient’s T S C D A . Raw CMR pixel values from the automatically segmented left ventricle are provided directly to the network, eliminating the need for arbitrary thresholds aiming to delineate areas of enhancement. Using only images as inputs (Fig. 3 and Supplementary Table 1 ), SSCAR achieves 0.70 (95% CI: 0.67–0.72) c-index and 0.17 (95% CI: 0.167–0.178) \(\overline{\,{{\mbox{Bs}}}\,}\) for event data truncated at 10 years on the internal validation set. On the external testing set, the CMR-only model achieves 0.63 (95% CI: 0.59–0.66) c-index and 0.19 (95% CI: 0.186–0.200) \(\overline{\,{{\mbox{Bs}}}\,}\) . It is noteworthy that the covariate sub-network’s 22 clinical covariates already include manually engineered features from the CMR images. 
For example, infarct size—calculated as the percentage of left ventricular tissue deemed fibrotic using manual segmentation performed by trained experts—was among the 22 and, indeed, had considerable effect on lowering T S C D A . Despite including CMR-based features in the covariate network, the CMR sub-network (using only CMR as inputs) achieves similar performance to the covariate one (Fig. 3 ). Furthermore, ensembling the two sub-networks together leads to a significant increase in overall performance compared to using just the covariate-based one, demonstrating that the CMR sub-network identifies different CMR-based features than the manually engineered ones. Imaging features learned by the CMR network can be interpreted using a gradient-based sensitivity analysis (Fig. 4a ). The gradient here quantifies the effect on the predicted T S C D A of features identified by the CMR neural network, which are averaged per patient to form the gradient map ( Methods ). This map overlaid on the myocardium (right column, blue and red heat map) shows the degree of contribution of the local pixel intensity to the most probable T S C D A (that is, to the location parameter) for a patient without an SCDA event (top) and one with SCDA (bottom). Myocardial regions found to be characterized with large positive gradient (dark blue) are interpreted as having high importance in increasing T S C D A , and, conversely, regions with large magnitude negative gradient (dark red) represent areas that are responsible for decreasing the predicted T S C D A . The areas of contrast-enhanced myocardium (middle column, in brighter green) do not fully overlap with the gradient map, which suggests that, although features learned by the CMR neural network may co-localize with enhanced tissue, the algorithm does not act as a mere enhancement locator. 
For example, the patient who did not experience SCDA has contrast-enhanced tissue, but the effect of these regions is to increase the predicted T S C D A , suggesting a nuanced relationship between the presence of enhancement and the propensity for SCDA. Fig. 4: SSCAR interpretation. The features learned by SSCAR are interpreted by performing a gradient-based sensitivity analysis of the location parameter (the most probable T S C D A ) to changes in the neural network input or features. The gradient value quantifies this sensitivity. The magnitude of the gradient measures the strength of the sensitivity of the predicted T S C D A to inputs or intermediary features. The sign of the gradient shows the direction of the effect. That is, for a small increase in the value of inputs or features, a positive gradient (blue) indicates a higher predicted T S C D A , whereas a negative gradient (red) indicates a decrease in the predicted T S C D A . a , Shown is the CMR sub-network feature interpretation for an example patient who did not experience SCDA (No SCDA, top) and for a patient who did (SCDA, bottom). For each patient, a subset of 3 of the 12 contrast-enhanced short-axis CMR images (corresponding to three locations in the heart, base to apex, top to bottom, left column) used as inputs by SSCAR are overlaid with blood pool and myocardium segmentation (middle column, orange and green, respectively). A heat map of extracted features scaled by the value of the gradient shows contribution of the local pixel intensity to the predicted location parameter for the last convolutional layer (right column, blue and red heat maps). Of note, although the patient with SCDA shows high gradients in areas with contrast enhancement, the patient without SCDA shows that enhancement can also lead to positive gradients, suggesting that the network does not simply create a mask of the enhanced regions to make predictions but learns a nuanced relationship between scar and propensity for SCDA. 
b , Covariate sub-network interpretation based on an average of all patients (mid-blue and mid-red bars), patients with SCDA (dark blue and dark red bars) and patients with no SCDA (light blue and light red bars). Top four highest (blue bars) and bottom four lowest (red bars) average gradients of the neural network output (that is, the predicted location parameter) with respect to the clinical covariate inputs are shown. The error bars represent approximate 95% CIs. LVEF CMR, left ventricular ejection fraction computed from CMR; betablock, use of β -blocker medication; ECG hr, heart rate from ECG; digoxin, use of digoxin medication; infarct %, infarct size as % of total volume; ECG QRS, QRS complex duration from ECG; LV mass ED, left ventricular mass in end-diastole. Source data Full size image Non-linear neural network for covariate data SSCAR incorporates patient clinical covariate data (Table 1 ) through the use of a dense, multi-layer neural network (Fig. 1 , green panel). This sub-network discovers and extracts potential non-linear relationships between the covariates and integrates them within SSCAR’s overall survival predictions. We demonstrate the utility of the sub-network by comparing its performance with a (linear) Cox proportional hazards model (Fig. 3 ). To avoid mis-attributing performance differences to the underlying statistical models, we consider an intermediary model that uses neural network feature extraction with a Cox proportional hazards model. Using clinical covariate data only, SSCAR with a Cox survival model (covariate only, Cox) outperforms the standard Cox proportional hazards model (linear Cox PH) in terms of c-index (0.73 versus 0.58, dark blue, left y axis), balanced accuracy (0.65 versus 0.45, mid-blue, left y axis), F-score (0.78 versus 0.69, light blue, left y axis) and \(\overline{\,{{\mbox{Bs}}}\,}\) (0.14 versus 0.30, red, right y axis). 
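The c-index values compared above measure risk discrimination: the fraction of comparable patient pairs whose predicted risk ordering agrees with the observed event ordering. A minimal, censoring-aware sketch (in SSCAR the risk score derives from the predicted location parameter, with a lower predicted T S C D A implying a higher risk; the toy data here are hypothetical):

```python
def harrell_c_index(times, events, risks):
    """Harrell's c-index: a pair (i, j) is comparable when patient i has an
    observed event strictly before time j; concordance means the earlier
    event carries the higher risk score. Risk ties count as 1/2."""
    concordant, tied, comparable = 0, 0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Hypothetical cohort: times in years, event = 1 (SCDA) or 0 (censored);
# risk is taken as the negative of a predicted most-probable event time.
times = [2.0, 4.0, 5.0, 8.0]
events = [1, 1, 0, 0]
cindex = harrell_c_index(times, events, [-2.5, -3.5, -9.0, -7.0])
```

A value of 0.5 corresponds to random ordering (the "random chance" threshold in Fig. 3), and 1.0 to perfect concordance.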
We show that the neural network model maintains interpretability by performing a sensitivity analysis of the predicted T S C D A with respect to changes in the covariates (Fig. 4b ). As above, high positive gradients (blue) denote covariates for which small increases in their values lead to large increases in T S C D A , whereas large-magnitude negative gradients (red) denote covariates for which small increases in their values lead to large decreases in T S C D A . The top four positive gradient covariates are LVEF computed from CMR, β -blocker medication, heart rate computed from electrocardiogram (ECG) and use of digoxin. The bottom four negative gradient covariates are left ventricular mass at end-diastole, use of diuretic medication, QRS duration computed from ECG and infarct size (%). Discussion Here we present an approach to SCDA risk assessment, termed the SSCAR framework, which uses a deep neural network survival model to predict patient-specific survival curves in ischemic heart disease. SSCAR consists of two neural networks: (1) a 3D convolutional network learning on raw LGE-CMR images, not segmented for enhancement, that visualize heart-disease-induced scar distribution and (2) a fully connected network operating on clinical covariates. SSCAR’s predicted patient-specific survival curves offer accurate SCDA probabilities at all times up to 10 years. SSCAR is not only a highly flexible model, able to capture complex imaging and non-imaging feature inter-dependencies, but is also robust owing to the statistical framework governing the way these features are combined to fit the survival data. Our framework predicts entire probability distributions for the T S C D A , allowing for uncertainties in predictions to be themselves patient-specific and learned from data, thereby equipping the model with a self-correction mechanism. This approach remedies a well-known limitation of neural networks—high confidence in erroneous predictions. 
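The self-correction property above was quantified in the Results as a positive Pearson correlation between relative prediction error and the predicted scale parameter. A self-contained sketch of that computation, using hypothetical predicted times, actual times and scales:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: predicted vs. actual event times (years) and scales.
pred = [5.8, 3.1, 9.0, 2.0]
actual = [6.0, 4.0, 5.0, 2.1]
scales = [0.2, 0.5, 0.9, 0.15]
rel_err = [abs(p - a) / a for p, a in zip(pred, actual)]
r = pearson_r(rel_err, scales)  # positive when larger errors get larger scales
```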
SSCAR’s integration of deep neural network learning within a survival analysis and the resulting detailed outputs could represent a paradigm shift in the approach to SCDA risk assessment. Despite many heralding DL as the arrival of the artificial intelligence (AI) age in personalized healthcare 17 , 18 , 19 , 20 , 21 , no considerable progress has so far been made using DL on contrast-enhanced cardiac images to assess arrhythmia risk. Although there have been non-DL efforts to incorporate clinical imaging-derived features in SCDA risk stratification 22 , 23 , 24 , these severely underuse the data, suffering from two main limitations: features often rely on time-consuming, manual processing steps, typically involving arbitrarily chosen image intensity thresholds; or features are either too coarse to capture the intricacies of the scar distribution or highly mathematical, undermining their physiological underpinning. On the other hand, the DL efforts related to arrhythmia have focused primarily on its cardiologist-level detection in ECG signals 25 , 26 , 27 , 28 , 29 . In the current work, we present a DL approach that takes as input directly raw, unsegmented LGE-CMR images and automatically identifies features that best model and predict the T S C D A . SSCAR is an SCDA risk prediction model that combines raw imaging with other data types in the same DL framework. Our technology operates on LGE-CMR images and clinical covariates within a unified feature learning process, allowing for the different data types to synergistically inform the overall survival model. Among the clinical covariates used in SSCAR are standard manually derived imaging features, which prevents the CMR neural network from merely re-discovering these known features and, instead, encourages it to learn new features. SSCAR achieves performance that is beyond the state-of-the-art in both relative terms—SCDA risk ordering among patients—and absolute terms—accurately calibrated probabilities of SCDA. 
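The "absolute terms" above refer to calibration, which the paper measures with the integrated Brier score. A censoring-free sketch of that quantity (the paper's version additionally weights by the inverse probability of censoring):

```python
def integrated_brier_score(event_times, surv_fns, tau, n_grid=100):
    """Time-average, over a grid up to tau, of the mean squared difference
    between the indicator 1{T_i > t} and the predicted survival S_i(t).
    Lower is better; 0 means perfectly calibrated step predictions."""
    grid = [tau * (k + 1) / n_grid for k in range(n_grid)]
    scores = []
    for t in grid:
        errs = [((1.0 if T > t else 0.0) - S(t)) ** 2
                for T, S in zip(event_times, surv_fns)]
        scores.append(sum(errs) / len(errs))
    return sum(scores) / len(scores)

# Perfect step-function predictions score 0; a constant 0.5 scores 0.25.
times = [1.0, 3.0]
perfect = [lambda t, T=T: 1.0 if t < T else 0.0 for T in times]
ibs = integrated_brier_score(times, perfect, tau=4.0)
```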
Our robust testing scheme overcomes important limitations of previous work on SCDA risk prediction 10 , 16 , 22 , 23 , 30 . First, we demonstrate high generalizability by computing internal cross-validation performance numbers over 100 train/test splits of the data and, notably, by evaluating on an entirely separate external cohort, observing only modest performance degradation. Second, our approach prevents the model from being over-tuned to a certain time horizon by computing performance metrics at multiple time points up to 10 years. Because SSCAR is a combination of neural networks, each working on different data types (images and clinical covariates), we were able to perform a comprehensive bottom-up analysis of overall performance. We demonstrated that the added complexity of our DL approach—potentially at some expense to interpretability—is justified by the significantly elevated performance numbers. Indeed, we developed and evaluated a regularized Cox proportional hazards model using the available clinical covariates to serve as a baseline for the rest of the analysis. We showed that the neural-network-driven feature extraction of SSCAR on the same covariates performs significantly better in the same proportional hazards setting, highlighting the importance of non-linear relationships in the covariates. Furthermore, we showed that, even when using only LGE-CMR images to predict arrhythmia risk, the CMR neural network in SSCAR (1) outperforms the Cox proportional hazards model constructed using clinical covariates, which include standard imaging and non-imaging features, and (2) performs on par with the covariate-only network in SSCAR using the same clinical variables, suggesting that the image-only neural network in SSCAR is able to identify highly predictive imaging features in the LGE-CMR images. 
Finally, we demonstrate that the imaging features found by SSCAR’s CMR network cannot be explained away even when considering non-linear relationships between standard covariates, as evidenced by the ensembled SSCAR model’s superior performance over SSCAR using either data type alone. Notably, a level of interpretability is embedded in the overall design of the custom neural network used in SSCAR. Interpretability of AI algorithms is paramount to their broad adoption, and concerns surrounding it are particularly prevalent in healthcare. In our approach, we take multiple steps to ensure the relevance and interpretability of resulting features. Our sensitivity analysis of the outputs to the extracted features offers a lens into the neural network, lending some transparency to the algorithmic ‘black box’ (Fig. 4 ). In addition, CMR images taken as input by the CMR neural network are automatically segmented to include myocardium-only raw intensity values, and the network is designed as an encoder–decoder to ensure minimal loss of information during the feature extraction process. SSCAR achieves strong performance despite working on a relatively small dataset. A concern with DL on smaller datasets is overfitting, which manifests itself as high performance during training (good fit) but poor performance when applied to a new test set. Indeed, the results in this paper show some differences between metrics on the internal validation and external test cohorts. However, we emphasize that, although the two cohorts’ covariates were ‘harmonized’ where possible ( Methods ), they represent two different distributions (for example, low versus moderately reduced LVEF, unmatched versus matched case–control and three versus 60 CMR acquisition sites), likely accounting for any performance differences in the two populations. Furthermore, several measures were taken to mitigate overfitting. 
In addition to standard techniques—dropout, kernel and bias regularizers—we designed the CMR sub-network as an encoder–decoder that uses the distilled features used in risk prediction to also reconstruct the original image as an additional regularization technique. Finally, all numbers cited on the internal validation set are averages of the test performance over the 100 train/test data splits, adding a layer of statistical rigor. In SSCAR, we directly model the cause-specific hazard rate and use the implied survival function to make predictions. A potential shortcoming of models that do not directly model competing risks is that predicted probabilities for the event of interest assume a reality where no other type of death could occur, thereby potentially undermining interpretability. A limitation here is that we could not compute the cause-specific cumulative incidence function, as it requires additional all-cause mortality data as well as competing risk data (for example, revascularization data). However, should such data become available, our competing risk framework makes such an extension straightforward. An additional limitation in this work is that the list of covariates is not comprehensive. A few standard clinical covariates were dropped when ‘harmonizing’ the internal and external cohorts (for example, all diuretic types were merged into one variable, and there were no angiotensin receptor-neprilysin inhibitor data). However, because no left ventricle standard imaging covariates were excluded, we do not expect any of the omitted variables to affect conclusions drawn regarding the performance of the sub-components of SSCAR relative to the baseline Cox model. Including additional covariates identified in past work as predictors of SCDA, but not part of standard clinical practice, was beyond the scope of our work. However, these could, in principle, erode the performance of the image-based feature extraction in SSCAR in favor of the covariate-only part. 
Nevertheless, we would expect that, in general, including more variables with proper regularization can only improve the overall results in SSCAR, even if a re-balance of its components’ performance contribution occurs. Similarly, including right ventricle CMR images and parameters and adjusting the methodology accordingly could help generalize SSCAR to more cardiomyopathies. SSCAR fuses cutting-edge DL technology with modern survival analysis techniques. It represents innovation in CMR imaging feature extraction and learning of non-linear relationships among standard clinical covariates. The technology aims to transform clinical decision-making regarding arrhythmia risk and patient prognosis by encouraging practitioners to eschew the view of predicted risk as a single number output by a ‘black-box’ algorithm and instead to be guided by the estimated time-to-outcome in the context of patient-specific time-prediction uncertainty, which is itself built into SSCAR’s learning process. Through its accurate predictions and considerable levels of generalizability and interpretability, SSCAR represents an essential step toward bringing patient trajectory prognostication into the age of AI. Methods The research protocol used in this study was reviewed and approved by the Johns Hopkins University institutional review board and the Brigham and Women’s Hospital institutional review board. All participants provided informed consent to be part of the clinical studies described below. There was no participant compensation. Patient population and datasets This study was a retrospective analysis based on a subset ( n = 269) of patients selected from the prospective clinical trials described below using the process outlined in Extended Data Fig. 4 . 
Of note is that the entire model development described in this manuscript was based on the internal cohort (see below), whereas the case–control external cohort was used exclusively for testing (outcomes were solely used for computing relevant metrics once the model was fixed). LVSPSCD cohort (internal) Patient data came from the LVSPSCD study (ClinicalTrials.gov ID NCT01076660 ) sponsored by Johns Hopkins University. As previously described 11 , 13 , patients satisfying clinical criteria for ICD therapy for SCDA (LVEF ≤35%) were enrolled at three sites: Johns Hopkins Medical Institutions (Baltimore, Maryland), Christiana Care Health System (Newark, Delaware) and the University of Maryland (Baltimore, Maryland). A total of 382 patients were enrolled between November 2003 and April 2015. Patients were excluded if they had contraindications to CMR, New York Heart Association (NYHA) functional class IV, acute myocarditis, acute sarcoidosis, infiltrative disorders (for example, amyloidosis), congenital heart disease, hypertrophic cardiomyopathy or renal insufficiency (creatinine clearance <30 ml min −1 after July 2006 or <60 ml min −1 after February 2007). The protocol was approved by the institutional review boards at each site, and all participants provided informed consent. CMR imaging was performed within a median time of 3 days before ICD implantation. The current study focused on the ischemic cardiomyopathy patient subset with adequate LGE-CMR, totaling 156 patients. As part of the clinical study, the participants had undergone single-chamber or dual-chamber ICD or cardiac resynchronization with an ICD implantation based on current guidelines. The programming of anti-tachycardia therapies was left to the discretion of the operators. 
PRE-DETERMINE and DETERMINE Registry cohorts (external) The PRE-DETERMINE (ClinicalTrials.gov ID NCT01114269 ) and accompanying DETERMINE (ClinicalTrials.gov ID NCT00487279 ) Registry are multi-center, prospective cohort studies composed of patients with coronary disease on angiography or a documented history of myocardial infarction (MI). The PRE-DETERMINE study enrolled 5,764 patients with documented MI and/or mild to moderate left ventricle dysfunction (LVEF between 35% and 50%) who did not fulfill consensus guideline criteria for ICD implantation on the basis of LVEF and NYHA class (that is, LVEF >35% or LVEF between 30% and 35% with NYHA Class I heart failure) at study entry 6 . Exclusion criteria included a history of cardiac arrest not associated with acute MI, current or planned ICD or life expectancy of less than 6 months. The accompanying DETERMINE Registry included 192 participants screened for enrollment in PRE-DETERMINE who did not fulfill entry criteria on the basis of having an LVEF <30% ( n = 99), an LVEF between 30% and 35% with NYHA Class II–IV heart failure ( n = 19) or an ICD ( n = 31) or who were unwilling to participate in the biomarker component of PRE-DETERMINE ( n = 43). Within these cohorts, 809 participants had LGE-CMR imaging performed. Within this subset of patients, 23 cases of SCD occurred and were each matched to four controls on age, sex, race, LVEF and follow-up time using risk set sampling. Of the resulting 115 patients, the current study focused on 113 patients with adequate LGE-CMR images for analysis. Finally, covariate data for this cohort were minimally ‘harmonized’ with the internal cohort by retaining only the covariates common to both. Some important differences between the external and internal cohorts remained, such as significantly higher LVEF in the external cohort. 
LGE-CMR acquisition The CMR images in the internal and external cohorts were acquired using 1.5-T magnetic resonance imaging devices (Signa, GE Medical Systems; Avanto, Siemens). The exact software versions for the devices cannot be retroactively ascertained given the broad nature of the study. All were two-dimensional (2D) parallel short-axis left ventricle stacks. The contrast agent used was 0.15−0.20 mmol kg−1 of gadodiamide (Omniscan, GE Healthcare) or gadopentetate dimeglumine (Magnevist, Schering), and images were acquired 10−30 minutes after injection. Owing to the multi-center nature of the clinical studies considered here, there were variations in CMR acquisition protocols. The most commonly used sequence was inversion recovery fast gradient echo pulse, with an inversion recovery time typically starting at 250 ms and adjusted iteratively to achieve maximum nulling of normal myocardium. Typical spatial resolutions were 1.5−2.4 × 1.5−2.4 × 6−8 mm, with 2−4-mm gaps. CMR images in the external cohort were sourced from 60 sites with a variety of imaging protocols, whereas those in the internal cohort originated from three sites and were more homogeneous. No artifact corrections were applied to the images. More details on CMR acquisition can be found in previous work 11 , 13 , 31 , 32 . Clinical data and primary endpoint In both LVSPSCD and PRE-DETERMINE/DETERMINE cohorts, baseline data on demographics, clinical characteristics, medical history, medications, lifestyle habits and cardiac test results were collected (see Table 1 for a list of the common ones between the cohorts that were used in SSCAR). The primary endpoint for LVSPSCD was SCDA defined as therapy from the ICD for rapid ventricular fibrillation or tachycardia or a ventricular arrhythmia not corrected by the ICD. For the PRE-DETERMINE studies, the primary endpoint was sudden and/or arrhythmic death. 
Deaths were classified according to both timing (sudden versus non-sudden) and mechanism (arrhythmic versus non-arrhythmic). Unexpected deaths due to cardiac or unknown causes that occurred within 1 hour of symptom onset or within 24 hours of being last witnessed to be symptom free were considered sudden cardiac deaths. Deaths preceded by an abrupt spontaneous collapse of circulation without antecedent circulatory or neurological impairment were considered arrhythmic in accordance with the criteria outlined by Hinkle and Thaler 16 . Deaths that were classified as non-arrhythmic were excluded from the endpoint regardless of timing. Out-of-hospital cardiac arrests due to ventricular fibrillation that were successfully resuscitated with external electrical defibrillation were considered aborted arrhythmic deaths and were included in the primary endpoint. Data preparation The inputs to our model were the unprocessed LGE-CMR scans and the clinical covariates listed in Table 1 . The training targets were the event time and event type (SCDA or non-SCDA). As a pre-processing step, the raw LGE-CMR scans were first segmented for the left ventricle myocardium using a method based on convolutional neural networks developed and described in previous work 33 . In brief, this segmentation network consisted of three sub-networks: a U-net with residual connections trained to identify the entire region of interest; a U-net with residual connections trained to delineate the myocardium wall; and an encoder–decoder tasked with correcting anatomical inaccuracies that may have arisen in the segmentation. In this context, anatomical correctness was defined via a list of pass/fail rules (for example, no holes in the myocardium, circularity threshold and no disconnected components). 
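Two of the pass/fail rules just listed can be sketched with standard image-analysis operations. The exact rule set and any thresholds used in the paper are not reproduced here; this version checks only connectedness and that the blood pool is the sole cavity inside the myocardial ring:

```python
import numpy as np
from scipy import ndimage

def passes_anatomy_checks(myo_mask, blood_mask):
    """Sketch of two assumed anatomical-correctness rules:
    (1) the myocardium forms a single connected component, and
    (2) it has no holes other than the blood pool, i.e. hole-filling the
        myocardium yields exactly myocardium plus blood pool."""
    _, n_components = ndimage.label(myo_mask)
    if n_components != 1:
        return False  # disconnected myocardium
    filled = ndimage.binary_fill_holes(myo_mask)
    return bool(np.array_equal(filled, myo_mask | blood_mask))

# Toy short-axis slice: a square myocardial "ring" around a 3x3 blood pool.
yy, xx = np.mgrid[0:7, 0:7]
cheb = np.maximum(np.abs(yy - 3), np.abs(xx - 3))
myo, blood = cheb == 2, cheb <= 1
ok = passes_anatomy_checks(myo, blood)
```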
Once each patient’s LGE-CMR 2D slices were segmented via this method, they were stacked, all voxels outside the left ventricle myocardium were zeroed out and the slices were sorted apex-to-base using DICOM header information and step-interpolated on a regular 64 × 64 × 12 grid with voxel dimensions 2.5 × 2.5 × 10 mm. These dimensions were chosen to make all patient volumes consistent with minimal interpolation from the original resolution while allowing enough room to avoid truncating the left ventricle. Finally, the input to the neural network model consisted of a two-channel volume (that is, 64 × 64 × 12 × 2). The first channel was a one-hot encoding of the myocardium and blood pool masks. The second channel had zeros outside of the myocardium and the original CMR intensities on the myocardium, linearly scaled by multiplication with half the inverse of the median blood pool intensity in each slice. To mitigate overfitting, train-time data augmentation was performed on the images, specifically 3D in-plane rotations in increments of 90° to avoid artifacts and panning of the ventricle within the 3D grid. The clinical covariate data were de-meaned and scaled by the standard deviation. Survival model Statistical fit For each patient i , the outcome data were the pair ( X i , Δ i ), where X i is the minimum between the time to SCDA T i and the (right) censoring time C i , after which either follow-up was lost or the patient died due to a competing risk. The outcome Δ i is 1 if the patient had the arrhythmic event before they were censored ( T i ≤ C i ) and 0 otherwise. We estimated the (pseudo-)survival probability function S i ( t ), the probability that the time to SCDA exceeds t . 
We modeled the T i values as independent, each having a cause-specific hazard rate 34 based on the log-logistic distribution with location parameter μ i and scale parameter σ i , such that \({S}_{i}(t;{\mu }_{i},{\sigma }_{i})=1/\{1+\exp [(\log t-{\mu }_{i})/{\sigma }_{i}]\}\) . The patient-specific parameters μ i and σ i were modeled as outputs of neural networks applied to LGE-CMR images and clinical covariates, trained by minimizing the loss function given by the negative log-likelihood: $$-\log {{{\mathcal{L}}}}=\mathop{\sum}\limits_{i}\left\{-{\delta }_{i}\frac{\log {x}_{i}-{\mu }_{i}}{{\sigma }_{i}}+{\delta }_{i}\log {\sigma }_{i}+(1+{\delta }_{i})\log \left[1+\exp \left(\frac{\log {x}_{i}-{\mu }_{i}}{{\sigma }_{i}}\right)\right]\right\},$$ where x i is the observed time and δ i is the event indicator (1 if the SCDA event was observed, 0 if the patient was censored). With μ i and σ i estimated, the patient-specific survival functions were given by S i ( t ) as above. Performance metrics The all-time performance of the models was evaluated using two measures. The first was Harrell’s c-index 14 with the patient-specific μ i parameters as the risk scores ( \(\exp ({\mu }_{i})\) is the median of the log-logistic distribution) to gauge the model’s risk discrimination ability. The second was the integrated Brier score 15 , which is defined as the time-average of mean squared error (MSE) between true 0/1 outcome and predicted outcome probability and gauges both probability calibration and discrimination. Both measures were adjusted for censoring, corrected by weighting with the inverse probability of censoring, and calculated for data before a given cutoff time τ 35 ; if unspecified, τ = 10 years, corresponding with the maximum event time in the dataset. Metrics derived from the confusion matrix (for example, precision and recall) were computed at several time points ( τ = 2, 3… years). 
Probability thresholds at these times were selected by maximizing F-score (for precision, recall and F-score) or Youden’s J statistic (for sensitivity, specificity and balanced accuracy) on the training data. Of note, to preserve consistency in evaluation between the internal and external cohorts, metrics computed on the external cohort were not covariate adjusted, potentially underestimating performance 36 . Neural network architecture SSCAR is a supervised survival analysis regression model composed of two sub-networks, each operating on different input types (Fig. 1 ): a convolutional sub-network (‘CMR’), which takes the LGE-CMR images as inputs, and a dense sub-network (‘covariate’), which uses the clinical covariate data. Feature extraction in the CMR sub-network from the LGE-CMR images was achieved by a 3D convolutional encoder–decoder model. The encoder used a sequence of 3D convolutions and pooling layers, followed by one dense layer to encode the original 3D volume into a lower-dimensional vector. Non-linear activation functions and dropout layers were added before each downsampling step. The encoding was further used for two purposes: survival and reconstruction. For the survival branch, the encoding was first stratified into one of r (learned) risk categories (Supplementary Table 2 ) and then fed to a two-unit dense layer to predict—for each patient—a set of two parameters, location μ and scale σ , which fully characterized the probability distribution of the patient’s log-time to SCDA (see the ‘Statistical fit’ section), followed by a bespoke activation function. This activation function clipped \(\ln \mu\) on [−3, 3] and clipped σ from below at σ_min, where σ_min was found such that the difference between the 95th and 5th percentiles of the predicted T S C D A distribution was no less than 1 month. This survival activation function effectively restricted the ‘signal-to-noise’ ratio μ / σ . 
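The scale clipping above has a closed form under the log-logistic quantile function, t_p = exp(μ + σ·logit(p)), which gives the smallest σ ensuring the stated 5th-95th percentile spread. This sketch clips the location parameter itself rather than its logarithm, and expresses the one-month floor in years; both are simplifying assumptions:

```python
import math

def survival_activation(raw_mu, raw_sigma, mu_bound=3.0, min_spread=1.0 / 12.0):
    """Clip the location to [-mu_bound, mu_bound] and the scale from below so
    that t_95 - t_05 = 2 * exp(mu) * sinh(sigma * c) >= min_spread, where
    c = logit(0.95) and t_p is the p-th percentile of the event-time law."""
    c = math.log(0.95 / 0.05)
    mu = max(-mu_bound, min(mu_bound, raw_mu))
    sigma_min = math.asinh(min_spread / (2.0 * math.exp(mu))) / c
    return mu, max(raw_sigma, sigma_min)
```

Because σ_min shrinks as μ grows, the floor binds mainly for patients predicted to have short times to event, where an overly narrow distribution would overstate confidence.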
For the purpose of reconstruction, the encoding was decoded via a sequence of transposed convolutions to re-create the original volume. Feature extraction from the clinical covariate data was performed using a sequence of densely connected layers, followed by a dropout layer to prevent overfitting. The resulting tensor used a similar path to the one followed by the convolutional encoding to eventually map to the two survival parameters. Finally, once the two sub-networks were trained, they were frozen and joined using a learned linear combination layer to ensemble the survival predictions. The predicted survival parameters (location and scale) aimed to minimize the aforementioned negative log-likelihood function for the log-logistic distribution, accounting for censoring in the data and class imbalance. The reconstructed output of the CMR sub-network minimized the MSE to the original input. Its contribution to total loss was learned to provide regularization to the imaging features extracted, ensuring that the survival fit relied on features able to reconstruct the original image. Both stochastic gradient descent (SGD) and Adam 37 optimizers were used. All code was developed in Python 3.7 using Keras 2.2.4 (ref. 38 ), TensorFlow 1.15 (ref. 39 ), numpy 1.6.2, scipy 1.2.1, openCV 3.4.2, pandas 0.24.2 and pydicom 1.2.2. Each train/evaluate fold took 3–5 minutes on an Nvidia Titan RTX graphics processing unit. Training and testing The entire model development and internal validation were performed using the LVSPSCD cohort. After a hyperparameter tuning step, the best model architecture was then used on the entire internal validation set to find the best neural network weights. As the ensembling layer was hyperparameter free, it did not use hyperparameter tuning. 
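The negative log-likelihood minimized by both branches (defined in the 'Statistical fit' section) can be transcribed directly; this sketch omits the data-dependent constant Σ δ_i log x_i, which does not affect optimization:

```python
import math

def neg_log_likelihood(x, delta, mu, sigma):
    """Censoring-aware loss for the log-logistic survival model:
    delta = 1 for an observed SCDA event, 0 for a censored patient."""
    nll = 0.0
    for xi, di, mi, si in zip(x, delta, mu, sigma):
        z = (math.log(xi) - mi) / si
        nll += -di * z + di * math.log(si) + (1 + di) * math.log1p(math.exp(z))
    return nll
```

For a censored patient the per-patient term reduces to −log S(x_i), and for a single observed event the loss is minimized at μ = log x_i, which is what ties the location parameter to the most probable event time.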
Hyperparameter tuning A hyperparameter search was performed using the set of parameter values described in Supplementary Table 2, given the vast number of hyperparameter configurations available to define the model architectures. The package hyperopt 0.1.2 (ref. 40) was used to sample parameter configurations from the search space using the Parzen window algorithm to minimize the average validation loss resulting from a stratified ten-times-repeated ten-fold cross-validation process. The maximum number of iterations was 300 for the covariate sub-network and lowered to 100 for the CMR sub-network, given its highly increased capacity. Each fold was run using early stopping based on the loss value on a withheld 10% portion of the training fold, with a maximum of 2,000 epochs (20 gradient updates per epoch). In hyperparameter tuning, models were optimized using SGD with a learning rate of 0.01 (the default value in the neural network package used). The architecture with the highest Harrell’s c-index 14 was selected. Hyperparameters deemed to have little effect on learning (for example, maximum number of epochs) were fixed. Convolutional kernel size and the activation function for convolutions were kept at the default values in the neural network package used. The batch size was set to the highest value, given the memory constraints of our hardware. Internal validation and external test Internal model performance was assessed using ten repetitions of stratified ten-fold cross-validation on the LVSPSCD cohort. Early stopping based on the c-index on a withheld 10% subset was implemented, with a maximum training of 2,000 epochs (20 gradient updates per epoch). The optimizer was Adam with a learning rate of 10⁻⁵ for the CMR sub-network, 5 × 10⁻⁴ for the covariate sub-network and 0.01 for the ensemble. A final model was trained with all the available LVSPSCD data and tested on the PRE-DETERMINE cohort. 
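Architectures were ranked by Harrell's c-index; a minimal O(n²) reference implementation of that metric (not the authors' code) is:

```python
def harrell_c_index(time, event, risk):
    """Harrell's concordance index for right-censored data: among pairs
    made comparable by an observed event, count how often the higher
    predicted risk goes with the shorter survival; risk ties score 0.5."""
    num = den = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                  # only events anchor comparable pairs
        for j in range(len(time)):
            if time[j] > time[i]:     # j outlived the event at time[i]
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A perfectly risk-ordered cohort scores 1.0, random predictions score about 0.5, and perfectly inverted predictions score 0.0.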
Of note, the final model shares the same architecture and training parameters with all the models in the 100 internal data splits but has different (fine-tuned) weights, which are derived using the entire internal dataset. To estimate CIs on the external cohort, the same cross-validation process was applied to the PRE-DETERMINE cohort, supplementing the training data in each fold with the LVSPSCD cohort. Approximate normal CIs were constructed using the 100 folds. Gradient-based interpretation of SSCAR The trained network weights in SSCAR were interpreted, for both the covariate and CMR sub-networks, using the gradients of outputs with respect to intermediary neural network internal representations of the data. For the CMR sub-network, we adapted Grad-CAM 41 to work on regression problems and applied it to SSCAR by performing a weighted average of the last convolutional layer feature maps, where the weights were averages of gradients of the location parameter output with respect to each channel. The result was then interpolated back to the original image dimensions and overlaid to obtain the gradient maps shown (Fig. 4a, bottom row). For the covariate sub-network, the gradient of the location parameter output was taken with respect to each of the inputs and averaged over three groups: all patients, patients with SCDA and patients with no SCDA. Statistical analysis All values reported on the internal validation dataset were averages over 100 data splits resulting from a ten-times-repeated ten-fold stratified cross-validation scheme. Values reported on the external test dataset represented a single evaluation on the entire set. All CIs were normal approximations resulting from the aforementioned 100 splits. In computing CIs for the external test set, the same procedure was used on all available data, ensuring that test folds came exclusively from the external dataset. Error bars are standard errors, with the sample standard deviation estimated from the 100 splits. 
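The normal-approximation CIs over the 100 cross-validation splits amount to the textbook mean ± z·(standard error) construction; a small sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def normal_approx_ci(fold_scores, z=1.96):
    """Approximate 95% CI for a metric from repeated-CV fold scores:
    mean +/- z * (sample s.d. / sqrt(n)), as described in the text."""
    s = np.asarray(fold_scores, dtype=float)
    se = s.std(ddof=1) / np.sqrt(len(s))
    return s.mean() - z * se, s.mean() + z * se
```

Note that repeated-CV folds are not independent, so such intervals are approximate, which is consistent with the paper calling them normal approximations.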
Correlation P value was based on the exact distribution under the bivariate normal assumption. Covariate P values are based on the two-sample Welch’s t-test 42 for continuous variables and the Mann–Whitney U-test for categorical variables. Cox proportional hazards analysis was performed using the Python lifelines 0.25.5 (ref. 43) package; it included a hyperparameter sweep for the ℓ1 and ℓ2 regularization terms and followed the same train/test procedure as the neural network models. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Patient data used in this manuscript cannot be made publicly available without further consent and ethical approval, owing to privacy concerns. The CMR images and patient clinical data can be provided by the authors pending Johns Hopkins University institutional review board and Brigham and Women’s Hospital institutional review board approval and a completed material transfer agreement. Requests for these data should be sent to N.A.T. and/or C.M.A. Code availability The code for this project is available under the Johns Hopkins University Academic Software License Agreement at . Change history 28 April 2022 A Correction to this paper has been published.
A new artificial intelligence-based approach can predict, significantly more accurately than a doctor, if and when a patient could die of cardiac arrest. The technology, built on raw images of patients' diseased hearts and patient backgrounds, stands to revolutionize clinical decision making and increase survival from sudden and lethal cardiac arrhythmias, one of medicine's deadliest and most puzzling conditions. The work, led by Johns Hopkins University researchers, is detailed today in Nature Cardiovascular Research. "Sudden cardiac death caused by arrhythmia accounts for as many as 20 percent of all deaths worldwide and we know little about why it's happening or how to tell who's at risk," said senior author Natalia Trayanova, the Murray B. Sachs professor of Biomedical Engineering and Medicine. "There are patients who may be at low risk of sudden cardiac death getting defibrillators that they might not need and then there are high-risk patients that aren't getting the treatment they need and could die in the prime of their life. What our algorithm can do is determine who is at risk for cardiac death and when it will occur, allowing doctors to decide exactly what needs to be done." The team is the first to use neural networks to build a personalized survival assessment for each patient with heart disease. These risk measures provide, with high accuracy, the chance of sudden cardiac death over 10 years and when it is most likely to happen. The deep learning technology is called Survival Study of Cardiac Arrhythmia Risk (SSCAR). The name alludes to the cardiac scarring caused by heart disease that often results in lethal arrhythmias, which is also the key to the algorithm's predictions. The team used contrast-enhanced cardiac images that visualize scar distribution from hundreds of real patients at Johns Hopkins Hospital with cardiac scarring to train an algorithm to detect patterns and relationships not visible to the naked eye. 
Current clinical cardiac image analysis extracts only simple scar features like volume and mass, severely underutilizing what is demonstrated in this work to be critical data. "The images carry critical information that doctors haven't been able to access," said first author Dan Popescu, a former Johns Hopkins doctoral student. "This scarring can be distributed in different ways and it says something about a patient's chance for survival. There is information hidden in it." The team trained a second neural network to learn from 10 years of standard clinical patient data: 22 factors such as patients' age, weight, race and prescription drug use. The algorithms' predictions were not only significantly more accurate on every measure than doctors'; they were also validated in tests with an independent patient cohort from 60 health centers across the United States, with different cardiac histories and different imaging data, suggesting the platform could be adopted anywhere. "This has the potential to significantly shape clinical decision-making regarding arrhythmia risk and represents an essential step towards bringing patient trajectory prognostication into the age of artificial intelligence," said Trayanova, co-director of the Alliance for Cardiovascular Diagnostic and Treatment Innovation. "It epitomizes the trend of merging artificial intelligence, engineering, and medicine as the future of healthcare." The team is now working to build algorithms to detect other cardiac diseases. According to Trayanova, the deep-learning concept could be developed for other fields of medicine that rely on visual diagnosis. The team from Johns Hopkins also included: Bloomberg Distinguished Professor of Data-Intensive Computation Mauro Maggioni; Julie Shade; Changxin Lai; Konstantino Aronis; and Katherine Wu. Other authors include: M. 
Vinayaga Moorthy and Nancy Cook of Brigham and Women's Hospital; Daniel Lee of Northwestern University; Alan Kadish of Touro College and University System; David Ouyang and Christine Albert of Cedars-Sinai Medical Center.
10.1038/s44161-022-00041-9
Chemistry
Synthesis studies transform waste sugar for sustainable energy storage applications
Hoi Chun Ho et al. Amending the Structure of Renewable Carbon from Biorefinery Waste-Streams for Energy Storage Applications, Scientific Reports (2018). DOI: 10.1038/s41598-018-25880-0 Journal information: Scientific Reports , Nature
http://dx.doi.org/10.1038/s41598-018-25880-0
https://phys.org/news/2018-09-synthesis-sugar-sustainable-energy-storage.html
Abstract Biorefineries produce impure sugar waste streams that are being underutilized. By converting this waste to a profitable by-product, biorefineries could be safeguarded against low oil prices. We demonstrate controlled production of useful carbon materials from the waste concentrate via hydrothermal synthesis and carbonization. We devise a pathway to producing tunable, porous spherical carbon materials by modeling the gross structure formation and developing an understanding of the pore formation mechanism utilizing simple reaction principles. Compared to a simple hydrothermal synthesis from sugar concentrate, emulsion-based synthesis results in hollow spheres with abundant microporosity. In contrast, conventional hydrothermal synthesis produces solid beads with micro- and mesoporosity. All the carbonaceous materials show promise in energy storage applications. Using our reaction pathway, perfect hollow activated carbon spheres can be produced from waste sugar in the liquid effluence of biomass steam pretreatment units. The renewable carbon product demonstrated a desirable surface area of 872 m²/g and capacitance of up to 109 F/g when made into an electric double layer supercapacitor. The capacitor exhibited nearly ideal capacitive behavior with 90.5% capacitance retention after 5000 cycles. Introduction In the pursuit of a sustainable economy, both renewable energy and renewable chemical practices must be adopted. While the former can be produced from many sources, one feasible option for the combination of renewable energy and chemicals so far emanates from biorefineries 1. However, with the current low oil price, biorefineries need improved profitability to compete with fossil fuels. This would require manufacturing of diversified products and effective utilization of byproducts for materials applications 1, 2. 
While lignin has been the center of attention for years as a co-product, the most overlooked byproduct is the impure sugar stream in liquid effluence from biorefinery pretreatment plants 3. There exists a state-of-the-art technology that utilizes biomass, pretreated by acids or alkali, to break down amorphous carbohydrates to sugars for better cellulose accessibility 4. Sugar content in the biomass pretreatment liquid effluence can contain a maximum of 50% of the initial hydrolysable carbohydrate from the biomass 5, 6, 7. Therefore, the efficiency of biorefineries can be improved significantly if this waste-stream sugar can be captured in a simple, cost-effective way, without the need for extensive purification, and applied to materials design. However, a challenge for biorefinery co-product generation from the waste stream is the low concentration of soluble carbohydrates 1. Concentrating this liquid effluence using waste heat, which is widely available in biorefineries, is achievable and already a common practice in Kraft pulping mills 1, 6. Utilization of this untapped biomass sugar should be prioritized, and one potential application is its conversion to carbon particles with tunable morphologies as a medium for renewable energy storage, such as electric double layer (EDL) supercapacitors. Over the last decade, there has been growing interest in tailoring carbon sphere structures for different applications in renewable energy sectors. For EDL supercapacitor electrode applications, spherical carbon with a tunable porosity and controllable particle size distribution is of great interest 8, 9, 10, 11, 12, 13. The variety of structures can provide excellent performance for catalysis, adsorption, and energy storage 8, 9, 10, 11, 12, 14, 15. Carbon spheres can be made by several methods 8, 9, 10, 16, 17, 18, 19, 20, 21. One of the most inexpensive methods to date is hydrothermal carbonization (HTC). 
HTC is a relatively green technology and scalable to industrial production levels 9. The HTC method is applicable to precursors with high moisture content, much like the carbohydrates in pretreatment liquid effluence 22. To better control the porosity, size, and shape of the carbon spheres, different strategies including templating and self-assembly were employed together with HTC 16. Hard templating, which often uses silica as the template, can be one of the most straightforward ways to synthesize carbon spheres with a controllable morphology 14, 23. However, for silica hard templating, the most critical step is to obtain a template having strong interaction with the carbon precursor. The process is very tedious, and the removal of the template requires corrosive chemicals like sodium hydroxide or even hydrofluoric acid 13, undesirable for green chemistry applications. On the contrary, soft-template synthesis does not require significant preparation or removal of the template 20, 21. We propose the synthesis of carbonaceous matter in a controllable manner using soft templating, followed by HTC and subsequent high-temperature carbonization of solid HTC derivatives. Emulsion (made from oil, water, and surfactant) and water-based HTC were carried out at different time-scales to study the evolution of spherical carbon products. The two synthesis routes were then correlated with the resulting carbon morphology, porosity, and surface characteristics. Furthermore, the carbon products derived from renewable sugar were investigated as EDL electrodes for supercapacitor application. Supercapacitors store energy based on two different principles: (1) EDL capacitance from pure electrostatic charge accumulation at the electrode interface, and (2) pseudo-capacitance based on fast and reversible redox processes at characteristic potentials 17. 
Of these two mechanisms, we synthesized and characterized EDL supercapacitors, and hence we will discuss only EDL supercapacitors in this article. Surface activation of the carbon products was conducted using KOH. We performed large-scale molecular dynamics (MD) simulations to understand the evolution and characteristics of the pore structures in an emulsion-based system. While previous studies have shown the possibility of producing carbon spheres from carbohydrates, and even from acid- or alkaline-pretreated biomass-derived hydrolyzed hemicellulose, using HTC, the structural evolution with respect to the hydrothermal reaction medium is not fully understood 3, 7, 11, 24. In this study, we used sugarcane-derived table sugar as a model molecule to establish the physics and the carbon formation mechanism. We then corroborated our findings using the result from laboratory-made steam-pretreated liquid effluence from woodchips. After establishing that perfectly hollow carbon spheres can be made from pretreatment liquid effluence, we explored the potential application of our model material as supercapacitor electrodes. This study exhibits a pathway to design sustainable energy storage materials from the waste stream of a future biorefinery. Results and Discussion Structure of the carbonaceous materials The HTC of a carbohydrate precursor involves a four-step process – dehydration, condensation, polymerization, and aromatization – as shown schematically in Fig. 1(a) 23, 25. The process is as follows: sugar molecules dehydrate, forming mainly a furfural derivative 26 that decomposes into organic acids and/or other species 27. As the reaction continues, furfural and the excess dehydrated sugar condense and polymerize. The growing “heads” in the polymer chain consist of reactive hydrophilic hydroxyl groups while the center of the chain becomes relatively dehydrated and hydrophobic. 
The center of the polymer chain then aromatizes with other chain centers to form a larger hydrophobic core. The aggregated chains, therefore, form spherical, micelle-like structures with a hydrophobic core and hydrophilic corona 28. This evolution mechanism of the spherical carbonaceous aggregates was perfectly captured by scanning electron microscopy (SEM) [Fig. 1(b–d)] with our water-based HTC synthesis (abbreviated as the N system, representing ‘No’ surfactants). In Fig. 1, hydrothermal carbonization for 45 minutes (N45) and for 165 minutes (N165) gives rise to micelle-like structures and consequently spherical carbon structures [Fig. 1(c–d)]. However, the 20-minute sample (N20), with insufficient polymerization time, exhibits out-of-equilibrium amorphous irregular-shaped structures with no carbon spheres [Fig. 1(b)]. Interestingly, this shows that the micellar morphologies evolve from an irregular-shaped amorphous carbonaceous material to a perfectly shaped spherical particulate carbon as HTC time is increased. As the dehydrated sugar polymers aromatize during HTC, polymer cores continuously give off volatiles, lose functional groups, and carbonize further. Thus, when HTC duration increased, HTC samples became more carbonized and thermally stable, as seen in the thermogravimetric analysis (TGA) [Figure S1] of the HTC products. TGA results also show that carbon yield during high-temperature carbonization increases as the HTC duration increases. Therefore, longer HTC time produces compact carbon spheres. For example, N45 samples show a sphere diameter of 6.3 ± 1.5 μm [Fig. 1(c)] while N165 samples have spheres of diameter 3.3 ± 1.6 μm [Fig. 1(d)]. The N45 carbon, under transmission electron microscopy (TEM), exhibits a perfectly spherical solid structure as shown in Fig. 1(e). The perfectly spherical structure has been corroborated by the cross-sectional thickness profile shown in Fig. 1(f). 
Figure 1 Carbon spheres from the simple HTC synthesis (N carbon samples that are made without use of any surfactant). ( a ) A schematic representation of the evolution of carbon spheres during simple HTC. SEM images of N samples with ( b ) 20, ( c ) 45, and ( d ) 165 minutes HTC durations showing carbon morphology evolving from amorphous irregular-shaped carbon to spherical particulate carbon with increasing HTC time. ( e ) TEM image of a single N carbon sphere with 45 minutes HTC duration (N45). ( f ) Thickness profile of the single N45 sphere showing a solid spherical structure formation. Ferric chloride, primarily used as a catalyst during HTC synthesis, plays a critical role in carbonization and aromatization 29, 30, 31. When hydrolyzed, ferric chloride forms ferric hydroxide or oxide and hydrochloric acid (HCl) in water 31. As such, the resulting acid catalyzes dehydration of the sugar; reducing sugar intermediates in this system could partially reduce ferric ions to ferrous ions, which are then subsequently oxidized into various iron oxide species. Therefore, it is possible that some spheres may have traces of iron oxide at the micelle core 29, 31. However, most iron was likely removed during the final acid wash of the resulting carbon, except any iron oxide protected within carbon shells. Inductively coupled plasma optical emission spectrometry (ICP-OES) confirms that all samples obtained after carbonization and acid washing contain <1.2% iron, with the lowest being 0.28% for Y20 [Table S1]. The low iron contents, together with the minimal traces of iron redox peaks in the cyclic voltammetry experiments of the supercapacitor electrodes prepared from these carbonaceous materials, indicate that our energy storage device is primarily an EDL capacitor, and hence pseudo-capacitance plays a minimal role in our results. While carbon sphere formation in HTC is an established mechanism, the detailed procedure for emulsion-medium HTC is far from understood. 
We denote emulsion-synthesized carbon samples as Y samples, indicating the presence of surfactant and oil in the reaction medium. The mechanism is shown schematically in Fig. 2(a). In the emulsion formed by sodium dodecyl sulfate (SDS) surfactant (1 g/100 ml), water and paraffin oil (4:1 v/v), surfactant molecules form surfactant micelles. First, sugar naturally dissolves in the water phase. As HTC progresses, the sugar molecules in water behave much like the N samples and consequently dehydrate, condense, polymerize, and aromatize. The hydrophobicity of the dehydrated and polymerized condensed sugar molecules gradually increases. The hydrophobic polymerized sugar molecules are entropically attracted towards the hydrophobic core of the surfactant micelles in the emulsion. Note that the hydrophobic tail (dodecyl) and the hydrophilic head (sulfate) of the SDS are denoted by the yellow and red color, respectively [Fig. 2(a)]. As HTC continues, a layer-by-layer self-assembly of the sugar molecules in the surfactant micelle gives rise to the hollow carbon structures. The spherical carbon samples from emulsion-based HTC after 45 and 165 minutes (Y45 and Y165 carbons, respectively) can be seen in Fig. 2(b,c). The hollow nature of Y45 is revealed from the crumbled sphere in Fig. 2(b,d–f). The TEM image in Fig. 2(e) reveals a sphere having a bright core and dark edges, indicating a hollow structure. In contrast to Fig. 1(f), where the cross-section thickness profile of N45 shows that the center of the N45 sphere is the thickest part, Fig. 2(f) shows the Y45 bead having a hollow structure with a shell thickness of ca. 0.2 µm. Unfortunately, these broken spherical particles were not observed in the Y165 sample due to the longer HTC reaction. For a long enough HTC reaction duration, the carbon shells can grow thicker and thus prevent the spheres from breaking. 
This mechanism also explains the smaller sizes of Y45 samples (2.5 ± 0.5 μm) as compared to 3.9 ± 1.2 μm for Y165 samples, as the longer HTC duration in Y165 allows sugar molecules to be part of a single micelle in a closely packed form. KOH activation of Y45 and Y165 [denoted as aY45 and aY165 respectively, Fig. 2(g,h)] retained their morphologies with a slight increase in size to 4.0 ± 1.7 μm and 4.14 ± 1.62 μm respectively, compared to their precursors [Fig. 2(b,c)]. The slight increase in sphere sizes after activation is due to the addition of oxygen-containing functional groups on the carbon during activation, thereby expanding the carbon structure. As with N20, the emulsion-based HTC was prematurely stopped after 20 minutes (Y20 sample), before the sugar molecules had a chance to form these hollow spherical structures, giving rise to out-of-equilibrium structures without carbon spheres [Fig. 2(i,j)]. The emulsion-based carbon bead formation will be elaborated further using molecular dynamics simulation in a later section. Figure 2 Carbon spheres from emulsion-based HTC synthesis (Y samples). ( a ) A schematic representation of the evolution of carbon spheres during emulsion-based HTC. SEM images of Y samples with ( b ) 45, ( c ) 165, and ( d ) 45 minutes HTC showing perfectly spherical structures with the longer HTC durations and a revelation of their hollow nature with broken spheres. ( e ) TEM image of a single Y sphere with 45 minutes HTC (Y45) showing its hollowness. ( f ) Thickness profile of the single Y45 sphere showing its thin shell, ca. 0.2 µm. SEM images of activated Y samples with ( g ) 45 minutes and ( h ) 165 minutes HTC showing the retention of carbon morphology after activation. The insert of ( g ) reveals the retention of the hollow nature of the spheres. ( i ) Y sample with 20 minutes HTC showing the out-of-equilibrium structures, and ( j ) activated Y sample with 20 minutes HTC, which retained the out-of-equilibrium structures after activation. 
So far, we discussed the pathway to produce solid and hollow carbon spheres from simple sugar molecules using water- and emulsion-based HTC techniques. Subsequently, we will discuss the self-assembly of these sugar-derived Y and N samples and their energy storage properties. Prior to that, we wanted to apply the same technique to synthesize carbon from biomass pretreatment liquid effluence, as our long-term goal is to utilize carbon precursors from industrial effluence to produce sustainable energy storage materials. Liquid effluence from steam-pretreated woodchips was prepared and subsequently carbonized following the same emulsion medium and carbonization parameters as the Y165 sample. The resulting carbons show a perfectly hollow spherical structure in the SEM and TEM images, as shown in Fig. 3(a–c). These results from the biomass pretreatment liquid effluence prove that hollow carbon spheres can be produced as a co-product from biorefinery wastes, and these carbonaceous materials exhibited the same functionalities as the Y samples. The carbon spheres produced herein are smaller and have thinner shells. It is known in the literature that hydrothermally produced carbon sphere sizes are affected by the type of carbon precursor 32, which can explain our smaller sphere sizes when comparing to the Y samples. From a simpler-processing viewpoint, the woodchip pretreatment liquid effluence was fed directly into the hydrothermal synthesis reactor after being emulsified, without being further concentrated. The sugar extracted in the liquid was estimated to be ~30% of the initial woodchip mass. As a result, the sugar content used for the woodchip pretreatment effluent hydrothermal synthesis was lower than that of the Y samples, leading to the thinner carbon shells produced 11. 
To the best of our knowledge, this is the first controllable hollow carbon sphere synthesis from the liquid effluent of steam-pretreated biomass using emulsion-based HTC. The success of this method demonstrates that our approach has potential for carbonaceous materials synthesis from a wide range of biorefinery waste materials. Figure 3 Spherical hollow carbon from steam-pretreated woodchip liquid effluence. ( a ) SEM image. ( b ) and ( c ) TEM images. Molecular dynamics simulation We believe that there is a mixture of hollow and solid spheres obtained during emulsion-based HTC, as not all Y45 and Y165 spherical carbons are formed with the assistance of surfactant micelles, and therefore not all of them are hollow. To further examine the coexistence of hollow and solid spherical beads in emulsion HTC, we performed coarse-grained molecular dynamics (CGMD) simulations. The underlying principle behind the structure formation conjectured in the previous section can also be verified using computational modeling. The simulations are carried out using the LAMMPS package 33 in a canonical ensemble (see Methods Section for computational details). In an emulsion system, where the surfactant number density is at or over the critical micelle number, the surfactants start forming micelles at an early simulation stage. Figure 4(a) shows the formation of such a micelle at as early as 3 million simulation time steps. Figure 4(b–d) show the evolution of the carbon structures at 4, 5, and a fully equilibrated 8 million simulation time steps. At the beginning of the simulations (the beginning of HTC in experiments), between 3–4 million time steps, it can be assumed that the charges on the polymer chains (sugar) are not fully stripped off. Therefore, when polymer (sugar) molecules are near the surfactant micelle, the micelle corona (red color beads) attracts the charges on the polymer chains. 
As HTC progresses, polymer chains get further dehydrated and gradually become hydrophobic. These hydrophobic backbones are then absorbed by the surfactant micelle cores due to strong interactions between the many surfactant tails (which are hydrophobic) and the hydrophobic polymer chains. The evolution from Fig. 4(b,c) shows a gradual change towards totally spherical beads consisting of surfactant and polymer molecules. The equilibrium structure in Fig. 4(d) consists of spherical beads formed by surfactant head and surfactant tail (red and yellow) along with the polymer chains (gray). Concurrently, in Fig. 4(a–d) we also observe the progress of a separate bead formation consisting of only polymer (grey color) molecules. As these polymer chains are far away from the surfactant micelle, interaction between those polymer molecules and the surfactants is unfavorable. As a result, these polymer chains form individual beads with other polymer molecules only. The computer simulations show the presence of both hollow and solid spheres in an emulsion system. Figure 4 CGMD simulations of surfactant polymer mixtures replicating the surfactant-sugar emulsion HTC experiment. The top panel shows the evolution of the bead formation inside the central simulation cell at ( a ) 3 million, ( b ) 4 million, ( c ) 5 million and ( d ) 8 million simulation time steps. The red and yellow color spheres represent surfactant head and tail segments. The grey color spheres represent the polymer molecules (sugar derivative in experiment). The images at ( e ) and ( f ) exhibit a single bead formed by surfactant molecules in the presence of surfactant, as shown in the blue circle in ( d ), and when the surfactants are stripped off from the same, respectively. The intermolecular structure factor, S( Q ), for the bulk and near-surfactant polymer molecules is plotted in ( g ) in red and blue color for the high- Q range. The bulk and near-surfactant polymer molecules are also circled in ( d ) in red and blue colors. 
The low- Q range profiles of S( Q ) for the bulk and near-surfactant polymers are shown in green and magenta color, respectively. The inset in ( g ) shows the snapshots for a system with the surfactants completely stripped off. The inset of Fig. 4(g) shows only the carbonaceous spherical structure after the surfactant molecules are pyrolyzed off at high temperature. Two types of spheres are observed in the emulsion-based technique: solid spheres (shown in the red circle), formed only by sugar-aggregated (hydrogen-bonded) beads, and hollow spheres (shown in the blue circle), formed by sugar polymers absorbed by surfactant micelles. Because of the difference in their formation mechanisms, we expect a difference in their morphology too. The solid spheres, away from the surfactant micelle, show smooth surfaces. The hollow spheres, in contrast, are seen to exist in the surfactant micelle environment. A closer look at a single bead formed by a sugar-absorbed surfactant micelle reveals that the surfactants serve as templates [as in Fig. 4(e)], resulting in rough surfaces, as shown in Fig. 4(f), once the surfactants are completely burnt off. The difference in surface roughness and porosity is examined in detail in the structural investigation in Fig. 4(g). We show the inter-particle structure factor, defined as \(S(Q)=\frac{1}{N}\sum_{ij}e^{-i\mathbf{Q}\cdot \mathbf{r}_{ij}}\), between the polymer molecules only. Here Q is the wave vector, r ij is the separation between two particles and N is the total number of particles. The near-surfactant S( Q ) (blue lines) is calculated from polymer molecules within 2 σ of a surfactant molecule, while the bulk S( Q ) (red lines) is obtained from molecules more than 2 σ away, where σ is the Lennard-Jones diameter of each monomer. The near-surfactant S( Q ) represents polymer molecules from only those polymer spherical beads that are agglomerated within surfactant micelles. 
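For an isotropic system such as this, the pair sum defining S(Q) is commonly evaluated through its orientational average, the Debye formula S(Q) = (1/N) Σᵢⱼ sin(Q rᵢⱼ)/(Q rᵢⱼ); a small numpy sketch (not the authors' analysis code):

```python
import numpy as np

def debye_structure_factor(positions, q):
    """Orientationally averaged structure factor via the Debye formula.
    positions: (N, 3) particle coordinates; q: scalar wavenumber.
    The i == j (r_ij = 0) terms take their limiting value of 1."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    x = q * d
    with np.errstate(invalid="ignore", divide="ignore"):
        term = np.where(x == 0.0, 1.0, np.sin(x) / x)
    return term.sum() / len(positions)
```

Peak positions in such an S(Q), expressed in units of the inverse Lennard-Jones diameter, reveal the characteristic inter-particle spacings discussed in the text.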
The bulk S( Q ) represents polymer molecules that are not associated with the surfactant micelles. As our focus is to understand the molecular-level self-assembly in the bulk and near the surfactant, we concentrate on S( Q ) at high Q , i.e., shorter length-scale properties. The polymer molecules in the bulk show wide and weaker peaks (red), representing essentially agglomerated structures with a broad distribution. The bulk molecules are parts of the spheres formed solely by polymer chains, and their wider peaks represent swollen structures with relatively smoother surfaces. For polymers near the surfactant molecules (blue lines), well-defined structures are observed, with peaks at 0.33, 0.44, 0.55, 0.66 σ −1 , and so on. These equally spaced peaks represent a layering of the polymer molecules within the micelle, suggesting that when the polymers enter the micelle core, they position themselves between the surfactant molecules, thereby giving rise to the layered structure. As the surfactants are pyrolyzed off, the empty spaces left behind result in molecular-level porous structures. This molecular-level investigation corroborates our hypothesis that carbon morphology can be controlled through different formation mechanisms. The simulations also agree very well with the collected electron micrographs and the mechanism proposed in the previous sections. Gas adsorption-desorption and surface characteristics of the carbon samples Based on the MD simulation, we expect the rougher surface of the emulsion-based Y samples to generate higher surface area due to (1) the hollow nature of the carbon spheres and (2) the templating effect of the surfactants. The surface areas and pore volumes obtained from gas adsorption-desorption experiments are listed in Table 1 . The surface areas of Y45 and Y165 are approximately double those of N45 and N165. 
While no bead formation was observed in the 20-minute HTC samples, the surfactant templating effect alone exfoliates the Y20 carbons, resulting in notably higher surface area than in the N20 carbons. In terms of HTC duration, surface area and pore volume both decrease as the HTC duration increases for both the Y and aY samples (Table 1 ), due to the collapse and consolidation of the layered pore structure. The trend is not as obvious for the N samples, as all three have surface areas of around 300 m 2 g −1 . To further increase the surface area of the carbon samples, and eventually their capacitance performance, we activated the Y samples with KOH. KOH acts as both an activating and a templating agent, creating new pores and enlarging existing ones. On heating, KOH melts at 360 °C and infiltrates the macropores of the carbon. As an activating agent, KOH etches new micropores and mesopores on the carbon surfaces. After washing, these emptied pores are exposed, changing the surface area and pore size distribution considerably 34 . The mechanism of KOH activation can be complex. Generally speaking, it can be represented as 6KOH + 2C = 2 K + 3H 2 + 2K 2 CO 3 35 , 36 . The highest surface areas and pore volumes were achieved by KOH activation: aY20, aY45, and aY165 give rise to surface areas of 1495, 1384, and 1037 m 2 g −1 with pore volumes of 0.627, 0.577, and 0.418 ccg −1 , respectively. Table 1 Surface area and pore volume measured from nitrogen adsorption isotherm. Full size table The molecular-scale templating effect of the surfactants can also be seen in the increased microporosity observed in the isotherms in Fig. 5(a–c) . The steep initial rise at low relative pressure suggests micropore filling 37 , and higher amounts of micropore filling can be seen for the Y samples (Fig. 5(a) ). The Y and aY isotherms have the shape of Type I isotherms, while the N isotherms resemble Type II isotherms 38 . The micropore and mesopore percentages are also quantified in Table 1 . 
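The overall activation reaction can be checked for atom balance with a few lines of bookkeeping code. The sketch below verifies the commonly cited balanced form, 6KOH + 2C → 2K + 3H2 + 2K2CO3; the dictionaries and helper are illustrative scaffolding, not chemistry software.

```python
from collections import Counter

# Element counts per formula unit, entered by hand for clarity.
SPECIES = {
    "KOH":   Counter({"K": 1, "O": 1, "H": 1}),
    "C":     Counter({"C": 1}),
    "K":     Counter({"K": 1}),
    "H2":    Counter({"H": 2}),
    "K2CO3": Counter({"K": 2, "C": 1, "O": 3}),
}

def side_totals(side):
    """Total element counts for one side of a reaction,
    given (coefficient, species) pairs."""
    total = Counter()
    for coeff, sp in side:
        for elem, n in SPECIES[sp].items():
            total[elem] += coeff * n
    return total

# 6 KOH + 2 C -> 2 K + 3 H2 + 2 K2CO3
reactants = side_totals([(6, "KOH"), (2, "C")])
products = side_totals([(2, "K"), (3, "H2"), (2, "K2CO3")])
assert reactants == products  # balanced: K = 6, O = 6, H = 6, C = 2
```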
It should be noted that the MD simulation does not capture the mesoporosity of the solid beads of the N samples: the simulations were performed for an emulsion-based system, and a separate simulation of the surfactant-free N system was not performed, so the N sample morphology cannot be predicted accurately. For the Y and aY isotherms, the relatively flat plateau region suggests a limited amount of multilayer filling, i.e., few meso- or macropores. The N isotherms, on the contrary, have noticeable hysteresis, indicating capillary condensation in mesopores 39 . These features are also observed in the pore-size distribution analysis in Fig. 5(d) and its magnified version (from 2 nm onward) in Fig. 5(e) . The mesoporous characteristics of the N samples can be explained as follows: the N sugar polymers were polymerized into their final shape in bulk without being templated by the hydrophobic segments of a surfactant. Thus, the polymerized sugars in the N samples self-assembled randomly without being fully dehydrated. During the high-temperature carbonization step, the residual functional groups evolved as volatiles, as shown in the TGA plots in Figure S1 between ca . 200 °C and ca . 700 °C, activating the material and creating channels within the structure. This gives rise to the mesoporous structures in the final carbonized products. In contrast, the Y samples did not develop mesopores, because micelle formation with the surfactant caused the sugar polymers to dehydrate before being incorporated into the hydrophobic core. This is evident from the smaller weight loss in the TGA plots in Figure S1 , compared with the N counterparts, between ca . 400 °C and ca . 700 °C. Hence, this mechanism suppresses the amount of mesopores that can form in emulsion-based HTC, as observed with the Y samples. 
To summarize, the layering effect between surfactants and sugar-derived carbonaceous polymers gave rise to higher surface area with smaller micropores in the Y samples, while the evolution of volatiles self-activated the N samples and created larger mesopore channels during the high-temperature carbonization step. Figure 5 Carbon surface characteristics based on gas adsorption-desorption isotherms and porosity. ( a ), ( b ), and ( c ) Adsorption-desorption isotherms for the aY, Y and N series samples, respectively. ( d ) Pore size distributions for different samples determined using the Quenched Solid Density Functional Theory (QSDFT). ( e ) Pore size distributions at longer length scales, from >2 nm to 200 nm. The color schemes are shown in the legends. Full size image Small-angle X-ray scattering (SAXS) characterization of two selected samples was carried out to investigate the porous structure within the samples. Figure 6 depicts the SAXS curves for the carbonized N45 and Y45 samples. One of the most notable differences is that Y45 exhibits a scattering shoulder in the high- Q region, i.e., 0.1 < Q < 0.6 nm −1 , while N45 merely shows an asymptotic decay in scattering intensity. Here, Q is the magnitude of the scattering vector defined as Q = | Q | = 4 πλ −1 sin θ , with λ and θ being the wavelength of the incident X-ray beam and half of the scattering angle, respectively. The high- Q scattering feature of Y45 indicates the existence of nanometer-scale structures created during carbon synthesis with the assistance of the surfactant, where hydrophobic carbon precursors accumulated inside the micelle and these segments were separated by the hydrophobic surfactant tails and/or the oil molecules. Considering the shape of the high- Q scattering shoulder, which shows Intensity ∝ Q −1 , we employed the Guinier-Porod model for cylindrical objects to fit it 40 . 
The data fit indicated the existence of cylindrical pores with an average diameter of 8.6 Å (0.86 nm). Note that both N45 and Y45 exhibit a power law with a fractal dimension of approximately 3 in the low- Q region, revealing the presence of a three-dimensional (3D) network structure. We anticipate that these are 3D bridged pores within the samples. The N45 sample shows a steeper slope than the Y45 sample in the low- Q region, indicating larger porous structures within the N45 samples (see Fig. 6 ). These data corroborate the measured pore volumes and surface areas shown in Table 1 : specifically, the pore volumes and surface areas of N45 and Y45 are ca . 0.416 vs. 0.347 cm 3 g −1 and ca . 273 vs. 743 m 2 g −1 , respectively. Figure 6 Small angle X-ray scattering (SAXS) data for N45 and Y45 carbon samples. Full size image Supercapacitor application To this end, we demonstrate the application of these renewable carbonaceous materials in renewable energy storage systems. We used the synthesized porous carbonaceous particles to prepare electrodes for EDL supercapacitors. EDL supercapacitors are energy storage devices whose power performance falls between that of dielectric capacitors and batteries on the Ragone plot, which exemplifies the energy and power relationships of different energy storage devices 41 . Unlike batteries and pseudo-capacitors, EDL supercapacitors do not rely on faradaic reactions 42 ; thus, they have higher charge-discharge rates and stabilities 43 . EDL supercapacitors rely solely on the electrostatic separation between ions in the electrolyte and electrons in the electrodes 44 . Using porous carbonaceous materials as electrodes has gathered significant interest 45 , 46 , especially when they are derived from renewable materials 47 , 48 , 49 . Figure 7(a,b) display typical cyclic voltammetry current-voltage (CV) curves and charge-discharge profiles, respectively, for the Y45 sample as an example. 
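For reference, the cylindrical (s = 1) Guinier-Porod form used for such fits, together with the standard relation between the fitted cross-sectional radius of gyration and the cylinder diameter, can be sketched as below. This is a generic statement of the empirical model, not the paper's fitting code, and the parameter values in the test are illustrative.

```python
import numpy as np

def guinier_porod_cylinder(q, G, rg, d):
    """Guinier-Porod model with dimensionality parameter s = 1 (cylinders):
        I(Q) = (G/Q) * exp(-Q^2 Rg^2 / (3 - s))   for Q <= Q1
        I(Q) = D / Q^d                            for Q >  Q1
    Q1 and D follow from requiring I(Q) and its slope to be continuous."""
    s = 1.0
    q1 = (1.0 / rg) * np.sqrt((d - s) * (3 - s) / 2.0)
    D = G * np.exp(-q1 ** 2 * rg ** 2 / (3 - s)) * q1 ** (d - s)
    q = np.asarray(q, dtype=float)
    guinier = (G / q ** s) * np.exp(-q ** 2 * rg ** 2 / (3 - s))
    porod = D / q ** d
    return np.where(q <= q1, guinier, porod)

def cylinder_diameter_from_rg(rg):
    """For a long cylinder of radius R, the cross-sectional radius of
    gyration satisfies Rg^2 = R^2 / 2, so the diameter is 2*sqrt(2)*Rg."""
    return 2.0 * np.sqrt(2.0) * rg
```

A fitted cross-sectional Rg of about 0.30 nm would thus correspond to the ca. 0.86 nm pore diameter quoted above.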
The rest of the CV and charge-discharge curves can be found in the Supporting Information. The CV curves show symmetrical rectangular shapes and the charge-discharge profiles show nearly symmetric triangular shapes at all scan rates and current densities, representing good to excellent capacitive performance. Capacitances generally follow the same trend as surface area and pore volume, based on the governing formula for capacitors, C = ε·A/d, where C is the capacitance of the supercapacitor, ε is the product of the electrolyte dielectric constant and the permittivity of free space, A is the surface area between the electrode and electrolyte, and d is the separation distance between the ions in the electrolyte and the electrons in the electrodes. As a result, the amount of charge that can be stored, i.e. the capacitance, increases with increasing accessible surface 44 . Therefore, aY20 and aY45, with the highest surface areas of all samples, give the highest capacitances of up to 113 F g −1 . The direct correlation between surface area and capacitance can be observed in Fig. 7(c,d) , where a summary of the capacitance values is shown. The notable exception to the trend is the N165 and Y165 pair: Y165 has a higher surface area than N165 but a lower capacitance. Although many factors can cause this discrepancy, at least one major factor is the pore characteristics of the two samples. The gas adsorption-desorption experiments showed that the N samples have larger pores than the Y samples. The Quenched Solid Density Functional Theory (QSDFT) model showed that Y165 has 56% of its pores smaller than 0.64 nm, the lower limit of our pore size measurements, whereas N165 has only 46% of such small pores. 
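The scaling C = ε·A/d can be made concrete with a back-of-envelope estimate. The permittivity and ion-electrode separation below are assumed illustrative values typical of a compact double layer, not parameters reported in this work; the point is only that a ~1000 m² g⁻¹ carbon plausibly yields a specific capacitance of order 100 F g⁻¹.

```python
# Back-of-envelope EDL capacitance from C = eps * A / d.
# eps_r and d are ASSUMED illustrative values, not fitted parameters.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 6.0             # assumed relative permittivity of the compact layer
d = 0.5e-9              # assumed ion-electrode separation, m
area_per_gram = 1000.0  # specific surface area, m^2/g (order of the aY samples)

c_per_gram = eps_r * EPS0 * area_per_gram / d  # F/g
print(f"estimated specific capacitance ~ {c_per_gram:.0f} F/g")  # ~ 106 F/g
```

This rough figure is on the same scale as the measured 113 F g⁻¹ for the highest-surface-area samples, though such agreement should not be over-interpreted given the assumed constants.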
Although the solvated potassium ion has a size of 0.31 nm and the solvated hydroxyl ion a size of 0.35 nm 50 , both much smaller than our 0.64 nm limit, we may still speculate that the larger pores in N165 partly compensated for its lower surface area in capacitance terms. Many micropores of Y165 could have blocked electrolyte ions from reaching the carbon electrode surface, reducing the effective surface area of the electrode for capacitance applications 51 , 52 . Mesoporosity, on the other hand, could ease the diffusion of electrolyte ions onto the carbon electrode surfaces, contributing to high capacitance, especially when fast diffusion of ions is required, as is the case at a high scan rate 50 , 53 . As a result, N165 and Y165 have similar capacitances at a slow scan rate, but as the scan rate increases, N165 gradually outperforms Y165. Similarly, the other Y samples generally have poorer rate handling capability than the N samples. Notably, aY165 had the worst rate handling capability, followed by aY45. As with Y165, the first suspect for the poor rate handling was a kinetic limitation from the pore size distribution. However, the DFT results for aY165 and aY45 did not differ much from those of aY20, the only other activated sample, which had good rate handling capability. Figure 7 Capacitance measurement using carbonaceous materials as electrodes of EDL supercapacitors. ( a ) Cyclic voltammetry IV curves for Y45 at 10, 20, 50, 100 and 200 mVs −1 scan rates. The legends are in mVs −1 . ( b ) Charge-discharge experiments for the Y45 sample at 200, 500, 1000 and 2000 mAg −1 current densities. Legends are in mAg −1 . ( c ) Capacitances for all the samples as shown in the legend color code. ( d ) Table showing the capacitance values at two different scan rates. ( e ) Electrochemical impedance spectroscopy (EIS) results and ( f ) cycle stability of aY20, aY45, and aY165. 
Full size image The activated samples were further analyzed using electrochemical impedance spectroscopy. The Nyquist plots in Fig. 7(e) exhibit almost vertical lines in the low-frequency region, representing behavior close to that of an ideal capacitor 17 , 54 . A shallower slope, closer to 45°, in the mid-to-low frequency range represents the Warburg resistance, which indicates slow diffusion of ions at the surface of the electrodes 55 . A good indication of the equivalent series resistance is obtained by extrapolating the vertical portion of the curve to the x-axis of the Nyquist plot 56 , 57 . Notably, the aY165 curve is shifted to the right relative to aY45, with aY20 the furthest to the left, indicating that the impedance of aY165 is higher than that of aY45, which in turn is closely followed by aY20. This explains the trend discussed earlier and observed in Fig. 7(c) : the capacitance-versus-scan-rate curve drops more steeply for aY165 than for aY45, with aY20 showing the shallowest drop. We believe the main reason for their conductivity differences, and thus their capacitance rate handling capability, is the different amounts of residual metal species in the activated samples. Indeed, the earlier ICP-OES results in Table S1 showed that aY45 and aY165 have the highest amounts of metal species from the iron catalyst and the KOH activation process: aY165 contains the most, with 0.897% iron and 0.535% potassium, followed by aY45, and then aY20 with the least (0.313% iron and 0.211% potassium). Because of the promising capacitances, long-term cycling stability over 5000 cycles was evaluated for all activated samples, as shown in Fig. 7(f) . Capacitance retentions of 98.3%, 97.2% and 99.8% were measured for aY20, aY45, and aY165, respectively, revealing high cycle stability for practical applications. So far, we have shown the supercapacitor properties of samples made from sugar-derived carbonaceous materials. 
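The Nyquist-plot reading used above (a vertical low-frequency line whose real-axis intercept approximates the equivalent series resistance) can be illustrated with a minimal series-RC sketch. The R and C values are arbitrary illustrative choices, not fitted circuit parameters for these samples.

```python
import numpy as np

def series_rc_impedance(freq_hz, r_s, c):
    """Impedance of an equivalent series resistance r_s in series with an
    ideal capacitance c: Z = R_s + 1/(j * omega * C)."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return r_s + 1.0 / (1j * omega * c)

freqs = np.logspace(-2, 5, 50)  # 10 mHz .. 100 kHz, roughly the EIS window used
z = series_rc_impedance(freqs, r_s=0.5, c=1.0)  # illustrative R (ohm) and C (F)

# On a Nyquist plot (Re Z vs -Im Z) this traces a vertical line at Re Z = R_s:
assert np.allclose(z.real, 0.5)
# -Im Z grows as frequency falls, i.e. the vertical low-frequency tail:
assert -z.imag[0] > -z.imag[-1]
```

A real electrode adds a 45° Warburg segment between the high-frequency intercept and this vertical tail, which is exactly the mid-frequency feature described in the text.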
To correlate the functionality of sugar-derived activated carbon with real-world waste management, the hollow carbon spheres made from the woodchip pretreatment liquid effluent were activated and characterized. A surface area of 872 m 2 /g and a pore volume of 0.511 cc/g were obtained. When made into supercapacitor electrodes, a capacitance of 109 F/g was measured at a scan rate of 5 mV/s, with 90.5% capacitance retention after 5000 cycles (Figure S4 ). This desirable result and nearly ideal capacitive behavior reinforce the potential for biorefineries to utilize their biomass waste, via simple emulsion-based hydrothermal synthesis, for EDL supercapacitor electrode applications as a value-added product. Conclusions We have analyzed the evolution of spherical carbon particles and the pore formation mechanisms from sugars via hydrothermal synthesis. The morphologies of the products can be controlled by modifying the composition of the medium and thus altering the carbonization mechanism. Both computational and experimental results show the intriguing effect that the carbon morphology evolves from a poorly defined charcoal material to perfectly shaped spherical carbon as the HTC time increases. The size of the spheres can also be controlled by the HTC duration. Because of the differences in reaction mechanisms, the surfactant-loaded precursor in the emulsion medium self-assembles into hollow spheres (Y samples), whereas in the absence of surfactant, in a simple hydrothermal reaction medium, the precursors yield only solid spheres (N samples). In terms of porosity, the Y samples have higher surface area and microporosity owing to their hollow nature and the layered templating effect of the surfactant molecules. In contrast, the N samples are both microporous and mesoporous, mainly due to the evolution of volatiles and the resulting activation during carbonization. 
The carbon surface area analysis, molecular dynamics simulations, and measured small-angle X-ray scattering data reveal the templating effect of surfactants in the emulsion-based hydrothermal synthesis of carbons. When these carbons were made into EDL supercapacitor electrodes, the observed capacitances correlated very well with the measured surface areas and exhibited excellent capacitive behavior. In particular, the Y samples synthesized for short durations (20–45 minutes) show very high capacitance ( ca . 50–80 F/g) even at high scan rates. The best-performing Y samples were then activated with KOH, and the surface areas and capacitances further improved, up to 1495 m 2 g −1 and 113 F g −1 , respectively, with 98.3% capacitance retention for aY20 after 5000 cycles. Finally, using our reaction pathway, we produced supercapacitor electrodes from the liquid effluent of steam-pretreated woodchips, a typical byproduct of biorefineries. Perfectly hollow spheres could be synthesized from the woodchip effluent, and the resulting activated hollow carbon spheres exhibited a desirable surface area of 872 m 2 /g and capacitances of up to 109 F/g. Almost ideal capacitive behavior was observed, with 90.5% capacitance retention after 5000 cycles. Thus, we provide a potential route for deriving energy storage materials from a renewable resource. While the investigation was performed at a laboratory scale, the overall synthesis technique is simple enough to be scaled up to industrial standards. The advantage of this method is threefold: (1) the synthesis technique is simple, (2) the method can be used in parallel with biorefinery unit operations, and (3) the cost of biorefineries will eventually decrease, since a waste product can be converted into an energy storage material. We believe this work will influence a change in the practices of biorefineries towards achieving their goal of competing with fossil fuels. 
Methodology Hydrothermal synthesis and carbonization HTC of carbon was conducted with 120 ml of a 0.5 M sucrose (Diamond Crystal, Savannah, GA) and 0.5 M FeCl 3 solution. In a second case, the 0.5 M sucrose and 0.5 M FeCl 3 solution was prepared in an emulsion made by ultrasonicating 1.2 g sodium dodecyl sulfate (SDS), 24 ml paraffin oil (Merck-KGaA, Darmstadt, Germany), and 96 ml DI water. The hydrothermal reactions were carried out for 20, 45, or 165 minutes. Additionally, the liquid effluent of woodchips (obtained from carpentry waste from eastern Tennessee mixed hardwood biomass) steam-pretreated at 180 °C for 12 hours was made into an emulsion, as in the previous case with SDS and paraffin oil, and then subjected to HTC for 165 minutes with 1.2 g FeCl 3 . The amount of carbohydrate in the liquid effluent was estimated at around 2.5 g. All chemicals were purchased from Sigma-Aldrich unless noted otherwise. Synthesis was done in a 200 mL PPL-lined stainless steel autoclave (Columbia International). The autoclave was placed in an oven at 200 °C for the specified synthesis time. The hydrothermally synthesized solid product was then placed in a quartz tube inside a tube furnace and carbonized at 1000 °C for 20 minutes in a nitrogen atmosphere. The carbonized material was then washed with 0.5 M HCl and water. Activation KOH was ground with the dried samples in a 2:1 (KOH:sample) ratio. The ground samples were then heated in a tube furnace under a nitrogen atmosphere at a rate of 8 °C per minute to 800 °C and held for 30 minutes. The activated samples were then cooled, washed with water, and dried. The activated carbon materials were designated "aID" (e.g., activated Y20 is named aY20). Characterization Scanning electron microscope (SEM) images were collected with a Hitachi S4800. Carbon sphere diameters were measured with the ImageJ software and characterized by the mean and standard deviation of 10 randomly chosen spheres within a sample. 
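The solution quantities stated in the synthesis description above imply the following reagent masses. This is illustrative arithmetic using standard molar masses (anhydrous FeCl3 assumed); the paper reports molarities and volume, not weighed masses.

```python
# Masses implied by 120 ml of 0.5 M sucrose / 0.5 M FeCl3 solution.
VOLUME_L = 0.120     # 120 ml
M_SUCROSE = 342.30   # g/mol, C12H22O11
M_FECL3 = 162.20     # g/mol, anhydrous FeCl3 (assumed)

grams_sucrose = 0.5 * VOLUME_L * M_SUCROSE  # mol/L * L * g/mol
grams_fecl3 = 0.5 * VOLUME_L * M_FECL3
print(f"sucrose: {grams_sucrose:.2f} g, FeCl3: {grams_fecl3:.2f} g")
# -> sucrose: 20.54 g, FeCl3: 9.73 g
```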
Transmission electron microscope (TEM) images were taken using a Zeiss Libra 120 Transmission Electron Microscope operating at 100 kV. Samples were dispersed onto a carbon-film-coated copper grid before analysis. Thermogravimetric analysis was performed on a Q500 (TA Instruments). For each sample, ca . 15 mg was placed on a platinum pan; the temperature was ramped to 105 °C at 10 °C/min, held for 30 min, and then ramped to 1000 °C at 7.5 °C/min in a nitrogen atmosphere. Inductively coupled plasma optical emission spectrometry was outsourced to Galbraith Laboratories Inc., a commercial analytical chemistry laboratory in Knoxville, TN. Nitrogen adsorption-desorption experiments were carried out with a Quantachrome Autosorb iQ at 77 K. Surface area, pore size distribution, and pore volume were determined using the Quenched Solid Density Functional Theory (QSDFT) 58 . For capacitance measurements, the carbonaceous material was first mixed with a conductive carbon black (Timcal Super C45) at an 8:1 ratio, then with 10 wt.% aqueous polytetrafluoroethylene (60% dispersion in water). The mixture was then combined with ethanol to form a paste, which was coated onto two 7/16-in.-diameter Ni-foam discs and dried overnight. These current collectors were then pressed and used as electrodes in symmetrical two-electrode cells with 6 M KOH as the electrolyte. Filter paper served as the separator. Two stainless steel rods were used to clamp the electrode-filter paper-electrode assembly, which was housed inside a Teflon Swagelok cell. A VersaSTAT 4 (Princeton Applied Research) was used to perform cyclic voltammetry, charge-discharge, and electrochemical impedance spectroscopy (EIS) experiments. A 0 to 0.8 V voltage window was used. Scan rates varied from 10 mV s −1 to 200 mV s −1 and current densities from 200 mA g −1 to 2000 mA g −1 . EIS was conducted over a frequency range of 500 kHz to 50 mHz with an amplitude of 10 mV. 
The specific capacitances were calculated from C = 2q/mE, where E is the voltage window of 0.8 V, m is the mass of the carbon sample used, and q is the accumulated charge as calculated by the VersaStudio software. Cycle stability was evaluated on an Arbin battery cycler (Arbin Instruments) at 500 mA g −1 . Small-angle X-ray scattering (SAXS) data were acquired at the Center for Nanophase Materials Sciences (CNMS) at Oak Ridge National Laboratory on an Anton Paar SAXSess mc 2 . The scattered beam was recorded on a CCD detector (PI-SCX, Roper) with a resolution of 2084 × 2084 pixels and pixel dimensions of 24 × 24 µm 2 . The data collection time was 20 minutes. For the measurements, the X-rays were generated at 40 kV/50 mA at a beam wavelength of λ = 1.541 Å (Cu Kα radiation). The X-ray beam was slit-collimated using a Kratky camera, giving a beam size of 18 mm (length) × 0.4 mm (width), and the collected SAXS data were desmeared and expressed as intensity versus Q , where Q = (4 π sin θ )/ λ , after subtraction of the detector dark current and background scattering. Computational Model Coarse-grained molecular dynamics (CGMD) simulations were performed on a dilute mixture of sugar and SDS surfactant. The sugar molecules were modeled as short polymer chains of 15 monomeric units, 4 of which were charged hydrophilic monomers on the polymer backbone. The charges on the polymer chains make the chain slightly polar, thereby mimicking the polar hydroxyl groups of the sugar molecules. The charge sites are converted to neutral once aggregates are formed, to mimic the fully hydrophobic carbonaceous sugar molecules after HTC. The SDS surfactants were modeled as 12-mer polymer chains with a 1-mer hydrophilic polar head and an 11-mer hydrophobic tail, as has been done in previous CGMD studies 59 , 60 . While the experiments were performed in both water and emulsion (with SDS), we performed only one set of simulations, in SDS (the emulsion case in the experiments). 
Because the purpose of the simulation was to understand the self-assembly underlying carbonaceous bead formation, which proceeds irrespective of the presence or absence of SDS, we chose the latter (SDS) case. The interactions between the neutral monomers were modeled using the Lennard-Jones (LJ) force field, while the charge interactions were modeled using explicit Coulomb interactions. Each monomer bead was characterized by a mass, m , and a Lennard-Jones bead diameter, σ . For the simulations, we set the mass and LJ diameter to 1 and 0.97, respectively, the same for all monomers. As the variation in m and σ between sugar and surfactant monomers is relatively small, this choice provides the critical self-assembly information without drastically altering the fundamental physics. The model system consisted of 2000 polymer chains and 2000 surfactant molecules in a periodic box of 100 σ × 100 σ × 100 σ at a density of 0.064 σ −3 . In the experiments, the hydrothermal carbonization process strips the charges off the sugar and SDS molecules; therefore, the most realistic way to model hydrothermal carbonization computationally is to strip off the charges after a certain simulation time. Hence, we modeled the carbonization process by stripping the charges from both the polymers and the surfactants after equilibrating for 5 million LJ time steps. With all charges deleted from the system, the interaction between the monomers becomes solely hydrophobic, representing a purely carbonaceous material. The polymer chains and surfactant molecules undergo self-assembly during equilibration; however, to achieve equilibration of the fully hydrophobic (no-charge) system, we ran for 3 million more time steps. All the simulation parameters are in reduced units. The temperature is fixed at T* = \(1.0{k}_{B}T/\varepsilon \) and the simulation time step at Δt = 0.01 τ , where τ is the LJ time unit. 
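The reduced-unit LJ interaction that remains after charge-stripping can be written out explicitly. The sketch below uses the stated parameters (ε = 1, σ = 0.97); the potential minimum at r = 2^(1/6)σ with depth -ε is the attraction that drives hydrophobic bead aggregation once the charges are removed. The function name is ours, not from the simulation code.

```python
def lj_energy(r, eps=1.0, sigma=0.97):
    """Lennard-Jones pair energy in reduced units:
    U(r) = 4 * eps * ((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# The minimum sits at r_min = 2**(1/6) * sigma with depth -eps; after
# charge-stripping this attraction alone drives the hydrophobic aggregation.
r_min = 2 ** (1 / 6) * 0.97
assert abs(lj_energy(r_min) + 1.0) < 1e-12
```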
Visualizations of the MD trajectories were generated with the VMD code 61 , and the structural analysis was performed using our in-house code.
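As a worked example of the specific-capacitance relation quoted in the methodology (C = 2q/mE for one electrode of a symmetric two-electrode cell, with q = I·t from a galvanostatic discharge), under hypothetical discharge values:

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v=0.8):
    """Specific capacitance of one electrode in a symmetric two-electrode cell,
    C = 2q/(m*E) with q = I * t taken from a galvanostatic discharge."""
    q = current_a * discharge_time_s                 # accumulated charge, C
    return 2.0 * q / (mass_g * voltage_window_v)     # F/g

# Hypothetical discharge: 2 mA for 90 s on a 4 mg electrode over the 0.8 V window
print(specific_capacitance(0.002, 90.0, 0.004))  # -> 112.5 F/g
```

The factor of 2 arises because the two series electrode capacitances of the symmetric cell each see half the cell voltage.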
Biorefinery facilities are critical to fueling the economy—converting wood chips, grass clippings, and other biological materials into fuels, heat, power, and chemicals. A research team at the US Department of Energy's (DOE's) Oak Ridge National Laboratory has now discovered a way to create functional materials from the impure waste sugars produced in the biorefining processes. Using hydrothermal carbonization, a synthesis technique that converts biomass into carbon under high temperature and pressure conditions, the team transformed waste sugar into spherical carbon materials. These carbon spheres could be used to form improved supercapacitors, which are energy storage devices that help power technologies including smartphones, hybrid vehicles, and security alarm systems. The team's results are published in Scientific Reports, a Nature research journal. "The significant finding is that we found a way to take sugar from plants and other organic matter and use it to make different structures," said Amit Naskar, a senior researcher in ORNL's Materials Science and Technology Division. "Knowing the physics behind how those structures form can help us improve components of energy storage." By modifying the synthesis process, the researchers created two varieties of the novel carbon spheres. Combining sugar and water under pressure resulted in solid spheres, whereas replacing water with an emulsion substance (a liquid that uses chemicals to combine oil and water) typically produced hollow spheres instead. "Just by substituting water for this other liquid, we can control the shape of the carbon, which could have huge implications for supercapacitor performance," said Hoi Chun Ho, a Ph.D. candidate working with Naskar at the Bredesen Center for Interdisciplinary Research and Graduate Education, a joint venture of ORNL and the University of Tennessee, Knoxville. 
The team also discovered that altering the duration of synthesis directly affected the size and shape of the spheres. To further explore the discrepancies between solid and hollow carbon structures, the team ran synthesis simulations on the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. They also used transmission electron microscopy (TEM) and small-angle X-ray scattering (SAXS) tools at the Center for Nanophase Materials Sciences (CNMS), another DOE Office of Science User Facility, to characterize the capabilities and structure of the carbon samples. "We wanted to determine what kind of surface area is good for energy storage applications, and we learned that the hollow spheres are more suitable," said ORNL researcher Monojoy Goswami of CNMS and the Computer Science and Engineering Division. "Without these simulations and resources, we wouldn't have been able to reach this fundamental understanding." With this data the team tested a supercapacitor with electrodes made from hollow carbon spheres, which retained about 90 percent capacitance—the ability to store an electric charge—after 5,000 charge cycles. Although supercapacitors cannot store as much energy as batteries can store, they have many advantages over batteries, such as faster charging and exceptionally long lifetimes. Some technologies contain both batteries to provide everyday energy and supercapacitors to provide additional support during peak power demands. "Batteries often support smartphones and other electronic devices alone, but supercapacitors can be useful for many high-power applications," Ho said. "For example, if a vehicle is driving up a steep hill with many passengers, the extra strain may cause the supercapacitor to kick in." The pathway from waste sugar to hollow carbon spheres to supercapacitors demonstrates new potential for previously untapped byproducts from biorefineries. 
The researchers are planning projects to find and test other applications for carbon materials derived from waste sugar, such as using carbon fibers to reinforce polymer composites. "Carbon can serve many useful purposes in addition to improving supercapacitors," Ho said. "There is more work to be done to fully understand the structural evolution of carbon materials." Making use of waste streams could also help scientists pursue forms of sustainable energy on a broader scale. According to the ORNL team, biorefineries can produce beneficial combinations of renewable energy and chemicals but are not yet profitable enough to compete with traditional energy sources. However, the researchers anticipate that developing useful materials from waste could help improve efficiency and reduce costs, making outputs from these facilities viable alternatives to oil and other fossil fuels. "Our goal is to use waste energy for green applications," Goswami said. "That's good for the environment, for the biorefinery industry, and for commerce."
10.1038/s41598-018-25880-0
Biology
Study finds relationships among herbicide-resistant weeds, tillage practices and agricultural greenhouse gas emissions
Chaoqun Lu et al, Emerging weed resistance increases tillage intensity and greenhouse gas emissions in the US corn–soybean cropping system, Nature Food (2022). DOI: 10.1038/s43016-022-00488-w Journal information: Nature Food
https://dx.doi.org/10.1038/s43016-022-00488-w
https://phys.org/news/2022-04-relationships-herbicide-resistant-weeds-tillage-agricultural.html
Abstract Tillage is a common agricultural practice that helps prepare the soil and remove weeds. However, it remains unknown how tillage intensity has evolved and its effect on net greenhouse gas (GHG) emissions. Here, using a process-based modelling approach with a multi-source database, we examined the change in tillage intensity across the US corn–soybean cropping systems during 1998–2016 and the impact of tillage intensity on soil GHG emissions. We found that tillage intensity first decreased and then, after 2008, increased, a trend that is strongly correlated with the adoption of herbicide-tolerant crops and emerging weed resistance. The GHG mitigation benefit (−5.5 ± 4.8 TgCO 2 e yr −1 ) of decreasing tillage intensity before 2008 has been more than offset by increased GHG emissions (13.8 ± 5.6 TgCO 2 e yr −1 ) due to tillage reintensification under growing pressure of weed resistance. As weed resistance persists or grows, tillage intensity is anticipated to continue rising, probably increasing GHG emissions. Our results imply that farmers’ choices in managing herbicide resistance may help mitigate agricultural GHG emissions, underscoring the importance of an alternative strategy to control weeds. Main Emissions of greenhouse gases (GHGs), such as carbon dioxide (CO 2 ), methane (CH 4 ) and nitrous oxide (N 2 O), from agriculture (cultivation of crops and livestock) and deforestation account for about a quarter of global total GHG emissions 1 . In the United States, agriculture contributed ∼ 10% of total GHG emissions in 2018, a proportion that has increased by 10% since 1990, which represents a substantial increase compared with the national total GHG emission increase of 3.7% in the same period 2 . The agriculture sector provides a notable GHG mitigation potential 3 , but doing so requires a deep understanding of the sector’s GHG flux dynamics and their key environmental drivers including human management practices 4 . 
Tillage is an important cropping practice that helps prepare the soil and remove weeds. Although various definitions of tillage types exist in the literature, for our purposes, tillage practices can be grouped into three types, namely, conventional tillage, conservation tillage and no-till, which differ by degrees of soil disturbance and residue retention. Conventional tillage leaves less than 15% residue on the soil surface, while conservation tillage has at least 30% residue left and no-till keeps the soil covered 100% of the time 5 , 6 . Various tillage practices have different impacts on the physical, hydrological and biogeochemical processes in the soil. For example, conventional tillage practices (such as disc ploughing) not only promote soil organic carbon oxidation and decomposition but also accelerate soil erosion by increasing soil exposure to wind and rain 7 . On the other hand, no-till and conservation tillage (such as strip-till and mulch-till) have been widely adopted by farmers to conserve soil and water 8 . However, the no-till system contributes less than is often assumed to agricultural sustainability because it may retard springtime soil warming, increase weed, pest and disease pressures, and lead to crop yield loss 9 , 10 , 11 , 12 . There are many reasons why tillage intensity has mostly declined on the US cropped acres in the past decades. Reduced tillage has been widely adopted to suppress soil erosion, preserve moisture and reduce crop production cost in the use of fuel, labour and machinery 8 , 13 . The advent of herbicide-tolerant (HT) crops, commencing in the late 1990s, has made it possible to spray herbicide over the growing crops, further reducing reliance on tillage 14 . But the benefit of HT crop adoption in reducing tillage might not be sustainable in the long run as weed resistance has emerged to the main chemical used, glyphosate 15 . Evidence to date suggests that partial reversion to conventional tillage has resulted 16 , 17 . 
For example, a recent study 17 reveals that the shares of conservation tillage and no-till in soybean fields declined by 3.9% and 7.6%, respectively, when eight glyphosate-resistant weed species are identified, despite little initial effect on tillage practices upon first emergence of weed resistance. However, the consequences of the changing tillage intensity in soil GHG fluxes during this period remain unclear. In the United States, a wide variety of studies have been conducted to quantify the GHG mitigation potential of the agriculture sector 18 , 19 , 20 . More recent efforts have involved seeking policy and market solutions that promote additional mitigation practices 21 , 22 , 23 , 24 . Nonetheless, most existing tillage-related assessment and prediction activities either lack data to characterize the spatiotemporal patterns of tillage practices and their intensity changes or focus on the resultant fluxes of single GHGs. This limits the explicit characterization of system responses and hinders us from identifying and adopting sustainable management practices. Although the US Geological Survey developed tillage intensity maps for 1989–2004 by aggregating county-level survey into eight-digit hydrologic unit watersheds 25 , little is known about how tillage practices in the United States have changed in more recent years, especially given increasing concerns about herbicide-resistant weeds 13 , 16 , 26 . In addition, there is still limited understanding as to how tillage decisions are driven by environmental stressors such as herbicide and herbicide-resistant weeds, and how they together have affected GHG mitigation outcomes during recent decades. There is substantial evidence that using more intensive tillage is a coping strategy for many farmers faced with herbicide-resistant weeds, and this has raised concerns about negative environmental impacts 16 , 17 . 
Here we use a process-based land ecosystem model, a long-term farmers’ survey and time-series gridded data of environmental changes to examine the relationships between genetically engineered HT crop adoption, the emergence of weed resistance to herbicide and farmers’ decisions in tillage practices, and how historical tillage practices altered net GHG fluxes in agricultural land (Fig. 1 ). Our study in the United States could provide insightful information for other agricultural regions in the world that are impacted by growing weed pressure, herbicide resistance, intensifying tillage and diminished GHG mitigation potential. Fig. 1: Conceptual depiction of the hypothetical GHG fluxes in response to tillage intensity changes affected by HT crop adoption and emergence of herbicide-resistant weeds. Arrow thicknesses and box sizes indicate the intensity of tillage practice, herbicide use and seed varieties in controlling weeds. The conditions in phase I and II represent the tillage intensity shift before and after 2008 in the United States, respectively, and the tillage-affected GHG fluxes that are examined in this study. The arrow direction of the y axis indicates higher tillage intensity or more GHG emissions. (Background photograph © USDA). A key data source of our study is a commercial survey of farmer choices regarding corn and soybean seed varieties, pesticide choices (including herbicide, insecticide and fungicide) and intensity of tillage practices 17 . These data allow us to develop time-series gridded maps to characterize the location and intensity of tillage practices in the US corn–soybean cropping systems during 1998–2016. We explored the reasons that were likely to shape the changes in tillage intensity across the country by examining the relationships between the state-level adoption rate of genetically engineered HT crops, the number of herbicide-resistant weed species, and corn–soybean acreage under different tillage practices. 
Furthermore, we used the annual tillage intensity maps to drive a land ecosystem model, Dynamic Land Ecosystem Model (DLEM), to distinguish and quantify how historical tillage practice changes in the US corn–soybean cropping system have affected net fluxes of CO 2 and N 2 O. Results Tillage intensity change and potential contributing factors We adopted a unitless index to represent the national and state-level tillage intensity by standardizing the acreage ratio of intensive to less-intensive tillage practices (for example, acreage ratio of conventional to conservation tillage, conservation to no-till, and conventional to no-till) into a value between 0% and 100% ( Methods ). Our analysis indicates that tillage intensity in the US corn–soybean cropping systems declined substantially during 1998–2008, but shifted to an increasing trend after 2008 (Fig. 2c,d , blue line). The farmers’ survey data indicate that corn and soybean acreage under no-till practice increased by 5.2 Mha and 6.8 Mha, respectively, during 1998–2008. However, this increase was followed by a no-till cropland acreage decline, comprised of 0.2 Mha from corn and 2.4 Mha from soybean, in the period 2009–2016 (Fig. 2a,b ). No-till was used on 20–33% (minimum–maximum) of national corn acreage, but had a much larger share (34–55%) on soybean-planted lands over the study period (Supplementary Fig. 1 ). This supports the statement in Livingston et al. 27 that soybean production is more reliant on herbicide and less on tillage than is corn production. However, the no-till shares estimated in our study only reflect the annual percentage of corn- and soybean-planted areas under no-till according to the surveyed farm operators, without separating continuous or intermittent no-till. 
By using the same definition of no-till (defined as the absence of any tillage operation from harvest of the previous crop to harvest of the current crop), we found that the acreage shares of no-till reported by our data were close to the three most recent field-level crop-specific production practice surveys conducted by the US Department of Agriculture (USDA) Agricultural Resource Management Survey (ARMS), that is, 23–27% of total corn acreage was under no-till in 2005, 2010 and 2016, and 35–45% of total soybean acreage in 2002, 2006 and 2012 6 . Fig. 2: Annual changes of crop acreage under each tillage practice since 1998 and possible factors regulating the tillage intensity shift. a – d , Annual acreage changes of corn ( a ) and soybean-planting area ( b ) in the lower 48 states of the United States; and the relationships between tillage intensity index (tillage index), accumulated species number (AccSpecies) of weeds that are resistant to herbicides, and planting percentage of HT crop varieties (HT%) for corn ( c ) and soybean ( d ). Dotted and solid lines in c and d indicate the period before and after 2008, respectively. Correlation coefficients indicate the relationship between the two lines specified. Corn acreage changes under conventional tillage showed annual fluctuations around the zero line (no change), whereas areas receiving conservation tillage first declined and then increased during 1998–2016. Soybean acreage under conservation and conventional tillage changed with a similar pattern, first declining until 2007 and then increasing with the 2016 level close to or above the initial level of 1998. Increases in no-till crop acreage before 2008 predominantly occurred in the Mississippi River basin, the Corn Belt and the lower Mississippi alluvial valley in particular, and small areas along the US east coast (Supplementary Fig. 2 ). Accordingly, the acreage of conservation and conventional tillage decreased in these areas. 
However, the no-till area declines after 2008 were mainly found in the southern United States and the southern part of the Midwest, while intensifying tillage centred in the central and northern part of the Midwest where a large amount of fertilizer has been applied to boost crop growth 28 . The spatial heterogeneity of tillage practice changes explained the faster GHG responses in phase II shown by the conceptual diagram (Fig. 1 ) and in our model estimations. Among states, the central (for example, Illinois, Indiana and Iowa) and western Corn Belt states (for example, Nebraska, North Dakota and Minnesota) had large variations of different tillage practices over the years in corn and soybean acreage (Supplementary Figs. 3 and 4 ). We examined the relationships among tillage intensity, crop seed varieties and weed resistance since 1998. Our analysis demonstrated that the early-stage (1998–2008) reduction in tillage intensity was strongly correlated with the increases in national adoption rate of genetically engineered HT corn and soybean varieties. The latter included seeds that had HT genes only, as well as stacked genes (that is, with both HT and insect-tolerant traits). Nationally, the adoption rate of HT crops has substantially increased since the beginning of the study period, reaching a level above 90%, or close to 100% in some states during the 2000s (Supplementary Figs. 5 and 6 ). The same survey data reveal that the national average share of HT varieties in all the planted corn grew from 11% in 1998 to ∼ 90% after 2008, while HT soybean varieties increased from 61% in 1998 to ∼ 95% since 2004. The percentage of HT crops in soybean production started increasing earlier than that in corn production and also peaked a few years earlier. The share of HT varieties in both crops levelled off in the early- to mid-2000s, and thus had no significant relationship ( P > 0.1) with the post-2008 increases in tillage intensity. 
Nonetheless, we find that the increasing number of herbicide-resistant weed species was closely correlated to the rising tillage intensity after 2008 (with a correlation coefficient of 0.81 in corn and 0.87 in soybean, P < 0.01) (Fig. 2c,d ). Likewise, weed resistance to herbicide was more prevalent in the US soybean production than in corn production, with the cumulative number of species up to 220 and 166, respectively, in 2016. This may be caused by more herbicide usage and less reliance on tillage in soybean production than in corn production. Similar results were reported by Livingston et al. 27 , in which 5.6% of corn acres in 2010 and 40% of soybean acres in 2012 were identified as being infested by glyphosate-resistant weeds or declines in glyphosate effectiveness. Our results demonstrate the historical role of HT crop adoption and growing weed pressure in changing tillage intensity across the United States. It also implies that, in the future, farmers are likely to adopt tillage practices on more cropped acres to control weeds given that tillage does not promote herbicide resistance. Tillage impacts on GHG fluxes We set up a series of simulation experiments using DLEM to distinguish between and quantify the impacts of historical tillage practices and tillage intensity change (TIC) on soil GHG emissions. The former is estimated as the difference between the experiment driven by historical tillage practices versus the no-till scenario, while the latter reflects the differences between experiments under varied versus fixed tillage practices ( Methods ). Model simulation results show that historical tillage practices during 1998–2016 resulted in net GHG emissions at a rate of 64.3 ± 20.0 TgCO 2 e yr −1 (mean ± SD, SD indicating interannual variability of GHG estimates). 
This is close to the direct N 2 O emissions from national synthetic fertilizer uses (that is, 63.6 TgCO 2 e yr −1 in 2016 reported by the Environmental Protection Agency 29 ), but with larger interannual variations (SD, 31% of mean). Nearly half of the tillage impact was attributed to tillage-induced CO 2 emission from soils, and the rest from direct soil N 2 O emissions. In the context of multifactor environmental changes, tillage impacts were estimated to range from 32.9 TgCO 2 e yr −1 in 2009 to 102.8 TgCO 2 e yr −1 in 2012 (Fig. 3 ). The highest tillage-induced GHG emissions we found, in 2012, were likely caused by crop mortality in the summer drought which limited crop nitrogen demand and provided more substrate for decomposition and denitrification 30 . The maximum–minimum GHG difference resulting from tillage practices on the US corn–soybean system was equivalent to 5–23% of the global GHG mitigation potential of crop management (0.3-1.5 PgCO 2 e yr −1 (ref. 31 )). This difference suggests a sizeable GHG mitigation potential in tillage management, if the tillage decision can be made with consideration of crop nitrogen demand/supply balance and the impacts of climate extremes. Fig. 3: Model-estimated impacts of tillage practices on GHG emissions in the US corn–soybean cropping system during 1998–2016. Error bars denote modelled uncertainty, which is the standard deviation calculated from multiple model runs with various values of key parameter sets (details of uncertainty estimation and simulation experiments can be found in the Supplementary Information ). Note that the lowest tillage-induced GHG emission is not found in 2008 when national tillage intensity was the lowest because the complex interactions between tillage practices and other environmental changes within managed croplands, and the legacy effects of residual removal, are also included in model estimations. 
Our estimation shows that tillage-induced GHG emissions declined at an annual rate of 4.6 TgCO 2 e yr −1 during 1998–2008, and then marginally increased by 2.7 TgCO 2 e yr −1 during 2009–2016 (Fig. 3 ). Regardless of change direction, the changing trends were at a similar level or higher than the reported trend of annual GHG emissions from the US agriculture sector during 1990–2016 (including CO 2 , CH 4 and N 2 O emissions from agricultural soil management, rice cultivation, livestock and manure management, liming and field burning of agricultural residues, 2.3 TgCO 2 e yr −1 (ref. 29 )). Impacts of tillage intensity change on net GHG fluxes Tillage impact on GHG emissions first declined during 1998–2008 and then increased, which was predominantly determined by tillage intensity change across the United States. The GHG mitigation rate from tillage reduction alone in the period 1998–2008 (−5.5 ± 4.8 TgCO 2 e yr −1 , 1998–2008; Fig. 4a ) could offset the annual GHG emission increase from the entire US agriculture sector 29 over the same period, but this mitigation benefit disappeared after 2008, and the tillage impact shifted to accelerating GHG emissions. We estimated that the tillage intensity increase during 2009–2016 resulted in a net GHG source of 13.8 ± 5.6 TgCO 2 e yr −1 , more than double the GHG mitigation rate due to reduced tillage intensity in the preceding decade. Fig. 4: Tillage intensity change-induced soil GHG emissions. a , b , Annual average ( a ) and accumulated ( b ) CO 2 and N 2 O fluxes (in CO 2 e) resulting from tillage intensity change relative to the year 1998 (for the period 1998–2008) and the year 2008 (for the period 2009–2016). We selected 2008 as the benchmark year for the post-2008 assessment because national tillage intensity started to increase from this year. The cumulative GHG fluxes in b reflect how soon the GHG mitigation during 1998–2008 has been offset by the increased GHG emissions after 2008. 
The shaded area in a represents the 95% confidence interval (CI) as calculated from multiple simulation experiments with prescribed parameter values. We find that the declining tillage intensity cumulatively reduced GHG emissions by 61.0 TgCO 2 e during 1998–2008, while increased GHG emissions of 110.0 TgCO 2 e were caused by intensifying tillage in the post-2008 period (Fig. 4b ). The cumulative impacts of tillage practice change shifted from a net GHG sink to a net source in 2013. During the past approximately two decades (1998–2016), cumulative GHG emissions due to tillage intensity change in the US corn–soybean cropping system was estimated as 49.1 TgCO 2 e. This change was equivalent to 1.2-fold of the net GHG emission increase from the whole US agriculture sector during the same period (annual increase rate of 2.18 TgCO 2 e yr −1 for 19 yr, 41.4 TgCO 2 e in total 29 ). The model estimates for the US corn–soybean system reveal that the tillage intensity changes over the past two decades are large enough to shape the dynamics of national agricultural soil GHG fluxes. Our work implies that the benefit of HT crop adoption in reducing tillage has reached its peak, while the emerging weed resistance is found to contribute to intensifying tillage practices. As weed resistance persists and grows, tillage intensity is anticipated to continue to rise, which would further increase GHG emissions and contribute to global warming. Spatial patterns of tillage impacts Our model estimations indicate a substantial impact of historical tillage adoption on GHG fluxes, equivalent to the role of agricultural fertilizer input. Nevertheless, there exist large interannual variations due to tillage intensity changes and their interactions with other factors such as climate variability and crop resource use efficiencies. 
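The cumulative figures quoted above can be cross-checked with a few lines of arithmetic. This is a minimal sketch using only the rounded values reported in the text, so the net total comes out near, not exactly at, the published 49.1 TgCO 2 e:

```python
# Cross-check of the cumulative GHG numbers quoted in the text
# (values in TgCO2e, taken directly from the article; rounding applies).

mitigated_1998_2008 = -61.0   # cumulative reduction from declining tillage intensity
emitted_2009_2016 = 110.0     # cumulative increase from re-intensified tillage

net_1998_2016 = mitigated_1998_2008 + emitted_2009_2016
print(net_1998_2016)  # ~49, consistent with the reported 49.1 TgCO2e after rounding

# Comparison with the whole-sector increase: 2.18 TgCO2e/yr over 19 yr
sector_total = 2.18 * 19
print(round(sector_total, 1))                    # 41.4 TgCO2e
print(round(net_1998_2016 / sector_total, 1))    # ≈1.2-fold, as stated
```

The small gap between 49.0 and 49.1 reflects rounding of the two cumulative inputs, not an inconsistency in the article's accounting.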
Spatially, the highest tillage-induced GHG emissions are found to centre in the Corn Belt area in 1998, the Prairie Pothole Region in particular, including northern Iowa and southwestern Minnesota (Fig. 5 ). Due to the decline in tillage intensity, GHG emissions in 2008 were reduced, which was consistent with the spatial shift of tillage intensity across the region (Supplementary Figs. 2 and 9 ). However, GHG emissions due to tillage rebounded in 2016, resulting in a pattern similar to 1998 but with wider source areas (Fig. 5 ). The reduced GHG emissions under tillage practices (shown by shades of green in Fig. 5 ) reflect more weight on the role of tillage in reducing denitrification rate and residue retention in these areas. Fig. 5: Spatial patterns of the model estimated GHG fluxes due to the historical use of tillage across the US corn–soybean cropping system. a – c , The impacts of historical use of tillage on GHG fluxes for the years 1998 ( a ), 2008 ( b ) and 2016 ( c ). In terms of the impacts of tillage intensity changes, we find the spatial distribution and magnitude for the accumulated CO 2 emissions are similar to those of N 2 O fluxes before and after 2008 (Fig. 6 ). However, the spatial coverages of tillage-affected areas differ between these two periods. The considerable spatial heterogeneity in the GHG flux responses (that is, a mixture of negative and positive values) is primarily caused by the variations in local climate, soil properties and cropping system mixed with tillage intensity changes across the country (Fig. 6 and Supplementary Fig. 2 ). The areas with increased GHG emissions due to intensifying tillage after 2008 covered the entire Corn Belt and the lower Mississippi alluvial valley. In particular, the western Corn Belt, including the Dakotas and part of Minnesota, stood out with high emission rates, in which corn and soybean cropping systems expanded in the most recent decade 32 , 33 , 34 . 
They are found to be larger than the pre-2008 GHG mitigation areas that resulted from reducing tillage, which are concentrated in the central Corn Belt, the lower Mississippi alluvial valley, and the US east coast. Fig. 6: Accumulated GHG fluxes (gCO 2 e m −2 ) due to tillage intensity changes across the US corn–soybean cropping system. a – f , Soil fluxes of CO 2 ( a , b ), N 2 O ( c , d ), and their sum ( e , f ) for 1998–2008 (accumulated over 11 yr, a , c , e ) and 2009–2016 (accumulated over 8 yr, b , d , f ). Discussion Uncertainties with fall tillage practice considered Most tillage is implemented in spring. It should be noted that autumn tillage may be adopted before and in addition to spring tillage on some farmlands, but we do not exactly know where and when double tillage was implemented across the United States. Considering both spring and fall tillage, our model predicted that historical tillage practices could lead to a net GHG emission of 92.4 ± 22.0 TgCO 2 e yr −1 (mean ± SD) during the study period. This prediction shows an upper bound of estimated tillage impacts, approximately 44% higher than the estimate that considered only spring tillage. Assuming that all corn and soybean fields under tillage were tilled twice a year (in both spring and autumn), we estimate that the increased tillage intensity in the period 2009–2016 increased GHG emissions by 20.5 ± 7.2 TgCO 2 e yr −1 , while the reduced tillage intensity before 2008 caused a reduction in GHG emissions by 7.0 ± 6.6 TgCO 2 e yr −1 . As a result, the corresponding changes in accumulated GHG flux induced by changes in tillage intensity in 1998–2016 were found to be approximately 78% higher than the estimates driven by spring tillage only (that is, 87.5 TgCO 2 e under tillage twice a year versus 49.1 TgCO 2 e under spring tillage only). 
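The two "percentage higher" statements in the fall-tillage comparison can be verified directly from the reported numbers. A short sketch, using only values quoted in the text:

```python
# Sanity check of the upper-bound comparisons for the twice-a-year tillage
# scenario (all values in TgCO2e or TgCO2e/yr, taken from the text).

def pct_higher(upper, lower):
    """Percentage by which `upper` exceeds `lower`."""
    return (upper / lower - 1.0) * 100.0

# Annual tillage impact: 92.4 (spring + fall) vs 64.3 (spring only)
print(round(pct_higher(92.4, 64.3)))  # ≈44, matching "approximately 44% higher"

# Cumulative impact of tillage intensity change: 87.5 vs 49.1
print(round(pct_higher(87.5, 49.1)))  # ≈78, matching "approximately 78% higher"
```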
Despite the uncertainty caused by the lack of detailed tillage frequency data, our estimates provide a lower and an upper bound on tillage practice impacts, which strengthened our conclusion that there is substantial GHG mitigation potential through managing tillage practices. Proportions of continuous no-till The spatially explicit annual tillage maps used to drive the model simulation in this study were developed by combining time-series crop type and distribution maps, soil erodibility ranking maps and the Crop Reporting District (CRD)-level survey of farmers during 1998–2016. Due to limited information, we assume that no-till has the highest likelihood of being adopted on the highly erodible lands in each CRD. Our data show that the continuous no-till areas account for 10% of total corn and soybean acreage during 1998–2016, with an additional 5% of corn and soybean acreage under tillage only once since 1998. These numbers are lower than, but comparable with, the values reported in a four-year survey conducted by USDA ARMS, which shows 18% of corn-planted area under continuous no-till/strip-till in 2016 and the three years preceding the survey year, and 21% for soybean in 2012 and the three years before that 6 . The reasons for a lower proportion of continuous no-till in our spatial tillage maps include: (1) these data cover a longer time period; (2) no-till in our data is defined as the absence of tillage practice and where strip-till is excluded. The long-term survey data we rely on to develop tillage maps in this study have multi-year survey records for some farms (for example, area percentage under no-till, conservation and conventional tillage in each year), but plot-level arrangement may change over time and remain difficult to track. 
Even though our data have a lower share of continuous no-till than those reported by the four-year survey, we may still overestimate the no-till share over an approximately two-decade time frame considering the spatial tillage arrangement within a farm. The assumption that we made to always assign no-till to highly erodible lands first may have slightly underestimated the historical tillage-induced GHG emissions across the US corn–soybean cropping system, while overestimating GHG mitigation due to tillage intensity change during 1998–2008 and underestimating the re-intensified tillage-induced GHG emissions during 2009–2016. The uncertainty in the estimated GHG consequences of tillage practices indicates the importance of implementing a long-term survey and identifying the places and duration of continuous no-till practices across the United States. Contributions of tillage-related machinery use Tillage intensity changes can affect GHG emission beyond the soil, among which CO 2 emission from agricultural machinery is a GHG source that should be considered. Due to the lack of data, we used two extreme-case scenarios to estimate the amount of machinery CO 2 emissions associated with tillage practice changes. We assume that: (1) machinery change from decreasing tillage intensity was all attributed to the shift from conventional tillage to no-till, and vice versa for increasing tillage intensity; and (2) increasing no-till areas were all converted from conventional tillage and reducing no-till areas were all converted to conventional tillage. The tillage conversion area was counted repeatedly every year after the conversion occurred to estimate the fuel-derived CO 2 emissions. Based on the above assumptions, the no-till practice implemented on additional corn and soybean acreage was found to be 55.6 Mha, cumulatively, during 1998–2008; while the reduced no-till acreage summed to 29 Mha during 2009–2016. Using machinery emission data obtained from Adler et al. 
35 (that is, 17.01 kgC ha −1 from ploughing versus 0 from no-till), this assumption will result in a reduced GHG emission of 0.315 TgCO 2 e yr −1 for the period 1998–2008, and an additional GHG source of 0.226 TgCO 2 e yr −1 for the period 2009–2016. Both scenarios show a minor contribution from the changed machinery emissions when compared with soil GHG emissions driven by tillage intensity changes. Outlook This study demonstrated a shift in national tillage intensity for the corn–soybean system during 1998–2016, and examined the role of tillage practices and their intensity changes in affecting GHG emissions from the US corn–soybean-planted soils. The findings suggest that the GHG mitigation benefit gained from the tillage intensity reduction during 1998–2008 has been offset by tillage practice reintensification since 2008. Without an effective strategy to control weeds, tillage intensity is expected to continue growing and so undermine the GHG mitigation achievements from other activities or other sectors. On the other hand, this study implies that the farmers’ choices in managing herbicide resistance, such as not applying glyphosate during consecutive growing seasons, using glyphosate during fewer years and combining it with other herbicides 14 , may help mitigate agricultural GHG emissions. Although we have assessed our estimation uncertainties caused by model assumptions, limited data about tillage practices and key parameter values, there remain knowledge gaps that hinder us from accurately predicting the consequences of tillage intensity changes. For example, it remains uncertain how sensitive crop growth and the coupled terrestrial carbon–nutrient–water cycling are to different tillage practices with a combination of environmental stressors including climate and emergence of herbicide-resistant weeds 30 , 36 . In addition, the environmental impacts of tillage differ between simplified and diversified cropping systems 37 , 38 . 
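The machinery-fuel bounds quoted in the Discussion above can be reproduced from the stated inputs. A minimal sketch, assuming the per-hectare emission factor of 17.01 kgC ha −1 for ploughing (zero for no-till) and converting carbon mass to CO 2 by the 44/12 molar-mass ratio:

```python
# Reproduction of the machinery-fuel CO2 bounds from the Discussion,
# assuming ploughing emits 17.01 kgC/ha (Adler et al.) and no-till emits 0.

C_TO_CO2 = 44.0 / 12.0  # molar-mass conversion from C to CO2

def machinery_flux(area_mha, years, emission_kgc_per_ha=17.01):
    """Average annual fuel CO2 (TgCO2e/yr) for a cumulative converted area."""
    kg_c = emission_kgc_per_ha * area_mha * 1e6   # Mha -> ha
    return kg_c * C_TO_CO2 / 1e9 / years          # kg -> Tg, then per year

# 1998-2008: 55.6 Mha cumulative no-till gain over 11 yr -> avoided emissions
print(round(machinery_flux(55.6, 11), 3))  # 0.315

# 2009-2016: 29 Mha cumulative no-till loss over 8 yr -> added emissions
print(round(machinery_flux(29.0, 8), 3))   # 0.226
```

Both values match the 0.315 and 0.226 TgCO 2 e yr −1 figures in the text, confirming they are averages over the 11-yr and 8-yr windows respectively.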
Research has also been very limited on how land conversion, crop rotation, and tillage practices interact in affecting agricultural GHG balance and climate mitigation. This increases uncertainty in accounting for carbon debt 39 , 40 , soil carbon storage change 41 , 42 and GHG balance as indicated in this work. We therefore suggest that further research is needed to examine the previously overlooked patterns and drivers responsible for rotation and crop-specific tillage intensity change through long-term experiments and modelling, and to improve our understanding of responses and feedback between agroecosystem management and the climate system. Methods Data information The database we used to characterize corn and soybean tillage practice intensity and HT crop adoption rate was mainly from a nationwide survey of farmer choices at the field (that is, plot) level of analysis. The data were purchased from Kynetec, the most prominent commercial surveying company in US agriculture. Coverage purchased was for the 1998–2016 crop years. Farmers were queried about cultivation practice intensity, including no-till, conservation tillage (for example, ridge till, mulch till) and conventional tillage (for example, mouldboard plough, chisel plough, disk harrow) in each USDA CRD. Although location is available at the county level, data collection procedures are intended to establish representativeness at the USDA CRD level of disaggregation. A CRD comprises about nine contiguous counties that are also similar in cropping conditions. For each of the corn and soybean crops and for each year over the period, about 4,000–4,500 independent farm operators were paid to complete the survey. Respondents were identified by various means, including through federal information regarding participation in government programmes. 
Data used in this manuscript were collected primarily through telephone calls, involving multiple attempts to ensure high participation rates and so representativeness. The company implements rigorous protocols to ensure that interviewers and interview supervisors are trained to implement consistent, standardized procedures for data collection, quality screening and subsequent data transcription/processing activities. The 1998–2011 data have been used elsewhere to study how glyphosate-tolerant soybean seed has influenced tillage practices 14 , genetically engineered crops have affected pesticide use 43 and confusion in herbicide choices 44 . The soybean data have been recently used to examine the relation between the spread of glyphosate-resistant weeds and reduction in conservation tillage in soybean production 17 . As far as we know, no alternative data source on actual annual tillage choices in the United States exists that covers the period 2005–2016, a critical time in light of seed technology innovations. From this database we obtained annual information on seed varieties, including state-level adoption of HT crops, and annual percentage of tillage types at the CRD level. Weed species data were obtained from . To demonstrate the comprehensive dynamics of acreage changes under different tillage practices, we use a unitless indicator to represent national and state-level tillage intensity (TI) by considering the area share of no-till, conservation and conventional tillage. First, we identified the maximum tillage intensity from the sum of the three ratios during the period 1998–2016 for each state or entire country. Second, we normalized the weight of acres under intensive tillage practice to those under less-intensive tillage groups. 
Note that the index was normalized by the maximum tillage intensity in a given area (that is, state or entire United States), which depicts the temporal changes of tillage intensity in each specific region and is not comparable between regions. $$\mathrm{TI}_{\max} = \max\left(\frac{\mathrm{A\_Till}_{\mathrm{CV},i,j}}{\mathrm{A\_Till}_{\mathrm{CS},i,j}} + \frac{\mathrm{A\_Till}_{\mathrm{CV},i,j}}{\mathrm{A\_Till}_{\mathrm{NT},i,j}} + \frac{\mathrm{A\_Till}_{\mathrm{CS},i,j}}{\mathrm{A\_Till}_{\mathrm{NT},i,j}}\right)$$ $$\mathrm{TI}_{i,j} = \frac{\mathrm{A\_Till}_{\mathrm{CV},i,j}/\mathrm{A\_Till}_{\mathrm{CS},i,j} + \mathrm{A\_Till}_{\mathrm{CV},i,j}/\mathrm{A\_Till}_{\mathrm{NT},i,j} + \mathrm{A\_Till}_{\mathrm{CS},i,j}/\mathrm{A\_Till}_{\mathrm{NT},i,j}}{\mathrm{TI}_{\max}} \times 100\%$$ where A_Till CV , A_Till CS and A_Till NT stand for corn or soybean acreage under conventional tillage, conservation tillage and no-till, respectively. Subscript i denotes corn or soybean, and subscript j denotes year. We harmonized 1 km time-series gridded cropland distribution and type maps for the contiguous United States 45 , 46 with the CRD-level percentage data of corn/soybean acreage that adopted the three tillage types to spatialize the annual tillage-specific area data 47 . The annual maps of various tillage practices have been used to force the model to assess their impacts on GHG fluxes. More details on tillage intensity change can be found in Supplementary Figs. 1 – 6 . Modelling approach We adopted a process-based land ecosystem model, DLEM, to assess the impacts of tillage practices on net fluxes of CO 2 and N 2 O from the agricultural soils in the US corn–soybean cropping system. 
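The two-step TI calculation above is straightforward to sketch in code. A minimal Python illustration, assuming one (conventional, conservation, no-till) acreage tuple per year for a single crop in a single region; the function names and acreage figures are hypothetical, not taken from the authors' ENVI/IDL code:

```python
# Sketch of the two-step tillage-intensity (TI) index defined above.
# Function names and acreage figures are illustrative only.

def tillage_ratios(a_cv, a_cs, a_nt):
    """Sum of pairwise acreage ratios: CV/CS + CV/NT + CS/NT."""
    return a_cv / a_cs + a_cv / a_nt + a_cs / a_nt

def ti_index(series):
    """Normalize each year's ratio sum by the period maximum (percent).

    `series` holds one (conventional, conservation, no-till) acreage
    tuple per year, for a single crop in a single region.
    """
    sums = [tillage_ratios(*acres) for acres in series]
    ti_max = max(sums)
    return [100.0 * s / ti_max for s in sums]

# Hypothetical acreages (million acres) for three years:
years = [(40.0, 25.0, 10.0), (30.0, 22.0, 18.0), (35.0, 20.0, 15.0)]
ti = ti_index(years)  # the year with the most intensive mix scores 100%
```

Because the denominator is the within-region period maximum, the index tracks temporal change but, as the authors note, is not comparable between regions.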
The DLEM is unique in incorporating multiple environmental drivers, grid-to-grid connectivity through river systems and simultaneous estimation of CO 2 , CH 4 and N 2 O fluxes 48 , 49 , 50 . Its agricultural module has been intensively calibrated and validated in upland and lowland croplands across countries and the entire world, and has been widely used to quantify the contributions of multifactor environmental changes to ecosystem functions 33 , 50 , 51 , 52 , 53 , 54 , 55 . We have validated the DLEM’s performance in simulating soil organic carbon (SOC) content and tillage impacts on SOC dynamics across the United States in our previous work 32 , 46 , 47 . In this study, we implemented additional model validation by comparing model estimates with measured N 2 O fluxes under no-till and conventional tillage practice at a corn-planting site in Tennessee (Supplementary Fig. 7 ). To distinguish the impacts of tillage practice change from other environmental drivers and human activities, we set up a series of simulation experiments by turning on and off tillage practice changes at a few time points (see Supplementary Information , section 3.4, for more details). To characterize other natural environmental changes and human practices and to force the model, several time-series gridded data sets have been developed at the same resolution spanning from 1850 to 2016. In addition to tillage practice, the model input data include daily climate condition (maximum, minimum and mean temperature, precipitation, short-wave solar radiation and relative humidity), monthly atmospheric nitrogen deposition, air CO 2 concentration, annual land use and cover change, and major agricultural management practices (such as crop-specific nitrogen fertilizer use, manure nitrogen application, tile drainage, crop rotation) at a resolution of 5 arcmin × 5 arcmin. More details regarding the input data can be found in Supplementary Information , section 3.3. 
Our analysis focused on the period 1998–2016, during which consistently collected annual tillage practice data for the corn–soybean cropping system were available. In experiment I, the model was driven by historically varying tillage intensity and other aforementioned time-series gridded input drivers across the contiguous United States. This experiment provided our ‘best estimates’ of biogenic GHG fluxes in the US corn–soybean cropping systems which were comparable to observations. We examined the GHG fluxes under tillage practices in the pre- and post-2008 time periods because tillage intensity in both corn- and soybean-planted lands was found to be the lowest in 2008. Experiments II and III fixed the location and cropland area under conservation and conventional till at the 1998 and 2008 levels, respectively. The difference between these two experiments and experiment I can be used to quantify the impacts of tillage intensity change (TIC) on GHG fluxes during the periods 1998–2008 and 2009–2016, respectively. We set up experiment IV to represent a hypothetical case in which the no-till practice was adopted in all the cropland area since 1998. The difference between experiments I and IV represented the impact of the historical tillage practice pattern in the corn–soybean system (Supplementary Table 1 and Supplementary Fig. 8 ). We calculated CO 2 fluxes as the year-by-year SOC changes excluding dissolved organic carbon (DOC) leaching and CH 4 fluxes. Because the CO 2 assimilation into crop biomass will be eventually consumed somewhere else, we only counted CO 2 emissions from soils in this study. Likewise, only soil direct N 2 O emissions were included for estimating the net GHG emissions here. Methane (CH 4 ) fluxes were not included when calculating the net GHG balance because their total amount was negligible in the corn/soybean-planted areas. 
We used 100 yr global warming potential to convert the fluxes of CO 2 and N 2 O from gram C and gram N into gram CO 2 e (refs. 1 , 50 ): $$F^{i}_{\mathrm{CO_2}} = (\mathrm{SOC}_{i-1} - \mathrm{SOC}_{i}) - F^{i}_{\mathrm{DOC\ leaching}} - F^{i}_{\mathrm{CH_4}}$$ (1) $$E^{i}_{\mathrm{CO_2}} = (F^{i}_{\mathrm{CO_2}}/12) \times 44$$ (2) $$E^{i}_{\mathrm{N_2O}} = (F^{i}_{\mathrm{N_2O}}/28) \times 44 \times 265$$ (3) $$E^{i}_{\mathrm{net}} = E^{i}_{\mathrm{CO_2}} + E^{i}_{\mathrm{N_2O}}$$ (4) where \(F^{i}_{\mathrm{CO_2}}\) and \(F^{i}_{\mathrm{N_2O}}\) are CO 2 and N 2 O fluxes in TgC yr −1 and TgN yr −1 , respectively, and \(E^{i}_{\mathrm{CO_2}}\) and \(E^{i}_{\mathrm{N_2O}}\) are CO 2 and N 2 O emissions in TgCO 2 e. Negative values represent GHG uptake from the atmosphere, whereas positive values represent GHG emissions from soils. In equation ( 1 ), we approximated CO 2 flux as the between-year SOC storage change minus DOC leaching and CH 4 emissions. We estimated the annual net fluxes of CO 2 and N 2 O in each simulation experiment, and the impacts of historical tillage practices and tillage intensity change were quantified as the differences between experiments as described above. For our ‘best-estimate’ simulations, tillage was implemented in spring when corn or soybean was planted. Generally, previous-year autumn tillage may also be adopted before spring tillage in part of the study areas, but it remains uncertain where they are and how often farmers have undertaken more than one tillage practice per year 56 . Therefore, we designed two types of experiments to quantify the impacts of with- and without-autumn tillage practice following the protocol described in our previous study 47 . More specifically, autumn tillage was assumed to have been implemented two weeks after harvest. 
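A hedged numerical sketch of equations (1)–(4); the variable names and flux values are illustrative, not the study's estimates:

```python
# Numerical sketch of equations (1)-(4); names and values are illustrative.

def co2_flux(soc_prev, soc_curr, f_doc_leaching, f_ch4):
    """Eq. (1): CO2-C flux from year-on-year SOC change, in TgC/yr."""
    return (soc_prev - soc_curr) - f_doc_leaching - f_ch4

def net_ghg(f_co2_c, f_n2o_n):
    """Eqs. (2)-(4): convert C and N fluxes to TgCO2e and sum them."""
    e_co2 = f_co2_c / 12.0 * 44.0           # C mass -> CO2 mass
    e_n2o = f_n2o_n / 28.0 * 44.0 * 265.0   # N mass -> N2O mass, x GWP-100
    return e_co2 + e_n2o

# Hypothetical year: 0.1 TgC SOC loss, small DOC and CH4 terms,
# plus 0.001 TgN of direct N2O emission.
f = co2_flux(soc_prev=1000.0, soc_curr=999.9, f_doc_leaching=0.02, f_ch4=0.01)
e = net_ghg(f, 0.001)  # positive: net emission to the atmosphere
```

The factor 44/12 converts carbon mass to CO 2 mass, 44/28 converts nitrogen mass to N 2 O mass, and 265 is the 100 yr global warming potential of N 2 O.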
In this study, we used simulations driven by spring tillage as our ‘best estimate’, while the experiments with corn–soybean land tilled twice annually (that is, both autumn and spring tillage) represented more intensive soil disturbance scenarios and provided the upper bound on tillage impact estimations. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The tillage maps used in this study were developed from a proprietary national survey conducted annually by Kynetec Group. The purchase agreement requires that the data remain confidential. Source data supporting the figures are provided with this paper. Code availability The code used to perform analyses in this study was generated in ENVI/IDL and is available upon request.
A new study combining survey data and cutting-edge computer modeling found that a growing trend in tillage intensity in U.S. corn and soybean production in recent years has led to an increase in greenhouse gas emissions from agricultural fields. The study, published recently in the academic journal Nature Food, drew on years of survey data that asked thousands of U.S. farmers about their tillage practices. The researchers then plugged the relevant data into sophisticated ecosystem models to see how tillage decisions affect soil emissions of greenhouse gases, including carbon dioxide and nitrous oxide. The survey data indicate farmers relied less on tillage during the period between 1998 and 2008, but that trend began to reverse around 2009, when tillage intensity started to rise. Chaoqun Lu, an Iowa State University associate professor of ecology, evolution and organismal biology and lead author of the study, said the growing resistance of weeds to the common herbicide glyphosate likely contributed to increased tillage. Genetically engineered herbicide-tolerant crops hit the agricultural scene in the late 1990s, and their adoption freed farmers from some of their reliance on tillage as a method of weed control. But growing numbers of weed species with resistance to the herbicide have emerged over the decades, reducing its effectiveness and making tillage a more attractive weed control option once again. And as tillage intensity grows, more of the carbon and nitrogen stored in the soil is released into the atmosphere in the form of greenhouse gases, Lu said. "One of the interesting pieces that we found in this study is tillage intensity has shifted from a declining trend to an increasing trend since 2008," Lu said. "Our regression analysis suggests this trend is correlated with the wide adoption of herbicide-tolerant crops before 2008 and emerging weed resistance after 2008. 
We can't assert a strict causal relationship, but regression analysis reveals a strong relationship between them." The survey asked questions about farmers' decisions on seed varieties and cultivation practice intensity. Survey topics included no-till, conservation tillage (e.g., ridge till, mulch till), and conventional tillage (e.g., moldboard plow, chisel plow, disk harrow). The data show no-till grew by roughly 12 million acres for corn production and nearly 17 million acres for soybeans between 1998 and 2008. But between 2009 and 2016, no-till acreage declined by nearly a half million acres for corn and by nearly 6 million acres for soybeans, according to the survey. Corn acreage under conservation tillage and soybean acreage under conservation and conventional tillage showed similar trends, first declining between 1998 and 2008 before climbing back to previous levels by 2016. Feeding the data into the land ecosystem models shows that gains in tillage intensity since 2009 have offset the greenhouse gas mitigation benefits achieved during the tillage declines from 1998 to 2008. Lu said the study uncovers a relationship between weed resistance, seed technology and greenhouse gas emissions that could lead to a better understanding of how farm practices can mitigate climate change. Her team's previous research showed that nitrous oxide emissions from farmland in the U.S. Corn Belt have increased in recent years, largely due to the widespread application of nitrogen fertilizers to agricultural land. The added nitrogen is partially used by crops, but the remainder either stays in soils or is lost to the environment. During this process, microorganisms living in soils consume nitrogen-containing compounds and give off nitrous oxide as a byproduct. Meanwhile, soil organic matter decomposes and partially converts into carbon dioxide. Both are powerful greenhouse gases with the potential to warm the climate. 
Intensive tillage practices disturb the soil, alter soil moisture and aeration status, and stir heavy crop residue into soils, which together change the production rates of soil greenhouse gases and allow more of them to escape, Lu said. Lu pointed to the use of alternative herbicides to combat glyphosate-resistant weeds, or using glyphosate in fewer consecutive years, as well as the diversification of crops beyond corn and soybeans as options to control weeds without increasing greenhouse gas emissions. "Without an effective strategy to control weeds, tillage intensity could continue to grow in the future and could undermine greenhouse gas mitigation achievements from other agricultural activities," Lu said.
10.1038/s43016-022-00488-w
Medicine
Cigarette damage to unborn children revealed in stem cell study
Baltasar Lucendo-Villarin et al, Modelling foetal exposure to maternal smoking using hepatoblasts from pluripotent stem cells, Archives of Toxicology (2017). DOI: 10.1007/s00204-017-1983-0
http://dx.doi.org/10.1007/s00204-017-1983-0
https://medicalxpress.com/news/2017-05-cigarette-unborn-children-revealed-stem.html
Abstract The liver is a dynamic organ which is both multifunctional and highly regenerative. A major role of the liver is to process both endo- and xenobiotics. Cigarettes are an example of a legal and widely used drug which can cause major health problems for adults and constitute a particular risk to the foetus if the mother smokes during pregnancy. Cigarette smoke contains a complex mixture of thousands of different xenobiotics, including nicotine and polycyclic aromatic hydrocarbons. These affect foetal development in a sex-specific manner, inducing sex-dependent molecular responses in different organs. To date, the effect of maternal smoking on the foetal liver has been studied in vitro using cell lines, primary tissue and animal models. While these models have proven to be useful, poor cell phenotype, tissue scarcity, batch-to-batch variation and species differences have led to difficulties in data extrapolation toward human development. Therefore, in this study we have employed hepatoblasts, derived from pluripotent stem cells, to model the effects of xenobiotics from cigarette smoke on human hepatocyte development. Highly pure hepatocyte populations (>90%) were produced in vitro and exposed to factors present in cigarette smoke. Analysis of ATP levels revealed that, independent of the sex, the majority of smoking derivatives tested individually did not deplete ATP levels below 50%. However, following exposure to a cocktail of smoking derivatives, ATP production fell below 50% in a sex-dependent manner. This was paralleled by a loss of metabolic activity and secretory ability in both female and male hepatocytes. Interestingly, cell depletion was less pronounced in female hepatocytes, whereas caspase activation was ~twofold greater, indicating sex differences in cell death upon exposure to the smoking derivatives tested. 
Introduction The liver is the body’s second largest organ, playing a major role in the processing of xenotoxicants, which include alcohol, drugs and environmental pollutants. Cigarettes are an example of a widely used drug which can cause major health problems for adults, and constitute a particular risk to the developing foetus. Cigarettes contain a complex mixture of over 7000 different compounds (Rodgman et al. 2009 ) which include nicotine and the polycyclic aromatic hydrocarbons (PAHs). Nicotine is primarily metabolised by cytochrome P450 2A6 (CYP2A6) in the liver (Benowitz et al. 1994 ) into several metabolites, of which cotinine represents approximately 70–80% (Messina et al. 1997 ). PAHs are incomplete combustion products first identified as carcinogenic constituents of coal tar (Phillips 1983 ) and charcoal-grilled foods (Phillips 1999 ; Boström et al. 2002 ; Rodgman et al. 2009 ). PAHs are also detected in placental tissues and umbilical cord blood of smokers (Perera et al. 2005a ; Al-Saleh et al. 2013 ), reaching the foetal liver from the maternal circulation. This exposes the developing foetus to harmful agents and leads to corresponding changes in gene expression (O’Shaughnessy et al. 2011 ). In addition to toxicant exposure, smoking also disrupts foetal oxygen and carbon monoxide balance, which can cause harmful effects, including impaired growth, premature birth, hormonal imbalances, increased predisposition to metabolic syndrome, liver disease and even death (Chen et al. 2006 ; Harvey et al. 2007 ; Rogers 2009 ; Mamsen et al. 2010 ; Fowler et al. 2011 , 2014 ; Hackshaw et al. 2011 ; Högberg et al. 2012 ; Behl et al. 2013 ; Filis et al. 2015 ). Moreover, it has been reported that maternal smoking affects the foetus in a sex-specific manner. 
For example, male offspring possess a higher risk of developing conduct disorders, whereas female offspring are predisposed to developing weight disorders and drug dependence (Weissman et al. 1999 ; Chen et al. 2006 ). In addition, maternal smoking induces sex-dependent molecular responses in the reproductive organs and the liver of the developing foetus (Fowler et al. 2008 ; O’Shaughnessy et al. 2011 ; Drake et al. 2015 ). To date, the effect of maternal smoking on the foetal liver has been studied in vitro using cell lines, primary tissue and animal models (Neumann 1986 ; Rao et al. 1988 ; Cho et al. 2006 ; Choi et al. 2015 ; Baxter 2009 ; Sanchez et al. 2011 ; Van Kesteren et al. 2013 ; Williams et al. 2014 ). While these models have proven to be informative, the scarcity of human tissue, the rapid loss of cell phenotype, batch-to-batch variation and species differences have led to difficulties in data extrapolation toward the human. Moreover, the mature nature of primary cells used in vitro impairs the study of foetal development ‘in the dish’. In contrast to the above sources, human hepatocytes derived from pluripotent stem cells have been proven to represent a reliable human model to study liver biology in detail (Szkolnicka et al. 2014 , 2016 ; Villarin et al. 2015 ). To study the disruptive effects of smoking on human development, we have employed this renewable cell model. Pluripotent stem cell derived hepatoblasts were produced at scale from male and female cell lines. Following this, hepatocyte differentiation was performed in the presence of cotinine and PAHs, and this led to sex-specific changes in cell biology. Methods and materials Cell culture and differentiation The identity of H9 and Man12 human embryonic stem cells (hESCs) was confirmed using short tandem repeat typing. hESCs were cultured and differentiated as previously described (Cameron et al. 2015 ). 
Maintenance of hESCs was performed on pre-coated laminin 521 (Biolaminin) in mTeSR1 (STEMCELL Technologies) in a humidified 37 °C, 5% CO 2 incubator. For differentiation, hESCs were plated onto a pre-coated blend of laminins 521 and 111 (at a 1:3 ratio). Differentiation was initiated at 40% confluence by replacing serum-free medium with endoderm differentiation medium: RPMI 1640 containing 1× B27 (Life Technologies), 100 ng/mL Activin A (PeproTech), and 50 ng/mL Wnt3a (R&D Systems). The medium was changed every 24 h for 72 h. On day 3, endoderm differentiation medium was replaced with hepatoblast differentiation medium, and this was renewed every second day for a further 5 days. The medium consisted of knockout (KO)-DMEM (Life Technologies), Serum replacement (Life Technologies), 0.5% Glutamax (Life Technologies), 1% non-essential amino acids (Life Technologies), 0.2% β-mercaptoethanol (Life Technologies), and 1% DMSO (Sigma). On day 8, differentiating cells were cultured in the hepatocyte maturation medium HepatoZYME (Life Technologies) containing 1% Glutamax (Life Technologies), supplemented with 10 ng/mL hepatocyte growth factor (PeproTech) and 20 ng/mL oncostatin M (PeproTech). On day 10 maturation medium was replaced with HepatoZYME supplemented with and without smoking derivatives (Sigma-Aldrich) for a further 8 days, with media replaced every 48 h. Immunofluorescence Cell cultures were fixed in 100% ice-cold methanol at −20 °C for 30 min. Subsequently, fixed cells were washed twice with PBS at room temperature. Cell monolayers were blocked with 0.1% PBS-Tween containing 10% BSA for 1 h, and the monolayers were incubated with primary antibodies diluted in PBS-0.1% Tween/1% BSA at 4 °C overnight (Supplementary Table 1). The following day, the primary antibody was removed, and the fixed monolayers were washed three times with PBS-0.1% Tween/1% BSA. 
Following this, the cells were incubated with the appropriate secondary antibody diluted in PBS/0.1% Tween/1% BSA for 1 h at room temperature and washed three times with PBS. Cultures were then mounted with PermaFluor aqueous mounting medium (Thermo Scientific) and counterstained with NucBlue Hoechst 33342 (Sigma-Aldrich). The cells were imaged with an Axio Observer Z1 microscope with LD PlanNeoFluar objective lenses (Carl Zeiss). This microscope was coupled to a Zeiss AxioCamMR3 camera used for image acquisition. The images were captured using Zeiss Axiovision SE 64 Rel 4.8 and analysed using Zeiss Axiovision software version 4.9.1.0. The percentage of positive cells (±standard deviation) was estimated from at least eight random fields of view. Albumin and α-fetoprotein ELISA hESC-derived hepatocyte protein secretion was measured by ELISA. Alpha-fetoprotein and albumin production was quantified using commercially available kits from Alpha Diagnostic International. Media were collected at day 18 of the differentiation process. Samples were run in duplicate and measured on a FLUOStar Omega multi-mode microplate reader (BMG Labtech). Protein production was expressed as either nanogram or microgram of protein per milliliter of medium per 24 h and per milligram of cellular protein [determined by the bicinchoninic acid (BCA) assay, Pierce] or as a percentage of secretory capacity normalised to the vehicle control. Levels of significance were measured by Student’s t test. The experiments are representative of five biological replicates. Cytochrome P450 assays CYP3A and CYP1A2 activity was measured in hepatocytes at Day 18 using pGlo technology (Promega) in accordance with the manufacturer’s instructions. CYP activity was expressed as either relative light units (RLUs) per milliliter of medium per milligram of protein (determined by the BCA assay, Pierce), or as a percentage of CYP activity normalised to the vehicle control. 
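The unit convention above (protein per millilitre of medium per 24 h per milligram of cellular protein) amounts to a simple rescaling of the raw ELISA concentration. A minimal sketch with hypothetical readings, not the study's measurements:

```python
# Sketch of the secretion normalization described above: the raw ELISA
# concentration rescaled to a 24 h window and to cellular protein.
# Input values below are hypothetical.

def normalize_secretion(conc_ng_per_ml, hours, cell_protein_mg):
    """ng of secreted protein per mL of medium per 24 h per mg protein."""
    return conc_ng_per_ml * (24.0 / hours) / cell_protein_mg

# 120 ng/mL accumulated over 48 h by a well containing 0.5 mg protein:
albumin = normalize_secretion(120.0, 48.0, 0.5)  # ng/mL/24 h/mg protein
```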
Levels of significance were measured by Student’s t test. The experiments are representative of five biological replicates. Cell health assays Cell health was assessed measuring ATP production and Caspase 3/7 activity at Day 18 employing pGlo technology (Promega) in accordance with the manufacturer’s instructions. Levels of expression of both markers were expressed as percentage of relative light units (RLUs) per milliliter of medium and normalised to the vehicle control. Levels of significance were measured by Student’s t test. The experiments are representative of five biological replicates. Detection of smoking derivatives in foetuses from smokers and non-smokers Cotinine was measured using LC–MS/MS methodology as follows. Cotinine and the internal standard 2 H 3 -cotinine were dissolved in methanol and diluted in pooled human plasma to give calibration standards in the range 1.5–500 ng/mL. Quality control samples were prepared in pooled human plasma at 2.5, 250 and 450 ng/mL cotinine. To a 10 µL aliquot of plasma, 10 µL (2 ng) of IS was added and 250 µL 0.1% formic acid in water. After mixing, the samples were kept on ice for 15 min. Following centrifugation at 14,800 rpm for 15 s, the plasma samples were applied to BondElut Plexa PCX cartridges (30 mg/1 mL, Crawford Scientific, UK) that had been pre-conditioned and equilibrated using 0.5 mL of methanol and 0.5 mL of 0.1% formic acid in water. The cartridges were washed with 0.5 mL 0.1% formic acid in water followed by 2 × 0.5 mL 95/5 methanol/0.1% formic acid in water and cotinine and the IS eluted with 0.5 mL 95/5 methanol/ammonium hydroxide. The eluate was evaporated to dryness under nitrogen at room temperature and the residue re-suspended in 60 µL 50/50/0.1 water/methanol/formic acid. Following centrifugation at 14,800 rpm for 5 min, 5 µL of the supernatant was injected onto the chromatograph. 
Chromatography was performed on a Thermo Surveyor (Thermo Scientific, UK) system using a 150 × 2.1 mm ACE 3µ C18-AR column (Hichrom, UK) maintained at 50 °C. The mobile phase consisted of 0.1% ammonium acetate (A) and methanol (B) and elution achieved with a linear gradient over 3 min from 10 to 100% B with a hold of 1 min at 100% B. The flow rate was 200 µL/min and the samples were maintained at 4 °C in the autosampler. Total run time was 8 min. A Thermo TSQ Quantum triple quadrupole mass spectrometer was used in positive electrospray ionisation mode for the detection of cotinine. Quantification was performed using single reaction monitoring (SRM) scan mode using the following transitions: cotinine m / z 177.0–80.1 and 2 H 3 -cotinine m / z 180.0–80.1. Flow injection analysis was used to optimise the MS/MS conditions as follows: spray voltage 4000 V, sheath gas pressure 60, auxiliary gas pressure 0, capillary temperature 375 °C, skimmer offset −10 V, collision pressure 1.7 mTorr and collision energy 25 V. Instrument control and peak integration and quantification were performed using Thermo Xcalibur software (v. 2.0.7 SP1). Weighted least squares linear regression with a weighting factor of 1/X was used to quantify the cotinine concentration in unknown samples by comparing peak area ratios (analyte/IS) with those obtained from a multi-level calibration standard curve. PAHs in human foetal livers were quantified in Fowler et al. ( 2014 ) and the results presented here are in a different format for display purposes. The collection of foetal material (Fowler et al. 2008 ) was approved by the National Health Service Grampian Research Ethics Committees (REC 04/S0802/21). Cell count Following compound exposure, cells were washed with 100 μL/well of 1×HBSS (Invitrogen) and fixed with 50 μL of 4% (wt/vol) paraformaldehyde (PFA) for 20 min at room temperature. 
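The 1/X-weighted calibration described above can be sketched in a few lines: peak-area ratios (analyte/IS) are regressed on nominal concentration with weights 1/x, so low-concentration standards are not swamped by high ones. The standard concentrations below follow the stated 1.5–500 ng/mL range, but the area ratios are hypothetical:

```python
# Sketch of 1/X-weighted least-squares calibration; area ratios are
# hypothetical, not the study's calibration data.

def weighted_linefit(x, y):
    """Weighted least squares for y = a + b*x with weights w = 1/x."""
    w = [1.0 / xi for xi in x]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    a = (swy - b * swx) / sw
    return a, b

conc = [1.5, 5.0, 50.0, 250.0, 500.0]        # calibration standards, ng/mL
ratio = [0.003, 0.010, 0.101, 0.499, 1.002]  # hypothetical analyte/IS ratios
intercept, slope = weighted_linefit(conc, ratio)

def back_calculate(peak_ratio):
    """Concentration of an unknown from its analyte/IS peak-area ratio."""
    return (peak_ratio - intercept) / slope
```

Unknowns are then back-calculated by inverting the fitted line, exactly as the text describes for comparing unknown peak-area ratios against the multi-level standard curve.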
Cells were permeabilized with 50 μL/well of 0.1% (vol/vol) Triton X-100 (Sigma-Aldrich) for 15 min, followed by a wash with 100 μL/well of 1×HBSS and an incubation with 50 μL/well of a solution containing 2 drops/mL of NucBlue Live ReadyProbes ® Reagent (Molecular Probes) in 1×HBSS for 5 min at room temperature. Following incubation, a final wash of 100 μL/well of 1×HBSS was performed. Fluorescent images were acquired using the Operetta high content analysis system with the Harmony High-Content Imaging and Analysis Software (PerkinElmer). 20 fields of view were acquired across the well to obtain an average representation of the well. Nuclei were quantified using the Acapella Image Analysis software (PerkinElmer, version: 4.1.1.118082™). The experiments are representative of five biological replicates. Statistical analysis Unless indicated, all data were obtained from at least five biological replicates and are presented as mean ± standard deviation (SD). Differences between control and treatment groups were tested by Student’s t test, where P < 0.05 is denoted as *, P < 0.01 is denoted as ** and P < 0.001 is denoted as ***. Results Measuring foetal exposure to smoking-derived contaminants The active components of cigarette smoke are well known (Rodgman et al. 2009 ) and for the purposes of these experiments we focused on cotinine, the major bioactive metabolite of nicotine, and polycyclic aromatic hydrocarbons (PAHs). Both cotinine and PAHs are significantly increased in the foetus by maternal smoking (Fig. 1 ). Following previous in vivo experimentation, we therefore hypothesised that these compounds represent a threat to the normal development of the foetal liver and we wished to model this in vitro using a reliable developmental model. To test this, we generated hepatoblasts and hepatocytes from male and female human embryonic stem cells (hESCs) using established methodology (Cameron et al. 2015 ). Fig. 
1 Concentrations of cigarette smoke derivatives in the second trimester human foetus. a Polycyclic aromatic hydrocarbons (PAHs) in livers from 10 control and 12 smoke-exposed human foetuses. PAHs in human foetal livers were quantified in Fowler et al. ( 2014 ) and the results presented here are in a different format for display purposes. b The predominant bioactive metabolite of nicotine, cotinine, in plasma from 16 control and 22 smoke-exposed foetuses. Generation of hepatoblasts and hepatocytes from male and female human embryonic stem cells Both hepatoblasts and hepatocytes were produced at scale from hESCs using a stagewise process (Cameron et al. 2015 ) to generate highly pure populations (Fig. 2 a). During cellular differentiation, cells of both sexes underwent similar morphological changes, culminating in typical hexagonal hepatocyte morphology (Fig. 2 b). Further characterisation of the hepatocyte populations demonstrated that albumin was expressed in 93% of female and 95% of male hepatocytes. In these populations HNF4α was detected in 87 and 85% of female and male hepatocytes. To determine if hepatocytes were polarised we examined E-Cadherin and Zona Occludens-1 (ZO-1) expression. E-Cadherin was expressed in 97 and 98% of female and male cells, whereas ZO-1 expression was detectable in 99 and 98% of female and male cells, respectively (Fig. 2 c). Fig. 2 Characterisation of stem cell derived hepatocytes. a Male and female human embryonic stem cells (hESC) were differentiated to hepatocytes employing a stepwise hepatocyte differentiation approach. b During differentiation, cells adopted different morphologies at each stage: definitive endoderm, hepatoblasts and hepatocytes. c Immunofluorescence was employed to examine the expression of the hepatocyte proteins HNF4α ( red ) and albumin ( green ), and epithelial markers E-cadherin ( green ) and Zona Occludens-1 ( green ). 
Morphological images were taken at ×10 magnification and the scale bar represents 200 μm. For each condition eight random fields of view, containing at least 400 cells, were counted. Generation of functional hepatocytes from male and female human embryonic stem cells Following basic characterisation of morphology and gene expression, hepatocyte metabolic and secretory capacity was studied using pGlo™ and ELISA technologies. Hepatocytes from both sexes demonstrated appreciable levels of CYP1A2 and CYP3A activity (Fig. 3a, b). Cytochrome P450 activity was greater in female hepatocytes, in line with a previous study (Villarin et al. 2015). Following this, albumin (ALB) and alpha-fetoprotein (AFP) secretion were measured. Female hepatocytes secreted 4.8 μg/mL/mg protein of ALB and 10.9 μg/mL/mg protein of AFP (Fig. 3c, d), whereas male hepatocytes secreted 2.1 μg/mL/mg protein of ALB and 2 μg/mL/mg protein of AFP (Fig. 3c, d). These experiments demonstrated that male and female hepatocytes displayed dependable performance and were suitable for the subsequent modelling experiments. Fig. 3 Stem cell derived hepatocytes display hepatocyte functions. Male and female hESC-derived hepatocytes were functionally characterised employing: a, b pGLO™ technology to study cytochrome P450 CYP3A and CYP1A2 function; c, d ELISA to measure the secretion of the hepatocyte proteins albumin and alpha-fetoprotein. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates. Determining the sex differences in hepatocyte biology following exposure to cotinine and PAHs Hepatocyte specification and maturation was performed in male and female hESC lines in the presence or absence of smoking components.
Cotinine was used at a concentration range of 1 to 300 nM, chrysene at 10 nM to 50 μM, fluorene at 10 nM to 100 μM, naphthalene at 10 nM to 1 mM and phenanthrene at 10 nM to 500 μM. Following 8 days of exposure, cell health was determined by ATP production and caspase activity. Following this, cell function was determined by measuring CYP3A and CYP1A2 activity. In order to measure the effectiveness of each component of cigarette smoke at inhibiting activity we used the half maximal inhibitory concentration (IC50). Analysis of ATP levels revealed that, independent of sex, the majority of smoking derivatives tested did not deplete ATP levels below 50%. The exception was phenanthrene, which reduced male ATP levels by 50% at 78 μM. Cell health was also studied by measuring caspase 3/7 activation in hepatocytes (Table 1; Supplementary Figs. 1 and 2). In these experiments we observed a sex-dependent response in caspase activation. While male and female hepatocytes were equally sensitive to cotinine, male hepatocytes were more sensitive to chrysene, fluorene and phenanthrene, whereas female hepatocytes were more sensitive to naphthalene. Subsequently, we studied the IC50 for CYP P450 activity following exposure to smoking derivatives (Table 1; Supplementary Figs. 1 and 2). Loss of CYP1A2 function was more pronounced in female hepatocytes following exposure to cotinine, chrysene and phenanthrene, whereas male hepatocytes were more sensitive to fluorene. Of note, naphthalene did not reduce CYP P450 activity but, instead, induced CYP1A2 activity in male and female hepatocytes (Table 1). Analysis of CYP3A function in response to the smoking derivatives also showed sex differences.
Both male and female hepatocytes responded in a similar fashion to cotinine, whereas loss of male CYP3A function was less sensitive to chrysene and more sensitive to fluorene, naphthalene and phenanthrene than that of female hepatocytes. Table 1 IC50 values of the tested compounds on cell health and cell metabolism Determining the sex differences in hepatocyte function following exposure to smoking derivatives CYP3A is a major phase 1 enzyme family in the third trimester of human foetal development. To study the effect of a cocktail composed of cotinine and PAHs, we incubated hepatocytes with these additives at the IC50 values for CYP3A calculated in Table 1. Phase contrast images of the cells after incubation with the drug cocktail revealed morphological deterioration in both sexes (Fig. 4a). Descriptively, female hepatocytes lost hallmark hepatocyte features and displayed more fibroblast-like structures, whereas male hepatocytes exhibited a more rounded morphology. Cell function in both hepatocyte populations was also studied in detail. CYP3A activity was reduced by 54% in female hepatocytes and by 38% in male hepatocytes following exposure (Fig. 4b). This was in contrast to CYP1A2 activity, which was reduced to similar extents in female and male hepatocytes (Fig. 4c). We also studied the effect that compound incubation had on protein secretion. In accordance with cell CYP P450 function, secretion of both ALB and AFP was reduced in male and female hepatocytes following exposure to the smoking derivatives (Fig. 4d, e). Fig. 4 Cell morphology and metabolic activity following smoking component exposure. a Phase contrast images reveal a deterioration in cell morphology in the presence of the mixture of drugs for 8 days compared with cells in the presence of the vehicle control. b, c pGLO™ technology was employed to measure cytochrome P450 activity. d, e ELISA was employed to study the secretion of albumin and alpha-fetoprotein.
The images were taken at ×10 magnification and the scale bar represents 200 μm. Levels of significance were measured by Student's t test. The experiments are representative of five biological replicates. Determining the sex differences in cell biology following exposure to smoking derivatives In addition to cell function, cell health was studied by measuring ATP levels and caspase 3/7 activity. ATP levels were reduced to a greater extent in female hepatocytes than in male hepatocytes (Fig. 5a), whereas caspase activation was greater in female hepatocytes (~twofold increase) than in male hepatocytes (~1.2-fold) (Fig. 5b). Following these experiments we examined the number of hepatocytes that remained in culture post exposure. Interestingly, male hepatocytes were depleted by 40%, whereas female hepatocytes were depleted by 30%. Taken together, these data suggest that male hepatocytes were more likely to be detaching from the matrix and undergoing necrosis, whereas female hepatocytes were undergoing cell dedifferentiation and apoptosis following exposure. Fig. 5 Measurement of cell health in female and male hepatocytes following exposure to the cocktail for 8 days. a Cell viability was studied by measuring the levels of ATP. b Cell apoptosis was measured using caspase 3/7 activity. c Cell paint technology was employed to analyse the number of cells attached to the matrix. Discussion The ability to study the effects of maternal smoking on the unborn foetus has traditionally relied on material from elective terminations, animal models and a variety of cell lines. While these approaches have generated highly informative datasets, they suffer from some significant drawbacks, which include tissue scarcity, individual variability and loss of cell phenotype. Therefore, to study the effects of maternal smoking on human foetal liver development, renewable cell-based models are required which can be delivered from defined genetic backgrounds.
Encouragingly, pluripotent stem cell based liver models have proven effective in modelling human liver exposure to drugs in the past (Medine et al. 2013; Villarin et al. 2015; Szkolnicka et al. 2014, 2016). In these studies we have employed pluripotent stem cells to produce human hepatoblasts at scale and screen for developmental perturbation over an eight-day time course. Cigarettes contain a complex mixture of chemicals. These compounds pose a risk to foetal development, which includes increased risk of intrauterine growth restriction, small-for-gestational-age birth and preterm delivery, amongst others (Perera et al. 1998, 2005a, b; Dejmek et al. 2000). To identify the major players for our modelling experiments we employed gas chromatography and mass spectrometry on foetal plasma and livers to identify specific cigarette smoke components. From these experiments we identified cotinine and four PAHs present in the circulation of foetuses whose mothers smoked. We used this information to study the effects of those derivatives on hepatocyte differentiation from male and female hESCs. On the whole, exposure to the compounds singly did not have a detrimental effect on hepatocyte biology, but in combination they displayed a more marked effect. Following exposure, female hepatocytes displayed greater residual cell numbers, higher caspase 3/7 activity and lower ATP levels than male hepatocytes. This suggested that female hepatocytes were likely undergoing apoptosis during cell dedifferentiation, whereas male hepatocytes appeared to be necrotic, as they detached from the extracellular matrix. Whether these observations are a consequence of different levels of metabolic enzyme function in the hepatocyte populations, or the effects manifest due to other sex-dependent processes, will be the subject of future experimentation. The sex differences reported in these studies are consistent with previous studies of the effects of maternal smoking on foetal development. Recently, Filis et al.
demonstrated that in male foetuses maternal smoking affected pathways regulating liver fibrosis and cirrhosis, whereas in female foetuses glucose metabolism was more affected (Filis et al. 2015). Sex-specific responses to maternal smoking are also reflected in the balance of foetal endocrine signals (O'Shaughnessy et al. 2007, 2011) and in the development of other organs, including the gonads (O'Shaughnessy et al. 2007; Fowler et al. 2014; Drake et al. 2015) and the placenta (Gabory et al. 2013). While sexual dimorphism exists in the expression of these pathways, there are also studies which indicate sex-independent responses leading to disease in the adult (Allina et al. 2011). In summary, our approach has shown that pluripotent stem cell derived hepatoblasts and hepatocytes represent a useful tool to model foetal liver biology 'in the dish', providing valuable information on sex differences that occur following exposure to components of cigarette smoke. Change history 03 October 2017 During manuscript proofing, the following sentence was not deleted in the section "Results" at the end of the paragraph: "Both male and female hepatocytes responded in a similar fashion to cotinine, whereas male hepatocyte function was more sensitive to chrysene, fluorene and naphthalene than female hepatocytes".
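The IC50 values reported in Table 1 summarise dose-response experiments like those shown in Supplementary Figs. 1 and 2. As a purely illustrative sketch (not the authors' analysis pipeline, which fits full dose-response curves), a half-maximal concentration can be read off a monotone dose-response series by log-linear interpolation between the two doses that bracket the 50% level; the dose and response values below are hypothetical:

```python
import math

def ic50(doses, responses):
    """Estimate IC50 by log-linear interpolation between the two doses
    that bracket the 50% response level. `doses` must be ascending and
    `responses` given as % of vehicle control (100% = no inhibition)."""
    points = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(points, points[1:]):
        if r0 >= 50.0 >= r1:  # crossing of the half-maximal level found
            frac = (r0 - 50.0) / (r0 - r1)
            log_ic50 = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_ic50
    return None  # response never fell below 50%: no IC50 in the tested range

# Hypothetical dose-response series (concentrations in uM)
doses = [0.01, 0.1, 1, 10, 100, 500]
responses = [98, 95, 85, 70, 45, 20]
est = ic50(doses, responses)  # crossing lies between 10 and 100 uM
```

With these hypothetical numbers the estimate falls at roughly 63 µM; a sigmoidal (four-parameter logistic) fit over the full curve, as typically done in practice, would give a more robust value.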
Chemicals found in cigarette smoke have been shown to damage foetal liver cells. Scientists say the potent cocktail of chemicals in cigarettes is particularly harmful to developing liver cells and affects male and female foetuses differently. Researchers - led by the University of Edinburgh - have developed a novel way to study the effects of maternal smoking on liver tissue using embryonic stem cells. The stem cell technique will provide important information about the long-term effects of maternal cigarette smoking, say experts. The liver is vital in clearing toxic substances and plays a major role in regulating metabolism. Smoking cigarettes - which contain around 7000 chemicals - can damage foetal organs and may do lasting harm. Scientists used pluripotent stem cells - non-specialised cells that have the distinctive ability to be able to transform into other cell types - to build foetal liver tissue. Liver cells were exposed to harmful chemicals found in cigarettes, including specific substances known to circulate in foetuses when mothers smoke. The study showed that a chemical cocktail - similar to that found in cigarettes - harmed foetal liver health more than individual components. Findings also showed that cigarette chemicals damage the liver differently in male and female foetuses, with male tissue showing liver scarring and female tissue showing more damage to cell metabolism. The study was carried out in collaboration with the Universities of Aberdeen and Glasgow and is published in the journal Archives of Toxicology. Dr David Hay from the University of Edinburgh's Centre for Regenerative Medicine, said: "Cigarette smoke is known to have damaging effects on the foetus, yet we lack appropriate tools to study this in a very detailed way. This new approach means that we now have sources of renewable tissue that will enable us to understand the cellular effect of cigarettes on the unborn foetus." 
Professor Paul Fowler, Director of the Institute of Medical Sciences at the University of Aberdeen, said: "This work is part of an ongoing project to understand how cigarette smoking by pregnant mothers has harmful effects on the developing foetus. These findings shed light on fundamental differences in damage between male and female foetuses."
10.1007/s00204-017-1983-0
Medicine
Researchers restore function in a gene that can suppress liver cancer and enhance immunotherapy
Combining p53 mRNA nanotherapy with immune checkpoint blockade reprograms the immune microenvironment for effective cancer therapy, Nature Communications (2022). DOI: 10.1038/s41467-022-28279-8 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-022-28279-8
https://medicalxpress.com/news/2022-02-function-gene-suppress-liver-cancer.html
Abstract Immunotherapy with immune checkpoint blockade (ICB) has shown limited benefits in hepatocellular carcinoma (HCC) and other cancers, mediated in part by the immunosuppressive tumor microenvironment (TME). As p53 loss of function may play a role in immunosuppression, we herein examine the effects of restoring p53 expression on the immune TME and ICB efficacy. We develop and optimize a CXCR4-targeted mRNA nanoparticle platform to effectively induce p53 expression in HCC models. Using p53 -null orthotopic and ectopic models of murine HCC, we find that combining CXCR4-targeted p53 mRNA nanoparticles with anti-PD-1 therapy effectively induces global reprogramming of cellular and molecular components of the immune TME. This effect results in improved anti-tumor effects compared to anti-PD-1 therapy or therapeutic p53 expression alone. Thus, our findings demonstrate the reversal of immunosuppression in HCC by a p53 mRNA nanomedicine when combined with ICB and support the implementation of this strategy for cancer treatment. Introduction Loss of function in tumor suppressors is a driving force in tumorigenesis and the development of therapeutic resistance. The p53 tumor suppressor gene, a master regulator of cell cycle arrest, apoptosis, senescence, and other cellular pathways 1 , is frequently mutated in a myriad of human cancers, including hepatocellular carcinoma (HCC). Beyond cell autonomous tumor-suppressive effects, increasing evidence indicates that p53 protein can also regulate the immune tumor microenvironment (TME) by modulating interactions of tumor cells with immune cells 2 , 3 , 4 , 5 , 6 . 
For example, p53 has been shown to induce an antitumor immune response via transcriptional regulation of genes encoding for key cytokines (e.g., TNF-α, IL-12, and IL-15) 7 , 8 , 9 , chemokines (e.g., CCL2, –20, and –28, and CXCL1, –2, –3, –5, and –8) 10 , 11 and pathogen recognition (e.g., Toll-like receptors, TLRs) 12 , 13 , all of which result in recruitment and activation of immune cells. Genetic restoration of p53 could induce the activation of myeloid cells to promote tumor antigen-specific adaptive immunity 14 and upregulate the NKG2D ligands on senescent tumor cells for activation of natural killer (NK) cells 15 . p53 may also play an important role in the suppression of pro-tumorigenic M2-type tumor-associated macrophage (TAM) polarization, thus facilitating antitumor immunity 16 , 17 . Moreover, recent studies suggest that immunogenic cancer cell death induced by cytotoxic agents may be associated with activation of the p53 pathway 18 , 19 . Despite these advances in understanding the role of p53, developing therapeutic approaches that directly and effectively address the loss of p53 function and its role in immunosuppression and immunotherapy resistance in HCC remains an elusive goal. HCC is the most prevalent liver cancer with a high mortality rate and dismal prognosis 20 , 21 , 22 . Enhancing anti-tumor immunity using immune checkpoint blockade (ICB), including anti-CTLA-4, anti-PD-1 (aPD1), and anti-PD-L1 (aPD-L1) antibodies, has demonstrated the potential to transform the therapeutic landscape of many cancers including HCC. However, responses are seen only in a limited fraction of patients, and the majority of cancer patients do not benefit from the treatment. This may be mediated in part by insufficient tumor immunogenicity and the immunosuppressive TME.
Different strategies are actively being developed to improve ICB therapy in HCC, with a major focus on combining ICB with other existing therapies (such as anti-VEGF therapy), which could significantly increase anti-tumor immunity. Such combinations have been shown to improve anti-tumor efficacy in animal models and increase the survival of patients in clinical trials 23 , 24 , 25 , 26 . However, a majority of HCC patients still show no response, and thus new combinatorial strategies are desperately needed. In this work, we address the unmet need to implement p53 therapy and potentiate ICB response in HCC. We report a targeted mRNA nanoparticle (NP) platform designed to induce p53 expression and reprogram the TME, which we test in proof-of-concept studies in combination with ICB in p53 -null murine HCC models. We optimize the p53 mRNA NP platform for HCC targeting, evaluate its therapeutic efficacy in p53 -null HCCs growing in orthotopic and ectopic sites (alone or with aPD1 antibody), and study changes in the TME. This unique combinatorial strategy safely and effectively inhibits tumor growth in vivo, while prolonging survival and reducing ascites and metastases. Thus, combining p53 mRNA nanotherapy with ICB immunotherapy could become a transformative approach for the treatment of HCC and potentially other cancers involving p53 deficiency. Results Engineering and optimization of CXCR4-targeted mRNA NPs We previously developed a robust self-assembly strategy for formulating lipid-polymer hybrid NPs for mRNA delivery 27 , 28 , composed of the ionizable lipid-like compound G0-C14 for mRNA complexation, a biocompatible poly(lactic-co-glycolic acid) (PLGA) polymer for forming a stable NP core to carry the G0-C14/mRNA complexes, and a lipid-poly(ethylene glycol) (lipid-PEG) layer for stability. We here engineered the hybrid NPs (Fig. 1a ) for selective HCC targeting and high mRNA transfection efficiency.
To improve HCC targeting, we modified the NPs with the targeting peptide CTCE-9908 (KGVSLSYRCRYSLSVGK; referred to as CTCE), which is specific to CXCR4, a chemokine receptor that is upregulated in cancer cells and is a validated selective target in HCC 29 , 30 . For comparison, we also prepared non-targeted NPs using a scrambled peptide (LYSVKRSGCGSRKVSYL; referred to as SCP). The CTCE or SCP peptide was first conjugated to 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[maleimide(polyethylene glycol)-3000] (DSPE-PEG-Mal) by the thiol-maleimide Michael addition click reaction, with a high chemical yield (≥82%). The chemical structures of DSPE-PEG-CTCE and DSPE-PEG-SCP were confirmed by 1 H-NMR analysis (Supplementary Fig. 1 ). To optimize the targeting efficacy of the mRNA NPs, we examined the effect of CTCE peptide surface density on the cellular uptake of RIL-175 murine HCC cells. As shown in Fig. 1b , CTCE-conjugated enhanced green fluorescent protein (EGFP) mRNA NPs (referred to herein as CTCE-EGFP NPs) showed significantly greater cellular uptake compared to non-targeting SCP EGFP mRNA NPs (referred to as SCP-EGFP NPs) due to the active targeting ability of the CTCE peptide towards HCC cells. We found that 5% or 6% CTCE peptide provided maximum cellular uptake in RIL-175 cells while maintaining NP stability. The uptake of the 5% CTCE-EGFP NPs was >15-fold higher than that of the 5% SCP-EGFP NPs, which was also confirmed by confocal fluorescence microscopy in RIL-175 cells (Fig. 1c ). The 5% peptide density was selected for further analyses. Fig. 1: CXCR4-targeted nanoparticles (NPs) for p53 mRNA delivery to hepatocellular carcinoma (HCC). a Schematic of CXCR4-targeted p53 mRNA NPs and combinatorial strategy using anti-PD-1 therapy to reprogram the immunosuppressive tumor microenvironment for effective treatment of p53-deficient HCC. 
The combination of CTCE-p53 NPs and PD-1 blockade effectively and globally reprogrammed the immune TME of HCC, as indicated by activation of CD8 + T cells and NK cells, favorable polarization of TAMs towards the anti-tumor phenotype, and increased expression of anti-tumor cytokines. b Flow cytometric analysis of cellular uptake of CTCE-EGFP mRNA NPs with different CTCE peptide densities versus SCP-EGFP mRNA NPs with 5% SCP density in RIL-175 HCC cells ( n = 3 cell samples/group). c Confocal fluorescence imaging of RIL-175 cell uptake of SCP-Cy5-Luciferase (Luc) mRNA NPs versus CTCE-Cy5-Luc mRNA NPs after 4 h treatment. Scale bar: 100 µm. d Effect of different cationic lipid-like materials G0-Cn on the transfection efficacy of Luc-mRNA NPs (mRNA concentration: 0.25 μg/mL, n = 3 samples/group). e TEM image of CTCE-mRNA NPs. Scale bar, 200 nm. f Average particle size and zeta potential of the p53 NPs, SCP-p53 NPs, and CTCE-p53 NPs ( n = 3 samples/group). Data in b , d , and f are presented as mean values ± SD. For c and e : a representative image from one of five independent fields of view in a single experiment. Source data are provided as a Source Data file. To identify efficacious ionizable lipid-like materials for mRNA complexation and translation, a series of G0-Cn compounds (Supplementary Fig. 2a ) was synthesized through ring opening of epoxides by generation 0 poly(amidoamine) (PAMAM) dendrimers (Supplementary Fig. 2b ) and screened using a model luciferase mRNA. The chemical structures of G0-Cn were confirmed by 1 H-NMR spectra (Supplementary Fig. 3 ). Analysis of luciferase-mRNA NP transfection results (Fig. 1d and Supplementary Fig. 4 ) showed that G0-C8 had the most effective mRNA transfection ability and was thus chosen as the ionizable lipid-like material for formulating targeted mRNA NPs for in vivo treatments.
To explore the possible mechanisms behind this, we studied the mRNA encapsulation efficiency and cellular uptake of the mRNA NPs formulated with different G0-Cn. As shown in Supplementary Table 1, G0-Cn had negligible effect on the mRNA encapsulation efficiency. However, their effect on cellular uptake appeared to play an important role in mRNA delivery efficacy (Supplementary Fig. 5 ), with the G0-C8 NP showing higher cellular uptake than the other G0-Cn NPs. The hybrid CTCE-conjugated p53 mRNA NPs (referred to hereafter as CTCE-p53 NPs) were ~110 nm in size as measured by dynamic light scattering (DLS), and their spherical and uniform structure was confirmed by transmission electron microscopy (TEM) imaging (Fig. 1e, f ). The addition of the targeting ligand (CTCE) and the scrambled peptide (SCP) to the NP surface slightly increased the particle size as well as the zeta potential, due to the positive charges of both peptides (Fig. 1f ). In addition, we characterized all the nanoformulations used in this study, including Luc mRNA NPs, GFP mRNA NPs, and p53 mRNA NPs. As shown in Supplementary Fig. 6, all the nanoformulations used in this study exhibited similar average size and zeta potential. The organic solvent DMF (dimethylformamide) had no effect on the integrity or stability of EGFP mRNA, either as naked mRNA or encapsulated in NPs (Supplementary Fig. 7a ). Moreover, we detected no obvious changes in the size of p53-mRNA NPs over a period of 96 h in the presence of 10% serum, suggesting the in vivo stability of our targeted mRNA NPs (Supplementary Fig. 7b ). To further evaluate the stability of the p53-mRNA NPs, cell viability was measured in RIL-175 cells after treatment with p53-mRNA NPs pre-incubated with 10% serum for various times up to 96 h (at 37 °C). Comparable cell viability in all the groups (Supplementary Fig. 8 ) further supported the stability of these p53-mRNA NPs.
Notably, pH played a crucial role in complexing mRNA for the ionizable G0-C8, and effective mRNA complexation with G0-C8 was achieved only in acidic conditions. As shown by agarose gel electrophoresis assay at pH 7.4 (Supplementary Fig. 9 ), G0-C8 could not fully complex mRNA even at a weight ratio of 200 G0-C8/mRNA. In comparison, when the pH was adjusted to 3.5 in citrate buffer solution, the mRNA could be completely complexed at a weight ratio of G0-C8/mRNA as low as 2. In addition, this ratio is favorable for mRNA delivery in vivo because it reduces the need to use ionizable lipid-like materials and may thus improve the safety of the mRNA NPs. A cytotoxicity assay was further performed to evaluate the in vitro cytotoxicity of G0-C8/EGFP mRNA (Supplementary Fig. 10 ), which showed ~100% viability at various ratios of G0-C8/mRNA from 1 to 20 in RIL-175 cells. In addition, in vitro cytotoxicity was further examined in both RIL-175 and normal hepatocyte THLE-3 cells. The near-100% cell viability at all tested concentrations in both cell lines (Supplementary Fig. 11 ) indicated the safety of our mRNA NPs. CXCR4-targeting improves mRNA NP delivery to HCC cells in vitro and in vivo We then investigated the CTCE-targeting effect of our mRNA NPs on cellular uptake and mRNA transfection in p53 -deficient murine HCC cells (RIL-175) using flow cytometry. We first examined the transfection efficacy of the targeted mRNA NPs and non-targeted mRNA NPs in vitro using EGFP-mRNA as the model mRNA, by counting EGFP-positive cells (Fig. 2a ). Both SCP-EGFP NPs and CTCE-EGFP NPs showed markedly higher fractions (>90%) of EGFP-positive cells after mRNA NP-transfection compared to controls (free/naked EGFP mRNA). Notably, the CTCE-EGFP NPs induced a ~4.5-fold higher mean fluorescence intensity in cells compared to the SCP-EGFP NPs (Supplementary Fig. 12 ). The higher transfection efficiency of CTCE-EGFP NPs was confirmed by fluorescence microscopy (Fig. 2b ). 
To further verify the selectivity of the CTCE-mRNA NPs, we also examined the targeting effect of the CTCE peptide by blocking the CXCR4 receptor on the RIL-175 cell surface using free CTCE peptide (Supplementary Fig. 13 ). Upon treatment with CTCE peptide, the fluorescence intensity of RIL-175 cells co-incubated with CTCE-Cy5-Luciferase mRNA NPs was significantly lower than that without blocking. Moreover, we generated a CXCR4-knockout (CXCR4-KO) RIL-175 cell line by CRISPR/Cas9 editing and performed in vitro cellular uptake studies. As evidenced by Western blotting (WB, Supplementary Fig. 14 ), CXCR4 expression in the RIL-175 cells was effectively knocked out by CRISPR/Cas9 editing. The in vitro cellular uptake study (Supplementary Fig. 15 ) showed that the fluorescence intensity of CXCR4-KO RIL-175 cells co-incubated with CTCE-Cy5-Luciferase mRNA NPs was significantly lower than that of the sgControl RIL-175 cells (without CXCR4-KO). These results demonstrate the CXCR4-mediated active targeting effect of the CTCE-NPs on the RIL-175 cell line. Fig. 2: CXCR4-mediated HCC-targeting of CTCE-mRNA NPs in vitro and in vivo. a Flow cytometry analysis of in vitro transfection efficiency (%GFP positive cells) of SCP-EGFP NPs vs. CTCE-EGFP NPs in p53 -null RIL-175 cells. b Immunofluorescence of RIL-175 cells transfected with SCP-EGFP NPs vs. CTCE-EGFP NPs (magnification, ×50). Cells were treated with SCP-EGFP NPs or CTCE-EGFP NPs for 12 h and further incubated for 24 h with fresh cell culture medium (mRNA concentration: 0.5 μg/mL). Scale bar: 100 µm. c Circulation profile of free Cy5-Luc mRNA, SCP-Cy5-Luc NPs, and CTCE-Cy5-Luc NPs (mRNA dose: 350 μg/kg) after i.v. administration. d , e Quantification of biodistribution of free Cy5-Luciferase mRNA, SCP-Cy5-Luciferase (Luc) NPs, and CTCE-Cy5-Luc NPs in orthotopic ( d ) and ectopic ( e ) HCC grafts ( n = 3 mice/group) at 24 h post-i.v. injection (mRNA dose: 350 μg/kg).
f Western blot analysis of p53 protein expression after treatments (mRNA concentration: 0.5 μg/mL). β-actin was used as the loading control. g Immunofluorescence for p53 in RIL-175 cells after treatment with saline or CTCE-p53 NPs (p53 mRNA concentration: 0.25 μg/mL). Scale bar: 50 µm. h RIL-175 cell growth rate after treatment with control (saline), CTCE-EGFP NPs, empty NPs, SCP-p53 NPs, or CTCE-p53 NPs (mRNA concentration: 0.5 μg/mL) ( n = 3 cell samples/group). i RIL-175 cell viability after treatment with control (saline), empty NPs, control NPs (CTCE-EGFP NPs), or CTCE-p53 NPs with different mRNA concentrations (0.0625–0.75 μg/mL) ( n = 3 cell samples/group). Statistical significance was calculated using one-way ANOVA with a Tukey post-hoc test. Data in c , d , e , h , and i are presented as mean values ± SD. ** P < 0.01; *** P < 0.001; **** P < 0.0001. For b and g : a representative image from one of five independent fields of view in a single experiment. For f : this experiment was repeated five times independently with similar results. Source data are provided as a Source Data file. Full size image Next, intracellular uptake of the mRNA NPs in RIL-175 cells was examined by confocal fluorescence microscopy after incubating Cy5-labeled Luciferase-mRNA NPs (CTCE-Cy5-Luc NPs) with RIL-175 cells for 0.5, 2, 4, or 6 hrs. The intensity of red fluorescence from Cy5-Luc mRNA in the cells increased in proportion to incubation time (Supplementary Fig. 16 ), suggesting the successful intracellular delivery of our mRNA NPs. To test the efficiency of CXCR4-mediated HCC-targeting of CTCE-mRNA NP delivery in vivo, we next conducted pharmacokinetics (PK) and biodistribution (BioD) studies. We first evaluated PK parameters by administering targeted or non-targeted Cy5-Luc-mRNA NPs or free Cy5-Luc-mRNA into healthy C57Bl/6 mice via the tail vein. The PK results showed that free mRNA was rapidly cleared, with a dramatic decrease to ~8% after 15 min (Fig. 2c ). 
In contrast, similar to Cy5-Luc NPs without peptide modification, both SCP-Cy5-Luc NPs and CTCE-Cy5-Luc NPs showed prolonged mRNA circulation, with >30% of the Cy5-Luc-mRNA still circulating after 60 min. After 4 h, nearly 20% of both NPs were still detectable, while most free mRNA was cleared within 1 h. This result also indicated that the presence of the targeting moiety (i.e., CTCE) did not alter the PK profile of the mRNA NPs. We then evaluated the BioD and tumor accumulation of these NPs in both orthotopically and ectopically (s.c.) grafted RIL-175 HCCs. Tumor-bearing mice were administered free Cy5-Luc-mRNA, non-targeted SCP-Cy5-Luc-mRNA NPs, or targeted CTCE-Cy5-mRNA NPs by tail vein. As shown in Fig. 2d, e and Supplementary Fig. 17 , in both HCC models, both NPs exhibited considerable intratumoral accumulation, while the fluorescent signal of free Cy5-mRNA was barely detectable in the tumor tissue 24 h post-injection. Notably, there was ~1.5 and 2.7-fold greater intratumoral accumulation of CTCE-targeted NPs than non-targeted NPs in the orthotopic and ectopic models, respectively. Taken together, the evidence suggests that CTCE-targeted NPs demonstrated significantly enhanced cellular uptake, mRNA transfection efficiency, and intratumoral accumulation compared to non-targeted NPs irrespective of tumor site/stroma, supporting the use of CTCE peptide ligands for selective HCC cell targeting. CXCR4-targeted mRNA NP increases p53 protein expression and reduces HCC cell viability in vitro To determine whether the targeted p53-mRNA NPs could induce the expression of therapeutic p53 in p53 -null RIL-175 cells, we first checked p53 protein expression after treatment with CTCE-p53 NPs versus SCP-p53 NPs. Both WB and immunofluorescence (IF) staining (Fig. 2f, g ) confirmed the successful restoration of p53 expression in RIL-175 cells. The WB data further showed that targeted NPs exhibited enhanced level of p53 expression compared with non-targeted NPs. 
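The single-time-point clearance figures above (free mRNA falling to ~8% of the injected dose by 15 min, NP-formulated mRNA still >30% at 60 min) can be translated into approximate circulation half-lives if one assumes simple mono-exponential (first-order) clearance. That assumption, and the calculation below, are ours for illustration; the paper reports the measured circulation profiles rather than fitted half-lives:

```python
import math

def half_life(t, fraction_remaining):
    """Given C(t) = C0 * exp(-k t), recover the elimination constant k
    and the half-life ln(2)/k from a single (time, fraction) point."""
    k = -math.log(fraction_remaining) / t
    return math.log(2) / k

def fraction_at(t, t_half):
    """Fraction of the injected dose still circulating at time t."""
    return 0.5 ** (t / t_half)

# Free mRNA: ~8% remaining at 15 min implies a half-life of roughly 4 min.
t_half_free = half_life(15.0, 0.08)
# NP-formulated mRNA: ~30% remaining at 60 min implies roughly 35 min.
t_half_np = half_life(60.0, 0.30)
```

Under this one-compartment assumption the NP formulation extends the apparent half-life by almost an order of magnitude, consistent with the qualitative contrast drawn in the text.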
In addition, the IF images showed that p53 protein was mainly localized in the cytoplasm of RIL-175 cells. Next, we tested cell growth and cell viability after treatment with CTCE-p53 NPs versus SCP-p53 NPs. Figure 2h shows that the number of viable cells was dramatically decreased after 10-day treatment with SCP-p53 NPs or CTCE-p53 NPs compared to control-treated cells, or to cells treated with CTCE-EGFP NPs or empty CTCE-NPs. Of note, the CTCE-p53 NPs elicited greater growth inhibition than non-targeted SCP-p53 NPs, consistent with higher p53 expression. Moreover, CTCE-p53 NPs significantly decreased cell viability in a dose-dependent manner compared to the control, free mRNA, and control NPs (Fig. 2i ). These results indicate that the CTCE-targeting NP system effectively delivers p53 mRNA to HCC cells, restoring functional p53 activity and reducing HCC cell viability. In addition, we tested whether the CTCE-p53 NPs could induce the tumor-suppressive function of p53 in the p53 -wild type murine HCC cell line HCA-1. As shown in Supplementary Fig. 18 , modest cytotoxicity was observed at high doses in HCA-1 cells, whereas empty NPs and control NPs (CTCE-EGFP NPs) had no effects on HCA-1 cell viability. Combining CXCR4-targeted p53 mRNA NPs with PD-1 blockade inhibits tumor growth and reprograms the immune TME in orthotopic p53 -null murine HCC To examine the role of p53 in immunosuppression in HCC, we tested the CTCE-p53 NPs and aPD1 against p53 -null HCC. Mice with established orthotopic RIL-175 tumors were treated with either CTCE-p53 NPs at an mRNA dose of 350 µg/kg by intravenous (i.v.) injection, aPD1 by intraperitoneal (i.p.) injection, or their combination, every 3 days for 4 cycles (Fig. 3a ). Tumor growth was monitored by high-frequency ultrasound imaging (Fig. 3b ).
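Tumor volumes from ultrasound or caliper measurements are commonly estimated with the modified ellipsoid formula V = (L × W²)/2. The paper does not state which formula was used, so the following is only an illustrative convention:

```python
def ellipsoid_volume_mm3(length_mm, width_mm):
    """Modified ellipsoid estimate V = (L * W^2) / 2, a common convention
    for caliper/ultrasound tumor measurements (assumed here; the study's
    exact volume formula is not stated)."""
    return length_mm * width_mm ** 2 / 2.0

print(ellipsoid_volume_mm3(8.0, 6.0))  # 144.0 mm^3
```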
In vivo results revealed that CTCE-p53 NP treatment or aPD1 therapy alone inhibited HCC growth compared to IgG-treated control mice, but their combination was significantly more effective than either treatment alone (individual growth curves in Fig. 3c , mean tumor volumes in Fig. 3d , and mean tumor weight in Supplementary Fig. 19a ). We also performed immunohistochemistry (IHC) analysis to confirm the expression of p53 in the orthotopic tumors. As shown in Fig. 3e , p53 was expressed at the highest levels in the CXCR4-targeted p53 NP-treated groups, confirming the successful delivery of p53 mRNA to the orthotopic tumors. Fig. 3: PD-1 blockade combined with CXCR4-targeted p53 mRNA NPs reprograms the immune TME and promotes anti-tumor immunity in HCC. a Timeline of tumor implantation and treatment schedule in the orthotopic HCC model. The mice with orthotopic RIL-175 tumor were treated with CTCE-EGFP mRNA NPs or CTCE-p53 mRNA NPs every 3 days for 4 i.v. injections. Anti-PD-1 (aPD1) was given at 10 mg/kg every 3 days by i.p. injection. b High-frequency ultrasound images of the RIL-175 orthotopic tumor-bearing C57BL/6 mice at Day 7, 10, 13, 16, and 19 ( n = 7 mice/group). c , d Tumor growth profile of each indicated treatment group ( n = 7 mice/group). e Immunofluorescence staining of p53 expression in RIL-175 tumors (red signals) in different groups. Scale bar: 200 µm. f – n Flow cytometry analysis ( n = 7 samples for CTCE-EGFP-NPs and aPD1 group; n = 6 samples for CTCE-p53 NPs and CTCE-p53 NPs+aPD1 group) of tumor CD8 + cytotoxic T cells ( f ), IFN-γ + TNF-α + cells among CD8 + T cells ( g ), CD4 + T cells ( h ), CD11b + cells when gating on NK cells ( i ), KLRG1 + cells when gating on CD11b + NK cells ( j ), IFN-γ + cells when gating on NK cells ( k ), IFN-γR + cells when gating on NK cells ( l ), M1-like tumor-associated macrophages (TAMs) ( m ), and M2-like TAMs ( n ).
o – q Increased levels of expression of TNF-α ( o ), IL-1β ( p ), and IFN-γ ( q ) in RIL-175 tumor tissues by protein array measurements after combination treatment ( n = 4 tumor samples/group). Statistical significance was calculated via one-way ANOVA with a Tukey post-hoc test. All data are presented as mean ± S.E.M. For e : this experiment was repeated thrice independently with similar results. * P < 0.05; ** P < 0.01; *** P < 0.001; **** P < 0.0001. Source data are provided as a Source Data file. Full size image We then examined the impact of treatment on immune cell infiltration and activation in the RIL-175 tumors by flow cytometry analyses of digested HCC tissues. Compared to treatment with CTCE-EGFP NPs, CTCE-p53 NPs, or aPD1 alone, we found that the combination of CTCE-p53 NPs with aPD1 significantly increased the number of infiltrating CD8 + T cells (Fig. 3f ). Importantly, the fraction of activated (IFN-γ + TNF-α + ) CD8 + T cells was significantly increased in the HCC tissue after combination therapy (Fig. 3g ). In addition, the fraction of infiltrating CD4 + FoxP3 – effector T cells (Fig. 3h ), mature (KLRG1 + CD11b + ) NK cells (Fig. 3i, j ), and activated (IFN-γ + and IFN-γR + ) NK cells (Fig. 3k, l ) all increased after combined treatment with CTCE-p53 NPs and aPD1. Moreover, we found that combination therapy effectively polarized tumor-associated macrophages (TAMs) towards the M1-like phenotype and decreased M2-like TAMs in HCC (Fig. 3m, n ). It is worth noting that CTCE-p53 NPs alone increased the fractions of mature NK cells and M1 TAMs while reducing M2 TAMs (Fig. 3l–n ); in contrast, aPD1 alone had the opposite effect by polarizing TAMs toward the M2-phenotype (Fig. 3m, n ). We also examined changes in key immune cytokines post-treatment by multiplexed array analysis of whole tumor tissue protein extract. 
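Per the figure legends, group comparisons in these experiments were tested by one-way ANOVA with a Tukey post-hoc test. A minimal sketch of that analysis in Python with invented readouts (assumes SciPy ≥ 1.8 for `scipy.stats.tukey_hsd`):

```python
from scipy import stats

# Illustrative cytokine readouts (arbitrary units, n = 4 tumors/group),
# mimicking the style of analysis in Fig. 3o-q; all values are invented.
control   = [10.2, 11.1, 9.6, 10.4]
nps_alone = [14.0, 15.2, 13.1, 14.5]
combo     = [20.3, 21.0, 19.2, 22.1]

# Global test: one-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(control, nps_alone, combo)

# Pairwise comparisons with Tukey's HSD (controls family-wise error rate).
tukey = stats.tukey_hsd(control, nps_alone, combo)
p_ctrl_vs_combo = tukey.pvalue[0, 2]

print(f"ANOVA p = {p_anova:.2g}; control vs combo Tukey p = {p_ctrl_vs_combo:.2g}")
```

`f_oneway` gives the global test across all groups; `tukey_hsd` then provides pairwise p-values adjusted for multiple comparisons, matching the style of significance annotation used in the figures.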
We found that CTCE-p53 NPs and aPD1 significantly increased TNF-α and IL-1β levels; they also tended to increase IFN-γ and IL-2 and to decrease IL-6, with no change in IL-10 or MCP-1 (CCL2) (Fig. 3o–q and Supplementary Figs. 19b–d ). Collectively, these results suggest that the combination of CTCE-p53 NPs and PD-1 blockade effectively and globally reprogrammed the immune TME of HCC by increasing effector immune cells and cytokine levels in the tumor. We further compared side-by-side the survival benefit of the combination of CTCE-p53 NPs with aPD1 against a regimen similar to the new standard of care in HCC patients (i.e., anti-VEGFR2 antibody + aPD-L1 antibody) in the orthotopic RIL-175 tumor model (Supplementary Fig. 20 ). Results showed that both treatments were effective and comparable in increasing overall survival and delaying disease morbidity in the p53 -null murine HCC model. In addition, the in vivo therapeutic efficacy of the combination of CTCE-p53 NPs with aPD1 was also evaluated in an orthotopic p53 -wild type HCC tumor model (HCA-1) in C3H mice. Though the CTCE-p53 NPs showed modest in vitro cytotoxicity in HCA-1 cells (Supplementary Fig. 18 ), this modest in vitro effect did not translate into an in vivo survival benefit (Supplementary Fig. 21 ) with the same dosage and dosing frequency used in the RIL-175 model. Combining CXCR4-targeted p53 mRNA NPs with PD-1 blockade is effective in ectopic p53 -null murine HCC To determine whether the comprehensive reprogramming of the immune TME was dependent on the localization of tumor within the liver, we next evaluated in vivo p53 expression, anti-tumor immune response, and anti-tumor efficacy in a subcutaneously grafted HCC model in immunocompetent C57Bl/6 mice. We administered four injections of CTCE-p53 NPs i.v. (350 µg/kg body weight) and aPD1 i.p. (100 μg per dose) every 3 days in mice with established tumors (Supplementary Fig. 22a ). Tumor-bearing mice treated with CTCE-EGFP NPs served as controls.
We first evaluated the anti-tumor effect of CTCE-p53 NPs and aPD1 by bioluminescence imaging of the luciferase-expressing RIL-175 tumors to estimate viable tumor burdens (Fig. 4a ). The combination treatment markedly limited the increase of bioluminescence signals compared to CTCE-p53 NPs or aPD1 treatment alone, indicating a potent anti-tumor effect. Moreover, RIL-175 tumor-bearing mice treated with CTCE-EGFP NPs showed aggressive tumor growth, while aPD1 treatment and CTCE-p53 NPs alone delayed the growth of RIL-175 tumors (Fig. 4b and Supplementary Fig. 22b ). The combination of CTCE-p53 NPs with aPD1 showed a significantly greater anti-tumor effect than either treatment alone, reducing tumor volume and inducing tumor regression after 4 cycles of treatment (Fig. 4b ). Next, protein extracts from tumor tissues from the different treatment groups were analyzed by WB. As shown in Fig. 4c , CTCE-p53 NP treatment alone and combined with aPD1 treatment both elicited high levels of p53 protein expression in ectopic p53 -null RIL-175 tumors, whereas neither the aPD1 nor the control NPs (i.e., CTCE-EGFP NPs) had any effect on p53 expression. IHC analysis of tumor sections further confirmed p53 expression (Supplementary Fig. 22c ). These results demonstrate that the p53 mRNA NPs effectively restored p53 expression in vivo and significantly enhanced the anti-tumor effects of aPD1 therapy in HCC growing outside the liver. Fig. 4: Combining CXCR4-targeted p53 mRNA NPs with PD-1 blockade reprograms the immune TME and promotes antitumor immunity in ectopic HCC. a Bioluminescence images of the luciferase-expressing RIL-175 tumors grafted subcutaneously in C57Bl/6 mice after 6, 12, and 18 days of treatment ( n = 3 mice/group). b Tumor growth rate in each treatment group ( n = 7 mice/group; *** P < 0.001). c Western blotting analysis of p53 protein expression levels in the s.c. RIL-175 tumors after treatment. GAPDH was used as the loading control.
d – f Flow cytometry analysis ( n = 3 tumor samples from each group) of lymph node CD80 + CD86 + dendritic cells gating on CD11c + cells ( d ), and tumor-infiltrating CD8 + CD3 + T cells ( e ) and M2-like CD206 + F4/80 + CD11b + macrophages ( f ). g Representative immunofluorescence for CD8 (in red) to confirm intratumoral T cell infiltration after treatment with CTCE-EGFP NPs, anti-PD-1 (aPD1), CTCE-p53 NPs, or the combination. Scale bar: 200 µm. h – k Protein array analysis of differential expression of cytokines in s.c. HCC tissues after treatment ( n = 3 samples per group): TNF-α ( h ), IL-1β ( i ), IFN-γ ( j ), and IL-6 ( k ). Statistical significance was calculated using one-way ANOVA with a Tukey post-hoc test. All data are presented as mean ± S.D. For c and g : this experiment was repeated thrice independently with similar results. * P < 0.05; ** P < 0.01; *** P < 0.001; **** P < 0.0001. Source data are provided as a Source Data file. Full size image Using the same model, we also harvested tumors and lymph nodes to examine the number and phenotype of immune cells and the changes in secreted cytokines after four cycles of treatment. CTCE-p53 NPs alone or in combination with aPD1 induced a significant increase in CD80 + CD86 + lymph node-resident dendritic cells (LNDCs) and intratumoral CD8 + T cells (Fig. 4d, e ), and a significant decrease in M2-type TAMs (Fig. 4f ). IF analysis of tumor tissues confirmed the increased intratumoral infiltration by CD8 + T cells after combination treatment (Fig. 4g ). Multiplexed array analysis revealed, similar to orthotopic HCCs, increased expression of cytokines associated with immune cell activation (e.g., TNF-α, IL-1β, IFN-γ, and IL-2) and also decreased expression of immunosuppressive cytokines (e.g., IL-10 and MCP-1) in the ectopic HCCs after combination treatment (Fig. 4h–k and Supplementary Fig. 23 ). Moreover, we also studied the role of p53 in MHC class I expression by WB and IF. Results in Supplementary Figs.
24 and 25 revealed an association between p53 and MHC class I expression, indicating the potential role of p53 restoration in inducing immune responses. These results demonstrate that targeting HCC cells with CTCE-p53 NPs combined with aPD1 therapy triggers anti-tumor immunity and reprograms the immune TME of HCC both in the liver and in other organs. Combination therapy prolongs survival and reduces bloody ascites, pleural effusions, and lung metastases Using the orthotopic RIL-175 tumor model, we further evaluated the therapeutic efficacy of combining aPD1 with CTCE-p53 NPs in mice with established tumors (Fig. 5a ). We treated the mice by i.v. injection of NPs and i.p. injection of aPD1 for four cycles and then monitored tumor growth by ultrasound imaging and tracked survival. CTCE-p53 NPs alone and aPD1 alone modestly inhibited tumor growth, but their combination elicited a significant delay in tumor growth (Fig. 5b, c ). Notably, the group treated with CTCE-p53 NPs plus aPD1 showed a significant and substantial survival benefit (median overall survival of 43.5 days, almost double that in the control group in this model, HR = 0.26; p = 0.0001) (Fig. 5d ). In addition, only the combination treatment reduced the incidence of bloody ascites (Fig. 5e ) and pleural effusions (Fig. 5f ), which are potentially lethal adverse effects of orthotopic HCC. Moreover, when we assessed the lung metastatic burden by enumerating metastatic nodules, we found it significantly reduced in the group that received a combination of CTCE-p53 NPs with aPD1 (Fig. 5g ). These findings suggest that p53 restoration using CXCR4-targeted mRNA NPs can markedly improve the efficacy of aPD1 therapy in p53 -deficient HCC. Fig. 5: Therapeutic efficacy of the combination of CTCE-p53-mRNA NPs with anti-PD-1 (aPD1) in orthotopic HCC model. a Timeline of tumor implantation and treatment schedule for survival studies in HCC models.
b , c Tumor growth profile of each indicated treatment group ( n = 12 mice/group). d Survival data from the RIL-175 orthotopic mouse model ( n = 12 mice/group). e , f The combination of CTCE-p53-mRNA NPs with aPD1 reduces ascites ( e ) and pleural effusion ( f ). g The combination of CTCE-p53-mRNA NPs with aPD1 reduces lung metastasis ( n = 12 mice for each group). Statistical significance was calculated via one-way ANOVA with a Tukey post-hoc test. All data are presented as mean ± S.E.M. * P < 0.05; ** P < 0.01. Source data are provided as a Source Data file. Full size image Combination of p53 mRNA NPs with aPD1 is safe in vivo Finally, to evaluate the in vivo safety of CXCR4-targeted p53-mRNA NPs alone and in combination with aPD1, mouse weight was monitored during the above animal studies with the s.c. grafted and orthotopic models, and blood and major organs (e.g., heart, kidneys, liver, lung, and spleen) were harvested at the end of these studies. No significant change in body weight was observed in any of the treatment groups (Supplementary Figs. 26 and 27 ). We performed hematological analysis based on serum biochemistry and whole blood panel tests. A series of parameters were tested, including alanine aminotransferase (ALT), aspartate aminotransferase (AST), blood urea nitrogen (BUN), albumin, creatinine, globulin, calcium, cholesterol, phosphorus, glucose, total protein, red blood cells (RBC), white blood cells (WBC), hemoglobin (Hb), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), hematocrit (HCT), and lymphocytes (LY). As shown in Fig. 6 and Supplementary Fig. 28 , no obvious changes were detected in any hematological parameter across groups, indicating negligible side effects of the CTCE-p53 NPs and their combination with aPD1. We also examined the major organs by H&E staining. Histological analyses revealed no obvious abnormalities and no differences in the main organs among the treatment groups ( Supplementary Figs.
29 and 30 ), further demonstrating the in vivo safety of the combination treatment. Fig. 6: In vivo safety of CTCE-p53 NPs and the combination with anti-PD-1 antibody. a – k Serum biochemistry analysis ( n = 4 samples for CTCE-EGFP NPs group; n = 5 samples for the left four groups). l – r Whole blood panel tests analysis ( n = 5 samples for each group). All data are presented as mean ± S.E.M. Source data are provided as a Source Data file. Full size image Discussion The last decade has witnessed a tremendous shift in cancer treatment toward immunotherapy with ICBs, significantly extending the survival of cancer patients, including those with HCC. However, benefits are seen in only a fraction of patients. Combinations of ICB therapy with other therapy modalities (e.g., chemotherapy, radiotherapy, and targeted therapy) are being actively explored for their ability to activate anti-tumor immune response and/or alter the immunosuppressive TME. These strategies are designed to increase the recruitment of activated effector T cells in ‘immunologically cold’ tumors that lack T cells and do not respond to ICB-based therapy. The tumor suppressor p53 is one of the most frequently mutated genes in a wide range of cancers and is strongly associated with tumorigenesis, tumor progression, treatment resistance, and adverse prognosis. Compelling evidence suggests that p53 dysfunction leads to immunosuppression and immune evasion. Restoration of p53 function thus may offer the opportunity to reverse immunosuppression of the TME and improve the anti-tumor efficacy of ICB therapy. Current efforts towards p53 reactivation include small molecules and DNA therapies 31 , 32 , 33 , 34 , 35 , 36 , 37 , which have shown notable outcomes but are also associated with formidable drawbacks 38 , 39 , highlighting the need for new therapeutic strategies to restore p53 functions. 
The use of synthetic mRNA has attracted tremendous attention, as exemplified by the recent clinical approval of COVID-19 mRNA nano-vaccines and the clinical trials of a number of mRNA nanotherapeutics for diverse diseases including cancer 28 , 40 , 41 , 42 , 43 . As a compelling alternative to DNA, mRNA requires only cytosolic delivery for translation, thus largely avoiding host genome integration and eliciting faster and more predictable protein expression. In this study, we developed a CXCR4-targeted mRNA NP platform for effective p53 restoration and tested it in combination with aPD1 immunotherapy using p53 -null murine HCC models. We extensively optimized the p53 mRNA NP platform by screening a series of ionizable lipid-like compounds and varying densities of CXCR4-targeting ligands to improve mRNA translation and HCC targeting in vivo. Our results demonstrate that the combination of CXCR4-targeted p53 mRNA NPs with aPD1 leads to a potent antitumor effect in intrahepatic and ectopic models of HCC with p53 loss. The combination of p53 mRNA NPs and aPD1 effectively and globally reprogrammed the immune TME by promoting MHC-I expression and anti-tumor immunity, and decreasing the expression of immunosuppressive cytokines in HCC, irrespective of organ location. These findings suggest that p53 mRNA nanotherapy could enhance the efficacy of ICB therapy, substantially improving the treatment of p53 -deficient HCC and potentially other p53 -deficient cancers. Further studies will be required to gain an in-depth understanding of the role of p53 in immune regulation, such as how the p53 status of cancer cells (e.g., p53 mutation) affects the immune TME and how the transfection of p53 mRNA NPs in immune cells (e.g., T cells, NK cells, and macrophages) affects their function in vivo. In addition, new combinatorial strategies pairing p53 targeting with ICB, with or without VEGF blockade, may be required to increase the durability of responses.
If successfully translated, the mRNA nanotherapy-based p53 restoration strategy could be transformative and impactful in cancer immunotherapy. Methods Materials Ester-terminated PLGA (with inherent viscosity of 0.55-0.75 dL/g) was purchased from Durect Corporation. Lipid PEGs terminated with methoxyl groups (1,2-distearoyl-sn-glycero-3-phosphoethanolamine- N -[methoxy(polyethylene glycol)−3000] (ammonium salt), DSPE-MPEG (molecular weight (MW) of PEG, 3000 Da) were purchased from Avanti Polar Lipids. Cationic ethylenediamine core-poly(amidoamine) (PAMAM) dendrimer generation 0 (G0) was purchased from Sigma-Aldrich. CXCR4-targeting peptide CTCE-9908 (KGVSLSYRCRYSLSVGK, CTCE) and scrambled peptide (LYSVKRSGCGSRKVSYL, SCP) were custom synthesized by GL Biochem (Shanghai) Ltd. Lipofectamine 2000 (L2K) was purchased from Invitrogen. Firefly Luciferase mRNA (Luc mRNA, L-7202), Enhanced Green Fluorescent Protein mRNA (EGFP mRNA, L-7201), and Cyanine 5 Firefly Luciferase mRNA (Cy5-Luc mRNA, L-7702) were purchased from TriLink Biotechnologies (San Diego, CA). Murine p53 mRNA with chemical modification (full substitution of Pseudo-U and 5-Methyl-C, Capped (Cap 1) using CleanCap® AG, Polyadenylated (120 A)) was custom-synthesized by TriLink Biotechnologies (San Diego, CA). InVivo MAb anti-mouse PD-1 (CD279) was purchased from Bioxcell. D-luciferin-K + salt bioluminescent substrate (no. 122799) was obtained from PerkinElmer. Primary antibodies used for western blot experiments as well as immunofluorescent and immunohistochemistry staining included: anti-p53 (sc-126, Santa Cruz Biotechnology, 1:500 dilution), anti-GAPDH (Cell Signaling Technology, # 5174; 1:2000 dilution), anti-beta-Actin (Cell Signaling Technology; 1:2000 dilution), and anti-rabbit and anti-mouse horseradish peroxidase (HRP)-conjugated secondary antibodies (Cell Signaling Technology).
Secondary antibodies used in this study included: Alexa Fluor® 488 Goat-anti Rabbit IgG (Life Technologies, A-11034), and Alexa Fluor® 647 Goat-anti Mouse IgG (Life Technologies, A-28181). All other chemicals and solvents were purchased from Sigma-Aldrich and used without further purification. Synthesis of ionizable lipid-like compounds (G0-Cn) A series of ionizable lipid-like compounds termed G0-Cn were synthesized through ring opening of epoxides bearing different alkyl chain lengths by generation 0 of poly (amidoamine) (PAMAM) dendrimers (M1). Briefly, substoichiometric amounts of epoxide were added to increase the proportion of products with one less tail than the total possible for a given amine monomer. The amine (1 equiv, typically 1 millimole (mmol)) and epoxide (9 equiv, typically 1 millimole (mmol)) were added to a 50 mL round-bottom glass flask containing a magnetic stir bar. The flask was sealed, and the reaction was heated to 95 °C with homogeneous stirring for 2 days. The crude products were separated by chromatography on silica with gradient elution from CH 2 Cl 2 to 15:1 CH 2 Cl 2 /MeOH. The separated product was characterized by 1 H NMR spectroscopy. mRNA complexation ability of G0-C8 and its stability in organic solvent Gel electrophoresis was used to study the mRNA complexation ability of the ionizable compound G0-C8 and to optimize the G0-C8/mRNA ratio in the NPs, using free EGFP-mRNA or EGFP-mRNA complexed with G0-C8. Free EGFP-mRNA was also incubated with DMF to evaluate the stability of mRNA in organic solvent (DMF). The EGFP-mRNA was first incubated with G0-C8 at different weight ratios (weight ratios of G0-C8/mRNA: 1, 2, 5, 10, and 20) or DMF for 20 min at room temperature. The volumes of samples were then adjusted with loading dye (Invitrogen) and run on an E-Gel 2% agarose gel (Invitrogen) for 30 min at 50 V. Ambion Millennium markers-Formamide (Thermo Fisher Scientific) was used as a ladder.
Finally, the gel was imaged under ultraviolet and the bands were analyzed. Synthesis of lipid-PEG-CTCE HCC targeting peptide (DSPE-PEG-CTCE) and lipid-PEG-scrambled peptide (DSPE-PEG-SCP) We conjugated the CXCR4-targeting peptide CTCE-9908 (KGVSLSYRCRYSLSVGK, CTCE) and scrambled peptide (LYSVKRSGCGSRKVSYL, SCP) to DSPE-PEG-MAL to construct the HCC targeted NPs and the non-targeted control NPs, respectively. Synthesis of DSPE-PEG-CTCE and DSPE-PEG-SCP was achieved through the efficient thiol-maleimide Michael addition click reaction. In brief, DSPE-PEG-maleimide and the thiol-CTCE peptide (3:1) or thiol-scrambled peptide were each dissolved in N,N-dimethylformamide (DMF). The peptide solution was diluted in 0.1 M sodium phosphate buffer, pH 7.4, and DSPE-PEG was then added to the mixture. The final reaction mixture was 1:1 DMF/(sodium phosphate buffer) with 5 mM peptide and 15 mM DSPE-PEG maleimide. The reaction was allowed to proceed for 2 h at room temperature and then dialyzed against DI water for purification. Lastly, the product was lyophilized to obtain a white powder as the final product (DSPE-PEG-CTCE or DSPE-PEG-SCP). The chemical structures of DSPE-PEG-CTCE and DSPE-PEG-SCP were confirmed by 1 H NMR spectroscopy. Optimization of the mRNA NPs: the effect of targeting ligand densities The cellular uptake of Enhanced Green Fluorescent Protein mRNA (EGFP mRNA) NPs engineered with seven different densities of CTCE peptide (EGFP-mRNA-CTCE NPs, CTCE density: 2%, 3%, 4%, 5%, 6%, 7%, and 10%, respectively) and 5% scrambled peptide (SCP) was studied to optimize the surface chemistry and targeting efficacy of the mRNA NPs. GFP expression was measured by flow cytometry (BD Biosystems, Heidelberg, Germany) and analyzed using FlowJo software (FlowJo V10).
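The coupling conditions described above (3:1 DSPE-PEG-maleimide:peptide, 5 mM peptide and 15 mM lipid-PEG in 1:1 DMF/phosphate buffer) fix the reaction setup once a batch volume is chosen. A small helper, a sketch only: the molecular weights below are placeholders and must be replaced by the actual values for the peptide and lipid-PEG lots used.

```python
def michael_addition_mix(total_volume_ml, peptide_mM=5.0, lipid_mal_mM=15.0,
                         peptide_mw=1900.0, lipid_mw=3400.0):
    """Amounts for the thiol-maleimide coupling: 3:1 DSPE-PEG-MAL:peptide,
    final 5 mM peptide / 15 mM lipid-PEG-MAL in 1:1 DMF/phosphate buffer.
    peptide_mw and lipid_mw are illustrative placeholders, not lot values."""
    mmol_peptide = peptide_mM * total_volume_ml / 1000.0
    mmol_lipid = lipid_mal_mM * total_volume_ml / 1000.0
    return {
        "peptide (mg)": mmol_peptide * peptide_mw,
        "DSPE-PEG-MAL (mg)": mmol_lipid * lipid_mw,
        "DMF (ml)": total_volume_ml / 2.0,
        "0.1 M phosphate buffer (ml)": total_volume_ml / 2.0,
        "molar ratio (lipid:peptide)": mmol_lipid / mmol_peptide,
    }

print(michael_addition_mix(2.0))  # a 2-ml reaction at the stated concentrations
```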
Preparation of mRNA NPs and the formulation optimization An optimized and robust self-assembly technique was employed to prepare mRNA-encapsulated polymer-lipid hybrid NPs based on our previous report 27 , but we extensively optimized the ratios among different NP components, the pH of the solution for mRNA complexation, and the sequence in which reagents were added, all of which affected the encapsulation, morphology, and transfection efficiency of the mRNA. Briefly, G0-C8 and PLGA were dissolved separately in anhydrous DMF to form a homogeneous solution at concentrations of 2.5 mg/ml and 5 mg/ml, respectively. DSPE-MPEG, DSPE-PEG-CTCE and DSPE-PEG-SCP were dissolved in DNase/RNase-free HyPure water (GE Healthcare Life Sciences, catalog no. SH30538) at the concentration of 1 mg/mL. All of the reagents listed above were sonicated for 5 min in a water-bath sonicator before use. Citrate buffer with pH 3.0–3.5 was first added to 80 μg of G0-C8 (in 32 μl of DMF), then 16 μg of p53 mRNA (in 16 μl of citrate buffer) was added, mixed gently (at a G0-C8/mRNA weight ratio of 5), and allowed to stand at room temperature for 15 min to ensure sufficient electrostatic complexation. Afterwards, 250 μg of PLGA polymers (in 50 μl of DMF) was added to the mixture and gently mixed. The final mixture was added dropwise to 10 ml of DNase/RNase-free HyPure water containing 1 mg of hybrid lipid-PEGs under uniform magnetic stirring (1000 rpm) for 30 min. An ultrafiltration device (EMD Millipore, MWCO 100 kDa) was used to remove the organic solvent and free compounds from the NP dispersion via centrifugation at 4 °C. After washing 3 times with DNase/RNase-free HyPure water, the mRNA NPs were collected and finally concentrated in pH 7.4 PBS buffer. The NPs were used fresh or stored at −80 °C for further use.
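The quantities in this protocol scale linearly with the mRNA input at fixed ratios (G0-C8/mRNA = 5 by weight, with 250 µg PLGA and 1 mg lipid-PEGs per 16 µg mRNA). A small batch calculator, a sketch based only on the amounts stated above:

```python
def batch_amounts(mrna_ug, g0c8_ratio=5.0,
                  plga_per_mrna=250.0 / 16.0, lipid_peg_per_mrna=1000.0 / 16.0):
    """Scale the self-assembly recipe (16 ug mRNA : 80 ug G0-C8 : 250 ug PLGA :
    1 mg lipid-PEGs) to an arbitrary mRNA input. Returns component masses (ug)
    and DMF volumes (ul) at the stated stock concentrations
    (G0-C8: 2.5 mg/ml = 2.5 ug/ul; PLGA: 5 mg/ml = 5 ug/ul)."""
    g0c8_ug = g0c8_ratio * mrna_ug
    plga_ug = plga_per_mrna * mrna_ug
    lipid_ug = lipid_peg_per_mrna * mrna_ug
    return {
        "G0-C8 (ug)": g0c8_ug,
        "G0-C8 DMF (ul)": g0c8_ug / 2.5,
        "PLGA (ug)": plga_ug,
        "PLGA DMF (ul)": plga_ug / 5.0,
        "lipid-PEG (ug)": lipid_ug,
    }

print(batch_amounts(16))  # reproduces the quantities stated in the protocol
```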
Physicochemical characterization and stability of mRNA NPs The hydrodynamic diameter, zeta potential, and morphology of the p53-mRNA NPs were measured to assess their physicochemical properties. Sizes and zeta potentials of both CTCE- p53-mRNA NPs and SCP-p53-mRNA NPs were measured by dynamic light scattering (DLS, Brookhaven Instruments Corporation) at 20 °C. Diameters are reported as the intensity mean peak average. To prepare NPs for Transmission Electron Microscopy (TEM) to characterize their morphology and shape, CTCE-p53-mRNA NPs were negatively stained with 2% uranyl acetate and then imaged with a Tecnai G2 Spirit BioTWIN microscope (FEI Company). To verify the in vitro stability of the synthesized polymer-lipid hybrid mRNA NPs in an environment mimicking the physiological milieu, CTCE-p53-mRNA NPs were incubated in 10% serum-containing PBS solution at 37 °C in triplicate for 96 hr with constant stirring at 100 rpm. At each time point, an aliquot of NP solution was withdrawn for particle size measurement using DLS and analyzed at various time intervals to evaluate any change in size distribution. To test the encapsulation efficiency (EE%) of mRNA in the NPs, Cy5-Luc-mRNA NPs were prepared according to the aforementioned method. Dimethyl sulfoxide (DMSO, 100 μl) was added to 5 μl of the NP solution to extract the mRNA encapsulated in the NPs, and the fluorescence intensity of Cy5-Luc-mRNA was measured using a multi-mode microplate reader (TECAN, Infinite M200 Pro). The encapsulation efficiency of mRNA in the engineered NPs was calculated to be ~67.5%. Cell culture The p53 -null murine HCC cell line RIL-175 was used throughout. RIL-175 (a p53 -null/Hras mutant line syngeneic to C57Bl/6 mouse strain background, Luciferase-tagged) was kindly provided by Dr. Tim Greten (NIH). All other cells were purchased from American Type Culture Collection (ATCC). Dulbecco’s Modified Eagle’s Medium (DMEM; ATCC) was used to culture RIL-175 cells.
The cell culture medium was supplemented with 10% fetal bovine serum (Hyclone, SH30071.03) and Pen-Strep (100 U ml −1 and 100 μg ml −1 , respectively). Cell culture and all biological experiments were performed at 37 °C in 5% CO 2 conditions and the normal level of O 2 in a cell culture incubator. All cell lines were routinely tested using a mycoplasma contamination kit (R&D Systems) before any in vitro cell experiments or in vivo tumor model preparation. Cell viability and transfection efficiency of EGFP-mRNA NPs CTCE-EGFP-mRNA NPs and SCP-EGFP-mRNA NPs were prepared to evaluate the cell viability of the mRNA NPs along with their EGFP-mRNA transfection efficiency. For the cell viability tests, RIL-175 cells were plated in a 96-well plate at a density of 5 × 10 3 cells per well. After 24 h of cell adherence, cells were treated with EGFP-mRNA at various mRNA concentrations (0.0625, 0.125, 0.250, 0.500, and 0.750 μg ml −1 ) for 24 h. The cells were then washed with PBS buffer (pH 7.4), the culture medium was changed to 0.1 ml of fresh complete medium per well, and the cells were incubated for another 24 h before cell viability was evaluated with the Alamar Blue assay, according to the manufacturer’s protocol, on a microplate reader (TECAN, Infinite M200 Pro). To test the transfection efficiency, RIL-175 cells were seeded at a density of 5 × 10 4 cells per well on a 6-well plate and allowed to attach and grow until ~80% confluence. Cells were transfected with EGFP-mRNA NPs at the mRNA concentration of 0.5 μg ml −1 for 24 h, washed, given fresh complete medium, and further incubated for 24 h before transfection efficiency was assessed by measuring GFP expression using flow cytometry (DXP11 Flow Cytometry Analyzer). The percentages of GFP-positive cells were calculated and analyzed using Flowjo software (Flowjo V10).
Establishment of CXCR4-KO RIL-175 cells The CRISPR (clustered regularly interspaced short palindromic repeat)/Cas9 (CRISPR-associated protein 9) gene-editing system was used to knock out the CXCR4 gene in RIL-175 cells. Briefly, single guide RNAs (sgRNAs) targeting CXCR4 were designed with an online design tool, including sgRNA1 (forward: 5′-CACCGTCGAGAGCATCGTGCACAAG-3′, reverse: 5′-AAACCTTGTGCACGATGCTCTCGAC-3′) and sgRNA2 (forward: 5′-CACCGGGACTTACACTCACACTGAT-3′, reverse: 5′-AAACATCAGTGTGAGTGTAAGTCCC-3′), and the oligos were then phosphorylated and annealed. In parallel, the lentiviral expression plasmid lentiCRISPRv2 (Addgene, cat. no. 52961, USA) was digested and dephosphorylated with BsmBI enzyme (ThermoFisher, cat. no. ER0451), run on a DNA gel, and the larger band was gel-purified, leaving behind the 2 kb filler piece. Next, the annealed sgRNA oligos were ligated into the digested lentiCRISPRv2, and the ligation reaction was incubated for 10 min at room temperature. After transformation into Stbl3 bacteria and validation by DNA sequencing, lentiCRISPRv2 clones carrying the CXCR4-targeting sgRNAs were selected. The lentiCRISPRv2 construct and the packaging plasmids pVSVg (AddGene, cat. no. 8454) and psPAX2 (AddGene, cat. no. 12260) were then co-transfected into HEK293T cells to produce complete lentivirus, which was used to transduce wild-type RIL-175 cells. Puromycin (2 μg/μl) selection, enabled by the resistance cassette in lentiCRISPRv2, was used to screen out the positive cells successfully transduced with the complete lentivirus. Finally, quantitative PCR and western blotting were performed to detect CXCR4 expression at both the transcript and protein levels. Cellular uptake of dye-labeled mRNA-encapsulated NPs To monitor the cellular uptake of the NPs, Cy5-Luc-mRNA-NPs were prepared. RIL-175 cells were first seeded in 35 mm confocal dishes (MatTek) at a density of 5 × 10 4 cells per well and incubated at 37 °C in 5% CO 2 for 24 h.
The cells were then incubated with medium (DMEM) containing Cy5-Luc-mRNA-NPs for different time intervals. The cells were then washed with PBS, counterstained with Hoechst 33342 (ThermoFisher), and analyzed using an Olympus microscope (FV1200, Olympus). In vitro cell growth inhibition assay with p53-mRNA NPs RIL-175 or HCA-1 cells were plated in 96-well plates at a density of 5 × 10 3 cells per well. After 24 h of cell adherence, cells were treated with empty NPs (blank NPs), free p53 mRNA, or p53-mRNA NPs at different mRNA concentrations (0.0625, 0.125, 0.250, 0.500, and 0.750 μg ml −1 ). After 24 h of incubation, the cells were washed with PBS buffer (pH 7.4) and further incubated in fresh medium for another 24 h. The AlamarBlue cell viability assay was used to quantify the in vitro growth inhibition efficacy of the p53-mRNA NPs. Immunoblotting Protein extracts from cells taken from dissected tumors in each group were prepared using lysis buffer (1 mM EDTA, 20 mM Tris-HCl pH 7.6, 140 mM NaCl, 1% aprotinin, 1% NP-40, 1 mM phenylmethylsulphonyl fluoride, and 1 mM sodium vanadate) supplemented with protease inhibitor cocktail (Cell Signaling Technology), then boiled at 100 °C for 10 min. Protein concentrations were determined with a bicinchoninic acid protein assay kit (Pierce/Thermo Scientific) according to the manufacturer’s instructions, and equal amounts of protein were loaded. After gel electrophoresis and protein transfer, membranes were blocked with 3% bovine serum albumin (BSA) in TBST (150 mM NaCl, 50 mM Tris-HCl at pH 7.4, and 0.1% Tween 20) for 1 h at room temperature with gentle shaking. Membranes were rinsed and then incubated overnight at 4 °C with the appropriate primary antibodies. Immunoreactive bands were visualized using an enhanced chemiluminescence (ECL) detection system (Cell Signaling Technology).
Immunofluorescence staining and microscopy For immunofluorescence staining, cells or tumor tissues from each treatment group were washed with ice-cold PBS and fixed with 4% paraformaldehyde (Electron Microscopy Sciences) in PBS for 20 min at room temperature, followed by permeabilization in 0.2% Triton X-100-PBS for 10 min. Samples were then blocked with PBS blocking buffer containing 2% normal goat serum, 2% BSA, and 0.2% gelatin for 1 h at room temperature. Then, the samples were incubated with primary antibodies at the appropriate concentration for 1 h at room temperature, washed with PBS, and incubated with goat anti-rat-Alexa Fluor 647 (Molecular Probes) at 1:1000 dilution in blocking buffer for another 1 h at room temperature. Finally, stained cells were washed with PBS, counterstained with Hoechst 33342 (Molecular Probes-Invitrogen, H1399, 1:10000 dilution in PBS), and mounted on slides with Prolong Gold antifade mounting medium (Life Technologies). The slides were imaged under a confocal laser scanning microscope (Olympus, FV1100). Animals For the s.c. tumor model, all animal procedures were performed in ethical compliance and with approval by the Institutional Animal Care and Use Committees at Harvard Medical School. Immunocompetent male and female C57BL/6 mice (5–6 weeks old or 6–8 weeks old) were obtained from Charles River Laboratories and housed in a pathogen-free animal facility of Brigham and Women’s Hospital, Harvard Medical School. For each experiment, mice were randomly allocated to each group. Mice were given an acclimation period of at least 72 h prior to use so that physiological parameters could return to baseline after shipping and transfer. All animals were housed in single-unit cages on a 12-h light/dark cycle at controlled ambient temperature (68–79 °F) and 30–70% humidity.
For the orthotopic tumor model, all animal experiments were performed after approval by the Institutional Animal Care and Use Committee of the Massachusetts General Hospital. Pharmacokinetics study Healthy C57Bl/6 mice (5–6 weeks old, n = 3 per group) were injected intravenously with free Cy5-Luc-mRNA, CTCE-Cy5-Luc-mRNA NPs, or SCP-Cy5-Luc-mRNA NPs through the tail vein at an mRNA dose of 350 μg per kg of animal weight. Blood was collected retroorbitally at different time points (5 min, 30 min, 1 h, 2 h, 6 h, 12 h, and 24 h) and the fluorescence intensity of Cy5-Luc-mRNA was measured using a microplate reader (TECAN, Infinite M200 Pro). Pharmacokinetics were evaluated by calculating the percentage of Cy5-Luc mRNA in blood at each time point. HCC tumor model preparation Two p53 -null RIL-175 HCC tumor models, an ectopic (s.c.) grafted model and an orthotopic model, were developed for in vivo biodistribution, immune-microenvironment modulation, therapeutic efficacy, and in vivo toxicity studies. An orthotopic p53 -wild-type HCA-1 HCC tumor model was also developed for the in vivo therapeutic efficacy study. For the s.c. grafted model, ~1 × 10 6 RIL-175 cells in 100 μl of culture medium mixed with 100 μl of Matrigel (BD Biosciences) were implanted subcutaneously in the right flank of C57Bl/6 mice (6–8 weeks old). Mice were monitored for tumor growth every other day according to the animal protocol. To develop the RIL-175 orthotopic model, ~1 million RIL-175 cells mixed 1:1 with Matrigel (Mediatech/Corning, Manassas, VA) were grafted into the left extrahepatic lobe of C57Bl/6 mice (6–8 weeks old). Tumor growth was monitored by high-frequency ultrasonography every 3 days according to the animal protocol. For the HCA-1 orthotopic model, approximately 1 million HCA-1 cells mixed 1:1 with Matrigel (Mediatech/Corning, Manassas, VA) were grafted into the left extrahepatic lobe of C3H mice (6–8 weeks old).
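The pharmacokinetics readout described above, the percentage of Cy5-Luc mRNA remaining in blood over time, can be sketched in a few lines. Normalizing each sample to the earliest (5-min) reading is an assumption (the methods do not state the reference value), and the fluorescence values are illustrative, not study data:

```python
# Illustrative blood-fluorescence readings at the sampling times listed above.
timepoints_h = [5 / 60, 0.5, 1, 2, 6, 12, 24]
fluorescence = [1000.0, 820.0, 640.0, 450.0, 210.0, 90.0, 40.0]

# Express each time point as a percentage of the first post-injection sample
# (an assumed reference; the methods do not specify the normalization).
percent_in_blood = [100.0 * f / fluorescence[0] for f in fluorescence]
```

The resulting decay profile is what distinguishes the free mRNA from the NP-protected formulations in the comparison above.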
Tumor growth was monitored by high-frequency ultrasonography every 3 days according to the animal protocol. When the tumor volume reached ~100 mm 3 (for the ectopic model) or ~5 mm in diameter (for the orthotopic model), mice were randomly assigned to a treatment group. Biodistribution of mRNA NPs in the RIL-175 HCC tumor model The biodistribution and tumor accumulation of mRNA NPs were assessed in C57Bl/6 mice bearing s.c. grafted RIL-175 tumors (~100–200 mm 3 ) and in the RIL-175 orthotopic model (~5 mm in diameter), respectively. In brief, RIL-175 tumor-bearing C57Bl/6 mice (5–6 weeks old, n = 3 per group) were injected intravenously with free Cy5-Luc-mRNA, CTCE-Cy5-Luc NPs, or SCP-Cy5-Luc NPs via the tail vein at an mRNA dose of 350 μg per kg of animal weight. After 24 h, all the mice were sacrificed, and dissected organs and tumors were visualized using a Syngene PXi imaging system (Synoptics Ltd). The data were analyzed with ImageJ software. Flow cytometry and cytokine analysis Tumor immune-microenvironment responses were assessed in the s.c. grafted and orthotopic HCC models by cytokine detection and flow cytometry after treatment. RIL-175 tumor-bearing C57Bl/6 mice (6–8 weeks old, n = 3 per group) were systemically (i.v. via tail vein) injected with CTCE-targeted p53 mRNA NPs or controls (i.e., PBS or CTCE-EGFP NPs) every 3 days for four injections (at a murine p53 or EGFP mRNA dose of 350 μg/kg animal body weight). For the combinatorial immunotherapy group, one day after each i.v. injection of CTCE-p53 NPs, mice underwent intraperitoneal (i.p.) administration of aPD1 (100 μg per dose). The tumor inoculation and treatment schedule are depicted in Fig. 3a and Supplementary Fig. 22a . Forty-eight hours post-treatment, mice were euthanized and tumor tissue was harvested and homogenized for flow cytometry and cytokine analysis.
For flow cytometry, tumor tissues were resected and minced, and fragments were incubated in HBSS with 1.5 mg/mL of hyaluronidase and 15 µg/mL of collagenase for 30 minutes at 37 °C. Digested tissues were passed through a 70-µm cell strainer and washed twice with phosphate-buffered saline (PBS)/0.5% bovine serum albumin. Prior to immunostaining, cells were washed with the buffer and then fixed and permeabilized with the FoxP3/Transcription Factor Staining Buffer Set (eBioscience/Thermo Fisher Scientific) for staining of intracellular markers. Harvested cells were incubated in Dulbecco’s Modified Eagle Medium with BD Leukocyte Activation Cocktail with BD GolgiPlug™ (1:500) for 6 h at 37 °C. The cells were stained with antibodies against cell-surface and intracellular markers in buffer containing brefeldin A. Cells were stained with the fluorescence-labeled antibodies CD11c (Biolegend, cat. no. 117310, clone N418), CD80 (Biolegend, cat. no. 104722, clone 16-10A1), CD86 (Biolegend, cat. no. 105005, clone GL-1), CD4 (Biolegend, cat. no. 100412, clone GK1.5), CD3 (Biolegend, cat. no. 100204, clone 17A2), CD8 (Biolegend, cat. no. 140408, clone 53–5.8), CD11b (Biolegend, cat. no. 101208, clone M1/70), F4/80 (Biolegend, cat. no. 123116, clone BM8), CD206 (Biolegend, cat. no. 141716, clone C068C2), Gr-1 (Biolegend, cat. no. 108412, clone RB6-8C5), CD45 (Biolegend, cat. no. 103108, clone 30-F11), TCR (Biolegend, cat. no. 109243, clone H57-597), CD39 (Biolegend, cat. no. 143805, clone Duha59), Ki67 (Biolegend, cat. no. 652423, clone 16A8), CD11b (Biolegend, cat. no. 101243, clone M1/70), CD206 (Biolegend, cat. no. 141717, clone C068C2), Forkhead box protein P3 (FoxP3; Biolegend, cat. no. 126419, clone MF-14), IFN-γ receptor β chain (Biolegend, cat. no. 113605, clone MOB-47), CD119 (BD Bioscience, cat. no. 740897, clone GR20), and FITC-conjugated clone JES6-5H4 (Biolegend, cat. no. 503805) following the manufacturer’s instructions.
All antibodies were diluted 1:200, except those for FoxP3 and CD119, which were used at a 1:100 dilution. The stained cells were measured on a flow cytometer (Accuri C6 Plus, BD Biosciences) and analyzed with FlowJo software (FlowJo V10). The numbers presented in the flow cytometry analysis images are percentages. For cytokine studies, tissue samples were assayed in duplicate using the MSD Proinflammatory Panel I, a highly sensitive multiplex enzyme-linked immunosorbent assay (ELISA) with electrochemiluminescence-based detection (Meso Scale Discovery, Gaithersburg, MD), to quantitatively measure IFN-γ, interleukin (IL)-1β, IL-2, IL-4, IL-5, IL-6, IL-10, IL-12p70, TNF-α, KC/GRO, IL-9, IL-15, IP-10, MCP-1, MIP-1α, MIP-2, IL-17A/F, IL-27p28/IL-30, and IL-33. In vivo therapeutic efficacy The therapeutic effects of p53-mRNA NPs and their combined antitumor effect with anti-PD1 were evaluated in the p53 -null s.c. RIL-175 HCC tumor model, the p53 -null RIL-175 orthotopic tumor model, and the p53 -wild-type HCA-1 orthotopic tumor model. For the s.c. model, RIL-175 tumor-bearing C57Bl/6 mice (6–8 weeks old, n = 5 per group) were monitored for tumor growth every other day after tumor implantation; tumor size was measured using a digital caliper and volume was calculated as 0.5 × length × width 2 . When the tumor volume reached ~100 mm 3 , mice were randomly divided into five groups ( n = 5), which received treatment with PBS, CTCE-EGFP NPs, CTCE-p53 NPs, aPD1, or the combination of CTCE-p53 NPs and aPD1 according to the schedule in Supplementary Fig. 22a at an mRNA dose of 350 μg/kg animal body weight; aPD1 was administered i.p. at 100 μg per dose one day after each p53-mRNA NP treatment. Tumor growth was measured and calculated every 3 days. The body weights of all mice were recorded every three days during this period. Animals were euthanized upon showing signs of poor health or when tumor volume exceeded 1.0 cm 3 .
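The caliper-based volume formula and the euthanasia threshold above reduce to a few lines of code (the measurement values are illustrative):

```python
# Tumor volume from caliper measurements, using the formula stated above
# (0.5 x length x width^2), plus the 1.0 cm^3 euthanasia threshold check.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation used in the study: 0.5 x length x width^2."""
    return 0.5 * length_mm * width_mm ** 2

def exceeds_endpoint(volume_mm3: float, limit_mm3: float = 1000.0) -> bool:
    """Euthanasia threshold of 1.0 cm^3, i.e. 1000 mm^3."""
    return volume_mm3 > limit_mm3

volume = tumor_volume_mm3(10.0, 8.0)   # 0.5 * 10 * 64 = 320 mm^3
```

Note that length is the longer axis, so width squared weights the smaller dimension, which is why the formula underestimates a perfect sphere slightly.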
For the orthotopic HCC tumor model, tumor growth was monitored by high-frequency ultrasonography every 3 days. When the tumor reached ~5 mm in diameter, mice were randomly assigned to a treatment group ( n = 12). Treatments were administered according to the schedule in Fig. 3a . For a side-by-side comparison of in vivo survival between the combination of CTCE-p53 NPs with aPD1 and the new standard of care for HCC patients (i.e., anti-VEGFR2 antibody + aPD-L1 antibody) in the orthotopic RIL-175 tumor model, treatments were administered i.p. every 3 days for 4 doses at 10 mg/kg of aPD-L1 antibody (Bioxcell, #BE0101, clone 10F.9G2) and 10 mg/kg of anti-VEGFR-2 antibody (Bioxcell, #BE0060, clone DC101) (Supplementary Fig. 20a ). For survival studies, the endpoint was moribund status, defined as signs of prolonged distress, >15% weight loss compared with the starting date, body condition score >2, or tumor size >15 mm in diameter. Bioluminescence To further assess therapeutic efficacy, tumors were also monitored using an in vivo bioluminescence imaging system (Bruker Xtreme scanner). Mice were imaged for tumor growth by bioluminescent in vivo imaging every 6 days (days 0, 6, and 12); specifically, 8 minutes after intraperitoneal injection of 150 mg/kg D-luciferin substrate (PerkinElmer, Catalog #122799), mice from each treatment group ( n = 3) were imaged. Immunohistochemistry staining The expression of p53 protein and the presence of CD8 + cells in tumor tissue sections from the different in vivo treatment groups were assessed by immunohistochemistry. Tumor sections were fixed in 4% buffered formaldehyde solution and embedded in paraffin. Paraffin-embedded sections were deparaffinized, rehydrated, and washed in distilled water.
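Survival data collected against the moribund endpoint above are conventionally summarized with a Kaplan-Meier estimator. The study's own analyses use Prism; this pure-Python sketch and its data are illustrative only, with event = 1 marking a mouse that reached the endpoint and event = 0 marking censoring:

```python
# Minimal Kaplan-Meier estimator sketch; times and events are illustrative,
# not study data.

def kaplan_meier(times, events):
    """Return (time, survival probability) pairs at each observed event."""
    paired = sorted(zip(times, events))
    at_risk = len(paired)
    surv = 1.0
    curve = []
    for t, e in paired:
        if e:                        # observed event: survival steps down
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                 # events and censorings both leave the risk set
    return curve

curve = kaplan_meier([5, 8, 10, 12], [1, 0, 1, 1])
```

Censored animals (e.g., those still alive at study end) reduce the risk set without stepping the curve down, which is what distinguishes this estimator from a naive fraction-surviving plot.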
For antigen retrieval, tumor tissue sections were incubated in 10 mM citrate buffer (pH 6) for 30 min, washed in PBS, immersed in 0.3% hydrogen peroxide (H 2 O 2 ) for 20 min, and then incubated in blocking buffer (5% normal goat serum and 1% BSA) for 60 min. Tissue sections were then incubated with the appropriate primary antibodies (in PBS supplemented with 0.3% Triton X-100) at 4 °C overnight in a humid chamber. After rinsing with PBS, the samples were incubated with biotinylated secondary antibody at room temperature for 30 min, rinsed again with PBS, and incubated with the avidin-biotin-horseradish peroxidase complex (ABC kit, Vector Laboratories, Inc). After a further wash, staining was developed with the diaminobenzidine peroxidase substrate kit (Impact DAB, Vector Laboratories, Inc) for 3 min. Sections were counterstained with hematoxylin (Sigma), dehydrated, mounted, and evaluated using a Leica microscope (Leica Microsystems). In vivo toxicity evaluation The in vivo toxicity of p53-mRNA NPs was comprehensively studied in both the p53 -null s.c. graft HCC tumor model and the p53 -null orthotopic HCC tumor model. In brief, the major organs were harvested at the endpoint, sectioned, and H&E stained to evaluate histological differences. In addition, blood was drawn and serum was isolated at the end of the in vivo efficacy experiment. Parameters including ALT, AST, BUN, RBC, WBC, Hb, MCHC, MCH, HCT, and LY were measured to evaluate toxicity. Statistical analysis A two-tailed Student’s t-test or a one-way analysis of variance (ANOVA) was performed when comparing two groups or more than two groups, respectively. Statistical analysis was carried out using Prism 8.0 (GraphPad) and Microsoft Excel. Data are expressed as mean ± standard deviation (s.d.) or mean ± standard error of the mean (s.e.m.) as described in the main text.
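The stated decision rule (unpaired two-tailed t-test for two groups, one-way ANOVA beyond that) and the asterisk convention used in the figures can be sketched without external dependencies. The t statistic below uses a pooled variance, and the group values are illustrative:

```python
import math
from statistics import mean, variance

def t_statistic(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def significance_stars(p: float) -> str:
    """Map a P value to the asterisk convention used in the figures."""
    for threshold, stars in [(0.0001, "****"), (0.001, "***"),
                             (0.01, "**"), (0.05, "*")]:
        if p < threshold:
            return stars
    return "ns"

t = t_statistic([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

Converting the t statistic to a P value requires the t distribution's CDF (handled by Prism in the study), so only the statistic itself is sketched here.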
Differences were considered significant if P < 0.05 (* P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001 unless otherwise indicated). All studies were performed at least in triplicate unless otherwise stated. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The authors declare that all data supporting the findings of this study are available within the Article, Supplementary Information or Source Data file. Source data are provided with this paper.
A team of researchers from Massachusetts General Hospital (MGH) and Brigham and Women's Hospital (BWH) has reprogrammed the tumor microenvironment of liver cancer by using mRNA nanoparticles. This technology, similar to the one used in COVID-19 vaccines, restored the function of the p53 master regulator gene, a tumor suppressor mutated in not just liver but also other types of cancer. When used in combination with immune checkpoint blockade (ICB), the p53 mRNA nanoparticle approach not only induced suppression of tumor growth but also significantly increased antitumor immune responses in hepatocellular carcinoma (HCC) laboratory models. The results of the study were published in Nature Communications. "The reprogramming of the cellular and molecular components of the tumor microenvironment could be a transformative approach for treating HCC and other cancers," says co-senior author Jinjun Shi, Ph.D., with the Center for Nanomedicine at BWH, who developed the platform with MGH liver cancer biologist and co-senior author Dan G. Duda, DMD, Ph.D. "By using this new approach, we're targeting specific pathways in tumor cells with mRNA nanoparticles. These tiny particles provide the cells with the instructions to build proteins, which, in the case of HCC, delayed tumor growth and rendered the tumor more responsive to treatment with immunotherapy." HCC is the most prevalent form of liver cancer, characterized by a high mortality rate and dismal prognosis for patients. Immune checkpoint blockers, a revolutionary new class of drugs that enable the body's immune system to recognize and attack cancer cells, have shown efficacy in treating HCC, but most patients do not benefit. To overcome this resistance, multiple strategies are being developed to improve ICBs by combining them with other existing therapies, such as anti-VEGF drugs and radiotherapy. 
However, even these approaches are expected to benefit only a small number of patients, creating an urgent need for new combination therapies. Encouraged by the success of mRNA in COVID-19 vaccines, Shi decided to apply the technology (with certain modifications) to targeting cancer cells. He teamed up with Duda, whose MGH lab had already created sophisticated animal models to analyze the microenvironment of liver tumors in response to immunotherapy. They developed and optimized an mRNA nanoparticle strategy to restore loss of function of p53, a tumor suppressor gene whose function is lost in more than one-third of HCC cases. In doing so, they uncovered evidence that p53 regulates the tumor microenvironment by modulating the interaction of cancer cells with immune cells as part of ICB therapy. "In our previous work we had developed nanoparticles to target CXCR4—a chemokine receptor expressed by liver cancer cells—and selectively co-deliver drugs such as kinase inhibitors," explains Duda. "We've now adapted this platform to use CXCR4 as a kind of ZIP code to selectively target the tumor with nanoparticles encapsulating therapeutic mRNAs. When we combined this nanomedicine with anti-programmed death receptor 1 (PD-1) antibodies, a standard immunotherapy for HCC patients, it induced global reprogramming of the tumor microenvironment and tumor response by restoring p53 expression." The next step for the team is to transfer their research from animal models to patients in a clinical trial. "Scientists have struggled for decades to find an effective way to target the tumor suppressor pathways," emphasizes Shi. "Our proof-of-concept study is an exciting development that clearly shows that p53 mRNA nanoparticles in combination with ICB not only works, but also could make a big difference by reversing immunosuppression in HCC and potentially other cancers." Shi is an associate professor of Anesthesia at Harvard Medical School (HMS). 
Duda is associate professor of Radiation Oncology at HMS and director of translational research in GI radiation oncology at MGH. Yuling Xiao, Ph.D., and Jiang Chen, MD, Ph.D., are the lead authors of the study and postdoctoral fellows at HMS.
10.1038/s41467-022-28279-8
Medicine
A new pathway to shrink cancerous tumors through body's immune cells
Lydia N. Raines et al, PERK is a critical metabolic hub for immunosuppressive function in macrophages, Nature Immunology (2022). DOI: 10.1038/s41590-022-01145-x Journal information: Nature Immunology
https://dx.doi.org/10.1038/s41590-022-01145-x
https://medicalxpress.com/news/2022-04-pathway-cancerous-tumors-body-immune.html
Abstract Chronic inflammation triggers compensatory immunosuppression to stop inflammation and minimize tissue damage. Studies have demonstrated that endoplasmic reticulum (ER) stress augments the suppressive phenotypes of immune cells; however, the molecular mechanisms underpinning this process and how it links to the metabolic reprogramming of immunosuppressive macrophages remain elusive. In the present study, we report that the helper T cell 2 cytokine interleukin-4 and the tumor microenvironment increase the activity of a protein kinase RNA-like ER kinase (PERK)-signaling cascade in macrophages and promote immunosuppressive M2 activation and proliferation. Loss of PERK signaling impeded mitochondrial respiration and lipid oxidation critical for M2 macrophages. PERK activation mediated the upregulation of phosphoserine aminotransferase 1 (PSAT1) and serine biosynthesis via the downstream transcription factor ATF-4. Increased serine biosynthesis resulted in enhanced mitochondrial function and α-ketoglutarate production required for JMJD3-dependent epigenetic modification. Inhibition of PERK suppressed macrophage immunosuppressive activity and could enhance the efficacy of immune checkpoint programmed cell death protein 1 inhibition in melanoma. Our findings delineate a previously undescribed connection between PERK signaling and PSAT1-mediated serine metabolism critical for promoting immunosuppressive function in M2 macrophages. Main Macrophages, a critical component of the innate immune system, are a group of heterogeneous cells present in all tissues. Owing to this wide distribution, macrophages are uniquely poised to carry out processes essential for human health, ranging from pathogen clearance and tissue repair to the maintenance of homeostasis 1 , 2 . The ability of macrophages to serve these functions reflects their ability to execute disparate cellular programs in response to distinct extracellular cues.
As a result, immunosuppressive (M2) and proinflammatory (M1) macrophages represent two distinct polarization phenotypes in response to either tumor and helminthic insults or bacterial and viral infection 3 . Moreover, the revitalization of immunometabolism and epigenetics research has uncovered new insights into these polarization phenotypes, revealing major and largely nonoverlapping alterations in gene expression that are closely associated with distinctive metabolic pathways 4 , 5 . These distinct phenotypes depend on cues from the surrounding microenvironment, and inflammatory milieus are known to impose stress signals that affect the energetic demands and cellular fitness of infiltrating immune cells 6 , 7 . However, to induce phenotypic changes, these signals must be incorporated and translated intracellularly. The major organelle responsible for coordinating extrinsic challenges and intrinsic cellular demands is the ER, where the progression of inflammatory diseases can provoke the unfolded protein response (UPR). The UPR is commonly associated with the maintenance of proteostasis; however, recent findings show that activation of the UPR is linked to the development and function of immune cells 8 , 9 , 10 , including dendritic cells 11 , 12 , myeloid-derived suppressor cells (MDSCs) 13 and also T cells 14 , 15 . The UPR signaling cascade is primarily initiated by the type I transmembrane kinase inositol-requiring enzyme-1α (IRE1α), the type II transmembrane protein activating transcription factor (ATF) 6 and PERK (encoded by Eif2ak3 ) 16 . Recent studies have suggested that IRE1α-mediated, X-box-binding protein (XBP1) signaling plays a crucial role in macrophages during inflammatory diseases 17 , 18 . Yet these studies have reached inconclusive or contradictory conclusions.
This raises an important question about whether other arms of the UPR contribute to the metabolic adaptation necessary to support the immunosuppressive characteristics of macrophages. Activated PERK phosphorylates the downstream mediator eukaryotic translation initiation factor 2α (eIF2α) 16 , leading to the induction of stress-responsive ATF-4 activation 19 . PERK signaling induces mitochondrial function 20 , whereas ATF-4 activation has been suggested to upregulate a set of targets involved in amino acid anabolism 21 . In the present study, we show that the PERK arm of the UPR is uniquely upregulated in macrophages responding to the helper T cell 2 (T H 2) cytokine interleukin-4 (IL-4) and also the tumor microenvironment (TME). This PERK signaling modality promotes mitochondrial respiration to fulfill cellular energy requirements while also signaling through ATF-4 to regulate PSAT1 activity to mediate the serine biosynthesis pathway. The process of PSAT1-mediated serine synthesis, in addition to supporting mitochondrial fitness, balances the production of α-ketoglutarate (α-KG) necessary for JMJD3-dependent histone demethylation and reinforces immunosuppressive M2 activation and cell expansion. These results highlight a previously uncharacterized role for PERK in cellular metabolism and epigenetic modification in M2 macrophages, and our findings may offer a new strategy for counteracting the immunosuppressive effects of M2 macrophages in human diseases. Results PERK supports macrophage immunosuppression To investigate the role of the ER stress response in immunosuppressive M2 macrophages, we first analyzed publicly available microarray and single-cell RNA-sequencing (RNA-seq) data and performed gene set enrichment analysis (GSEA) of IL-4/anti-IL-4 antibody complex (IL-4c)-treated mouse peritoneal macrophages (accession no. GSE54679 ) 22 and tumor-associated macrophages (TAMs) from patients with lung carcinoma (accession no. GSE97168 ) 23 . 
Our data indicated that, under IL-4 stimulation (Extended Data Fig. 1a ) and within the TME (Extended Data Fig. 1b ), macrophages upregulated genes associated with an ER stress response. By analyzing our RNA-seq dataset (accession no. GSE53053 ) 24 , we found that the PERK arm of the ER stress response was markedly induced by bone marrow-derived macrophages (BMDMs) after stimulation with IL-4 compared with naive (M0) and proinflammatory (M1) macrophages (Fig. 1a ). Moreover, we observed a positive correlation between CD68 messenger RNA of tumor macrophages and individual gene transcripts ( HSP5A , EIF2A , NFE2L2 and ATF4 ) of the PERK-signaling axis in different human cancer patient samples from The Cancer Genome Atlas (TCGA) program, including colon adenocarcinoma, lung adenocarcinoma and pancreatic ductal adenocarcinoma (Extended Data Fig. 1c ), suggesting that the activation of PERK may be required to support an immunosuppressive M2 phenotype. To confirm this, we stimulated BMDMs with the T H 2 cytokine IL-4 or assessed TAMs from animals bearing B16-F10 melanoma. We found that both IL-4-stimulated macrophages and TAMs exhibited a higher percentage of activated (phosphorylated) PERK protein compared with naive BMDMs and splenic macrophages from melanoma tumor-bearing mice, respectively (Fig. 1b,c and Extended Data Fig. 1d ). Of note, a conventional ER stress inducer, thapsigargin, could induce the phosphorylation of PERK (Extended Data Fig. 1d,e ), but was incapable of driving polarization toward an immunosuppressive phenotype in macrophages (Extended Data Fig. 1f ), suggesting that PERK activation itself is not sufficient to induce M2 polarization, but, rather, an initiating factor such as IL-4 or the TME is also necessary. Fig. 1: PERK stress signaling promotes an immunosuppressive phenotype in macrophages. 
a , Expression of genes encoding molecules involved in the PERK arm of the ER stress response in M0 (naive), M1 (LPS + IFN-γ) and M2 (IL-4) BMDMs, assessed by RNA-seq analysis. b , Geometric mean of fluorescence intensity (gMFI) of p-PERK + BMDMs cultured for 24 h with IL-4, measured by flow cytometry ( n = 3, mean ± s.e.m). Data are collected from three independent experiments. c , The gMFI of p-PERK + macrophages in the tumor and spleen from B16-F10 tumor-bearing mice ( n = 4 mice per group). Data represent two independent experiments. d , Expression of CD206, CD301, PD-L2 and Relmα in M0- and IL-4-treated BMDMs from Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre mice ( n = 3, mean ± s.e.m). Data are collected from three independent experiments. e , Representative histogram (left) and quantitative plot (right) of Relmα + peritoneal macrophages in mice after treatment with IL-4c ( n = 4 mice per group, mean ± s.e.m). Each symbol represents one individual. Data represent two independent experiments. f , Absolute number of peritoneal macrophages ( n = 4 for Eif2ak3 fl/fl and n = 3 for Eif2ak3 fl/fl × LysM Cre mice; mean ± s.e.m). Each symbol represents one individual. Data represent two independent experiments. g , Representative histogram (left) and quantitative plot (right) of Ki67 + peritoneal macrophages in mice after treatment with IL-4c ( n = 4 for Eif2ak3 fl/fl and n = 3 Eif2ak3 fl/fl × LysM Cre mice; mean ± s.e.m). Each symbol represents one individual. Data represent two independent experiments. h , Representative expression of CD206 and CD301 by PERK wild-type or knockout BMDMs cocultured with either B16-F10 melanoma cells (top) or LLCs (bottom) for 72 h ( n = 4, mean ± s.e.m). Data represent two independent experiments. i , Proliferation of CTV-labeled CD8 OT-I T cells activated with anti-CD3 and anti-CD28, and cocultured with PERK wild-type or PERK-null BMDMs stimulated with IL-4 (M2) or LPS + IFN-γ (M1) at a ratio of 1:10 for 72 h ( n = 2, mean ± s.e.m). 
Data represent two independent experiments. j , k , Tumor growth ( Eif2ak3 fl/fl , n = 15; Eif2ak3 fl/fl × LysM Cre, n = 15; mean ± s.e.m) ( j ) and tumor weight ( k ) of B16-F10 melanoma. Data were taken from tumors harvested on either day 10 (D10) or day 16 (D16) post-tumor transplantation ( n = 5 mice per group D10; n = 15 mice per group D16; mean ± s.e.m). Each symbol represents one individual. Data represent at least two independent experiments. l , m , Absolute number of TAMs ( l ) and frequency of CD206 + TAMs ( m ) in Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre tumor-bearing mice from either D10 or D16 tumors ( n = 4 per group D10 or n = 15 per group D16; mean ± s.e.m). Each symbol represents one individual. Data represent at least two independent experiments. n – p , Absolute number of TILs ( n ) and frequency of IFN-γ + CD4 ( o ) and IFN-γ + CD8 ( p ) T cells in Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre tumor-bearing mice from either D10 or D16 tumors ( n = 4 per group, mean ± s.e.m). Each symbol represents one individual. Data represent two independent experiments. q , Survival analysis between Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre mice bearing B16-F10 melanoma ( n = 8 per group, mean ± s.e.m); data are collected from two independent experiments. All data were analyzed using a two-tailed, unpaired Student’s t -test ( b – c , g – p ) or a Mantel–Cox test for survival ( q ). Source data Full size image Treatment with the selective PERK inhibitor GSK2656157 could significantly inhibit M2 polarization as measured by the expression of the canonical M2 markers CD206 and CD301 (Extended Data Fig. 1g ). To further study the intrinsic effect of PERK in macrophages, we generated myeloid-specific conditional knockout mice deficient in Eif2ak3 by crossing Eif2ak3 fl/fl with LysM Cre mice ( Eif2ak3 fl/fl × LysM Cre; designated PERK cKO ). We observed that PERK deficiency did not affect cell viability in naive macrophages (Extended Data Fig.
1h ), yet PERK-null macrophages upon IL-4 stimulation were significantly hindered in M2 polarization both in vitro (Fig. 1d ) and in vivo (Fig. 1e ). Importantly, we found that PERK supports the proliferative capacity of peritoneal macrophages in this T H 2 setting, because increases in cell number and Ki67 expression were greatly inhibited in PERK-deficient peritoneal macrophages (Fig. 1f,g ). In contrast, macrophages classically activated with lipopolysaccharide (LPS) + interferon-γ (IFN-γ) (M1 macrophages) exhibited low expression of PERK (Fig. 1a and Extended Data Fig. 1i ). Similar to a reported study 25 , PERK deficiency (PERK cKO ) had no impact on the proinflammatory expression of inducible nitric oxide synthase (iNOS) and tumor necrosis factor (TNF) (Extended Data Fig. 1j,k ). Furthermore, loss of PERK did not ‘rewire’ M1 macrophages to exhibit an anti-inflammatory phenotype (Extended Data Fig. 1i,l ) 26 and deletion of PERK did not detrimentally affect the expression of XBP1s (Extended Data Fig. 1m ), which has been linked to macrophage immunity in obesity 17 and the TME 18 . ER stress responses have been implicated in dysregulated dendritic cell antigen presentation 12 and T cell exhaustion 14 , 15 in the TME. We, therefore, sought to determine whether PERK activity is required to support the immunity of TAMs. PERK-deficient BMDMs cocultured with either melanoma cells (B16-F10) or Lewis lung carcinoma cells (LLCs) exhibited less M2 polarization compared with PERK-sufficient macrophages (Fig. 1h ). In addition, these PERK cKO M2 macrophages were less capable of blunting T cell proliferation (Fig. 1i ), suggesting that loss of PERK in macrophages can confer greater anti-tumor immunity. To test this, we transplanted wild-type ( Eif2ak3 fl/fl ) or PERK cKO mice with murine melanoma cells and measured tumor growth over the course of 16 days. Indeed, both tumor volume (Fig. 1j ) and tumor weight (Fig. 
1k ) were significantly lower in the PERK cKO mice compared with wild-type mice. Moreover, reduced numbers of macrophages infiltrating into the tumor were found in the PERK cKO compared with the control mice (Fig. 1l ), corresponding to a significant decrease in CD206 positivity (Fig. 1m ). Conversely, higher numbers of tumor-infiltrating T lymphocytes (TILs) (Fig. 1n ) and an increased frequency of IFN-γ-expressing CD4 + and CD8 + T cells (Fig. 1o,p ) were found in tumors from PERK cKO mice. Intriguingly, this inverse relationship between TAMs and TILs was detected as early as 10 d post-tumor implantation (Fig. 1k–p ) and this improved anti-tumor immunity corresponded with a noticeable increase in survival (Fig. 1q ). Together, these data indicate that PERK activation promotes an immunosuppressive M2 phenotype in macrophages and that loss of PERK not only inhibits this phenotype but also restores anti-tumor activity. Integrated stress response in macrophage activation UPR signaling partially overlaps with the integrated stress response (ISR), in which PERK is uniquely positioned to coordinate both pathways by phosphorylating eIF2α 27 . Inhibition of eIF2α signaling using the selective ISR inhibitor ISRIB significantly repressed M2 polarization without altering iNOS expression in macrophages (Extended Data Fig. 2a–c ). Yet, in addition to PERK, the ISR kinases general control nonderepressible 2 (GCN2), double-stranded RNA-dependent protein kinase (PKR) and heme-regulated eIF2α kinase (HRI) can respond and adapt to different environmental and pathological challenges to maintain cell homeostasis 28 . To determine whether anti-inflammatory M2 macrophages require other ISR support, we examined HRI, PKR and GCN2 in our RNA-seq data and found that only the expression of GCN2 was significantly elevated in M2 macrophages (Extended Data Fig. 2d ).
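Group comparisons of this kind are made throughout with a two-tailed, unpaired Student's t-test (as stated in the figure legends). A minimal illustrative sketch of such a comparison; the expression values below are hypothetical, not the study's data:

```python
from scipy import stats

# Hypothetical normalized GCN2 expression values (a.u.); illustrative only.
naive_m0 = [4.1, 3.8, 4.3, 4.0]   # naive macrophages
il4_m2 = [7.9, 8.4, 7.6, 8.1]     # IL-4-stimulated (M2) macrophages

# Two-tailed, unpaired Student's t-test (equal variances assumed).
t_stat, p_value = stats.ttest_ind(il4_m2, naive_m0)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

With clearly separated groups such as these, the test returns a large t statistic and a small two-tailed p value.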
Interestingly, deletion of GCN2 did not decrease the expression of CD206, CD301, programmed cell death 1 ligand 2 (PD-L2) and Relmα in macrophages in response to IL-4 (Extended Data Fig. 2e,f ), nor adversely affect iNOS and TNF expression in M1 macrophages (Extended Data Fig. 2g,h ). Moreover, Gcn2 −/− macrophages exhibited only slight differences in metabolic function compared with Gcn2 +/+ controls responding to IL-4 and LPS + IFN-γ (Extended Data Fig. 2i,j ). Thus, among the ISR kinases, PERK signaling alone accounts for M2 activation. PERK signaling is crucial for metabolic reprogramming As metabolic reprogramming has been suggested to modulate the suppressive function of macrophages 29 , we next examined whether PERK-deficient M2 macrophages fail to sustain appropriate metabolic reprogramming. To test this, we performed an unbiased RNA-seq analysis of M2 macrophages from wild-type and PERK cKO mice. Analysis of RNA-seq data by pathway enrichment and gene ontology (GO) of differentially expressed genes showed that genetic ablation of PERK caused significant dysregulation of numerous pathways in macrophages, including lysosomal function, mitochondrial oxidative phosphorylation (OXPHOS), lipid metabolism, glutamine metabolism and amino acid synthesis (Fig. 2a,b )—all processes that have been deemed essential for supporting the function of M2 macrophages 24 , 30 , 31 . We then used targeted metabolomics analysis and confirmed that cellular metabolite levels clearly distinguished PERK-deficient M2 macrophages from wild-type controls (Fig. 2c,d ); the PERK-deficient cells displayed a marked decline in the generation of mitochondrial metabolites, histidine/pyrimidine intermediates and several crucial amino acids (Fig. 2e ). Fig. 2: Multivariate analysis of transcriptomics and metabolomics data.
a , b , GSEA performed and enrichment scores shown for Kyoto Encyclopedia of Genes and Genomes (KEGG) ( a ) and GO ( b ) pathway enrichment in PERK knockout M2 macrophages compared with PERK wild-type M2 macrophages ( n = 2 independent experiments). NES, normalized enrichment score. c – e , Metabolomics profiling performed and principal component analysis score plot ( c ), heatmap analysis of metabolites ( d ) and KEGG pathway enrichment in PERK knockout M2 macrophages compared with PERK wild-type M2 macrophages ( e ) ( n = 4 independent experiments). Full size image Although lipid metabolism and mitochondrial fitness have been explicitly tied to M2 macrophage immunity, the initiating factor for these processes has not been fully elucidated. Inhibition of PERK signaling suppressed mitochondrial respiration (oxygen consumption rate (OCR); Fig. 3a and Extended Data Fig. 2k ) and reduced overall ATP production in M2 macrophages (Fig. 3b ). PERK deficiency also reduced glycolytic metabolism (extracellular acidification rate (ECAR)) to levels similar to those of naive macrophages (Extended Data Fig. 3a ). This scenario could be recapitulated by the PERK inhibitor GSK2656157 (Extended Data Fig. 3c–e ). Although PERK deficiency did not curb the proinflammatory activity of macrophages stimulated with LPS + IFN-γ, it slightly reduced glycolytic metabolism but not mitochondrial respiration (Extended Data Fig. 3a,b ). Furthermore, our transcriptomic and real-time quantitative PCR (RT-qPCR) analysis showed that a number of key genes responsible for lipid metabolism and mitochondrial respiration were markedly downregulated by PERK inhibition (Fig. 3c,d ). This corresponded to a significant decrease in lipid uptake (Fig. 3e and Extended Data Fig. 3f ) and lipolysis (Fig. 3f and Extended Data Fig. 3g ) in M2 macrophages on IL-4 stimulation. Mitochondria are the major energy-generating source for the cell.
This energy is produced within tightly organized membrane folds called cristae: the tighter and more densely packed the cristae, the greater the surface area on which the respiratory complexes of the electron transport chain (ETC) can assemble. Thus, for cells such as M2 macrophages, which rely on enhanced mitochondrial function, the number and size of mitochondria available, as well as the density of the cristae within these mitochondria, are imperative for supporting their immunological functions. Importantly, it has been suggested that PERK signaling regulates mitochondrial morphology 32 and abnormal mitochondrial cristae ultrastructure is strongly correlated with dysfunction in mitochondrial respiratory capacity 33 . Using transmission electron microscopy (TEM), we found that PERK cKO M2 macrophages displayed overall lower numbers of mitochondria, which were also smaller compared with those of wild-type cells (Extended Data Fig. 3h ). In comparison to wild-type, PERK cKO macrophages also appeared to exhibit disorganized cristae (Fig. 3g ), with fewer cristae overall (Fig. 3h ) and wider cristae structure (Fig. 3i ) within the intermembrane space. In addition, mitochondrial mass (Fig. 3j and Extended Data Fig. 3i ) and membrane potential (Fig. 3k and Extended Data Fig. 3j ) were both significantly reduced in PERK cKO macrophages or cells treated with GSK2656157. Given the altered cristae morphology, we further investigated whether the complexes of the ETC were affected by PERK deficiency. We found that PERK cKO M2 macrophages had significantly reduced levels of mitochondrial ETC gene and protein expression (Fig. 3l,m ). These data suggest that these cristae differences contribute to an altered metabolic phenotype between PERK wild-type and deficient macrophages. Fig. 3: PERK activity is essential for metabolic reprogramming in M2 macrophages. a , Basal OCR of IL-4-stimulated BMDMs (M2) from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre ( n = 5, mean ± s.e.m).
Data were collected from five independent experiments. b , ATP levels of IL-4-stimulated BMDMs from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice ( n = 3, mean ± s.e.m). Data are from three independent experiments. c , Expression of genes encoding CD36, lysosomal acid lipase (LIPA), peroxisome proliferator-activated receptor γ (PPAR-γ), PPAR-γ coactivator-1β (PGC-1β) and acetyl-CoA acetyltransferase 1 (ACAT1) in M2 (IL-4) macrophages from Eif2ak3 fl/fl and Eif2ak3 fl/fl × LysM Cre mice, assessed by RNA-seq analysis. d , Expression of CD36, LIPA, PPAR-γ and PGC-1β in PERK wild-type and deficient macrophages stimulated with IL-4 by RT-qPCR analysis ( n = 4, mean ± s.e.m). Data represent two independent experiments. a.u., arbitrary units. e , Representative histogram (left) and quantitative plot (right) of BODIPY FL C 16 staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments. f , Representative histogram (left) and quantitative plot (right) of BODIPY (493/503) staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments. g , Representative images of mitochondria and cristae (red arrows) from TEM of IL-4-stimulated BMDMs from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice. Scale bar, 500 nm. h , i , Measurements of cristae area ( h ) and cristae width ( i ) as determined using ImageJ. Each dot represents the average of all mitochondria from one cell ( n = 22, mean ± s.e.m). Data represent two biological replicates. j , Representative histogram (left) and quantitative plot (right) of MitoTracker Green + staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments. k , Representative histogram (left) and quantitative plot (right) of MitoTracker Orange + staining in BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). Data represent three independent experiments.
l , Expression of genes encoding molecules involved in the ETC reaction in M2 (IL-4) macrophages from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice, assessed by RNA-seq analysis. m , Immunoblots of mitochondrial proteins ATP5A, UQCRC2, MTCO1, SDHB, NDUFB8 and PHB1 from PERK-sufficient or -deficient BMDMs stimulated with IL-4. Data represent three independent experiments. n , Mitochondrial calcium flux (Rhod-2) from PERK-sufficient or -deficient BMDMs treated with IL-4, normalized to wild-type naive (M0) macrophages. The arrow represents stimulation using 10 μM ionomycin ( n = 6, mean ± s.e.m); data represent three independent experiments. All data were analyzed using a two-tailed, unpaired Student’s t -test ( a , b , d – f, h – k ) or a two-tailed, paired Student’s t-test. Source data Full size image Crosstalk between the ER and the mitochondria has been demonstrated to promote bioenergetics in cells 34 and is thought to be mediated by calcium signaling. We noted that the expression of genes encoding mitochondrial calcium transporters was downregulated in PERK cKO M2 cells (Extended Data Fig. 3k ). Furthermore, we assessed the mitochondrial calcium flux in PERK-intact or -deficient M2 BMDMs and found that PERK ablation resulted in a profound reduction of calcium flux within the mitochondria (Fig. 3n and Extended Data Fig. 3l ). Collectively, these findings suggest that suppression of PERK signaling may differentially disrupt mitochondrial homeostasis by preventing adequate crosstalk between the ER and mitochondria. This, in turn, may prevent M2 macrophages from producing sufficient energy to fully sustain their immunosuppressive function. PERK induces serine biosynthesis via ATF-4 Proliferating cells, including M2 macrophages, require ATP to build biomass, as well as cellular building blocks such as nucleotides for genome replication, lipids for membrane integrity and amino acids for protein biosynthesis.
Notably, however, the pentose phosphate pathway, a branch of glycolysis that is crucial to the needs of rapidly dividing cells, is markedly lower in M2 compared with M1 macrophages 35 . It has been shown that the serine biosynthesis pathway (SBP) can provide one-carbon units for de novo nucleotide biosynthesis, supporting the expansion of T cells 36 and tumor cells 37 . We therefore hypothesized that M2 macrophages may also use SBP to support the biosynthesis of various macromolecules required for cellular proliferation and function. Intriguingly, we observed that deletion of PERK significantly reduced the metabolism of amino acids (that is, serine, glycine and threonine) and of histidine/pyrimidine intermediates in macrophages responding to IL-4 (Fig. 2 ), suggesting a new role for PERK in tuning the SBP in M2 macrophages. Using GSEA, we compared serine/glycine one-carbon metabolism by IL-4-stimulated mouse peritoneal macrophages versus naive peritoneal macrophages 22 and TAMs versus nontumor macrophages from patients with lung cancer 23 . Our data revealed a significant enrichment of this pathway in both IL-4-stimulated macrophages and TAMs (Fig. 4a,b ). Next, we performed mass spectrometry (MS)-based metabolite profiling and found that the levels of intracellular 3-phosphoglycerate, serine and glycine were significantly elevated in IL-4-stimulated M2 cells compared with M1 macrophages (Fig. 4c ). We also used targeted metabolomics analysis and found that a number of metabolites associated with the serine/glycine one-carbon metabolic pathway were significantly reduced in PERK cKO M2 macrophages compared with wild-type (Fig. 4d ). Deletion of PERK not only negatively impacted intrinsic serine levels, but also surprisingly downregulated the transcript levels of the key SBP genes ( Phgdh , Psat1 and Psph ) 38 induced by M2 macrophages (Fig. 4e,f and Extended Data Fig. 4a ).
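The GSEA used above ranks all genes by differential expression and walks down the list, accumulating a weighted running sum that rises on gene-set members and falls otherwise; the enrichment score is the maximum deviation of that sum. A simplified sketch of the core statistic (the gene names and scores below are toy values, not the study's actual ranking):

```python
import numpy as np

def enrichment_score(ranked_genes, gene_set, scores, p=1.0):
    """Simplified GSEA running-sum enrichment score.

    ranked_genes: genes sorted by decreasing differential-expression score.
    scores: the matching scores; |score|^p weights each gene-set hit.
    """
    hits = np.array([g in gene_set for g in ranked_genes])
    n, nh = len(ranked_genes), hits.sum()
    weights = np.abs(np.asarray(scores, float)) ** p
    hit_step = np.where(hits, weights / weights[hits].sum(), 0.0)
    miss_step = np.where(hits, 0.0, 1.0 / (n - nh))
    running = np.cumsum(hit_step - miss_step)
    return running[np.argmax(np.abs(running))]

# Toy ranked list: serine-pathway genes concentrated at the top of the
# ranking, so the enrichment score approaches 1.
ranked = ["Phgdh", "Psat1", "Shmt2", "GeneA", "GeneB",
          "GeneC", "GeneD", "GeneE", "GeneF", "GeneG"]
scores = [3.2, 2.9, 2.5, 1.1, 0.8, 0.5, 0.3, -0.2, -0.7, -1.4]
es = enrichment_score(ranked, {"Phgdh", "Psat1", "Shmt2"}, scores)
```

In practice, GSEA software additionally permutes labels to attach a normalized enrichment score and false discovery rate to this statistic.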
Similarly, inhibition of the PERK downstream target, eIF2α, could decrease the production of intracellular serine (Extended Data Fig. 2l ). It has been separately reported in other cells that activated PERK signaling induces the translational activation of ATF-4 (ref. 39 ) and that ATF-4 regulates a set of targets involved in serine/glycine metabolism and mitochondrial function 21 , 40 ; however, no studies have determined whether PERK signals through ATF-4 to support SBP in macrophages. We, therefore, investigated the interconnection between PERK–ATF-4 signaling and SBP in M2 macrophages by first analyzing a chromatin immunoprecipitation sequencing (ChIP-seq) dataset of ATF-4–DNA binding in M0 versus M2 macrophages obtained from a previously published study (see accession no. GSE140029 ) 41 . We found that ATF-4-bound DNA was enriched in M2 compared with M0 (histogram, Fig. 4g ); and, importantly, the enriched ATF-4 target gene loci were associated with downstream PERK signaling ( Nfe2l2 and Ddit3 ) and also serine one-carbon metabolism ( Psat1 , Shmt2 , Mthfr , Mthfd1l and Mthfd2 ) in M2 macrophages (Fig. 4g ). We also found that ATF-4 could bind to genes involved in the regulation of mitochondrial respiration and lipid metabolism ( Uqcrq , Atp9b , Ndufa4l2 , Pparg and Abca1 ). Moreover, PERK ablation reduced the protein expression of ATF-4, phosphoglycerate dehydrogenase (PHGDH) and PSAT1 (Fig. 4h ). We next generated retrovirally mediated short hairpin (sh)RNA against ATF-4 to further understand the interrelationship of PERK, ATF-4 and M2 activation. Similar to the PERK cKO , ATF-4 knockdown reduced the expression of ATF-4, PHGDH and PSAT1 (Fig. 4i,j ), and also suppressed the IL-4-induced expression of CD206, CD301, PD-L2 and Relmα (Fig. 4k ). Overexpression of ATF-4 in PERK cKO macrophages could rescue the defective M2 activation and metabolic function (basal OCR and basal ECAR) in BMDMs stimulated with IL-4 (Fig. 4l–n ).
Together, our results strongly imply that activated PERK regulates ATF-4 to reprogram cellular metabolic networks tailoring M2 activation. Fig. 4: PERK regulates intrinsic serine biosynthesis via ATF-4. a , Enrichment plot of serine glycine one-carbon metabolism (SGOC) genes in IL-4c-treated mouse peritoneal macrophages compared with naive (phosphate-buffered saline, PBS) macrophages by GSEA. FDR, false discovery rate. b , GSEA result comparing SGOC genes between TAMs and nontumor macrophages from patients with lung carcinoma. c , Abundance of 3-phosphoglycerate, serine and glycine in extracts of BMDMs cultured for 24 h in M1- (LPS + IFN-γ) or M2 (IL-4)-stimulating conditions, assessed by MS ( n = 4, mean ± s.e.m). Data are collected from four independent experiments. d , Targeted metabolomics profiling indicated metabolites from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre BMDMs stimulated with IL-4 ( n = 4 independent experiments). e , Intracellular serine levels in extracts of PERK wild-type or knockout BMDMs treated with IL-4 ( n = 3, mean ± s.e.m). The serine level from naive (M0) wild-type macrophages is indicated by a dotted horizontal line. Data represent two independent experiments. f , Expression of genes encoding PHGDH and PSAT1 in M2 (IL-4) macrophages from Eif2ak3 fl/fl or Eif2ak3 fl/fl × LysM Cre mice, assessed by RNA-seq analysis. g , Comprehensive heatmap of ATF-4-binding regions by ChIP-seq. TSS, transcription start site. h , Immunoblot analysis of PERK, ATF-4, PHGDH, PSAT1 and β-actin in macrophages from PERK wild-type or deficient BMDMs. Data represent three independent experiments. i , j , RT-qPCR ( i ; n = 4, mean ± s.e.m) and immunoblot ( j ) analysis of ATF-4, PHGDH, PSAT1 and β-actin in BMDMs transduced with retrovirus-expressing ATF-4 or luciferase (Luc) shRNA, and stimulated with IL-4. Data represent two independent experiments. a.u., arbitrary units. 
k , Expression of CD206, CD301, PD-L2 and Relmα in BMDMs transduced with retrovirus-expressing ATF-4 or Luc shRNA and stimulated with IL-4 ( n = 3, mean ± s.e.m). Data represent two independent experiments. l – n , PERK wild-type (WT) or deficient (KO) BMDMs were transduced with either retrovirus overexpressing a control reporter gene (EV) or a reporter gene plus the Atf4 sequence ( Atf4 O/E ), and stimulated with IL-4. Representative expression of CD206 and CD301 was determined by flow cytometry ( l ; n = 4, mean ± s.e.m), and basal OCR ( m ) and basal ECAR ( n ) were measured using Seahorse Flux analyzer ( n = 3, mean ± s.e.m). Data represent two independent experiments. All data were analyzed using a two-tailed, unpaired Student’s t -test ( c , e , i , k ) or a one-way ANOVA with Dunnett’s multiple comparisons test ( m and n ). Source data Full size image Serine biosynthesis contributes to M2 activation We next sought to determine whether serine metabolism was necessary for immunosuppressive M2 macrophages. We treated BMDMs with the pharmacological PHGDH inhibitor CBR-5884 (ref. 42 ) or retrovirally mediated shRNA targeting Phgdh (Extended Data Fig. 4b ) and Psat1 (Extended Data Fig. 4c ). Inhibition of SBP enzymes substantially decreased the expression of CD206, CD301, PD-L2 and resistin-like molecule α (Relmα; Fig. 5a and Extended Data Fig. 4d,e ). We also observed that inhibition of PHGDH via another selective inhibitor NCT-503 (ref. 43 ) strikingly suppressed Relmα + M2 activation (Fig. 5b ) and proliferation (Fig. 5c ) in macrophages on IL-4c administration in the mouse peritoneal cavity in vivo. To study the effect of serine metabolism in macrophages, we generated myeloid cell conditional Psat1 knockout animals by crossing Psat1 fl/fl with LysM Cre mice ( Psat1 fl/fl × LysM Cre; designated as PSAT1 cKO ).
Similar to the effects found with pharmacological inhibition and genetic knockdown, lower expression of CD206, CD301, PD-L2 and Relmα was detected in PSAT1 cKO BMDMs stimulated with IL-4 compared with those from wild-type ( Psat1 fl/fl ) mice (Fig. 5d,e ). In addition, PSAT1 deficiency neither affected M1 polarization nor skewed M1 macrophages toward an anti-inflammatory phenotype (Extended Data Fig. 4f,g ). In comparison with wild-type macrophages, PSAT1 cKO BMDMs cocultured with B16-F10 or LLCs exhibited a marked attenuation of the suppressive M2 phenotype (Fig. 5f ) and showed less capacity to restrain T cell proliferation in vitro (Fig. 5g ). To define the intrinsic role of PSAT1 in immunosuppressive macrophages during tumorigenesis, we transplanted Psat1 fl/fl × LysM Cre and wild-type mice with B16-F10 melanoma cells. Delayed growth and reduced tumor weight were observed in Psat1 fl/fl × LysM Cre mice compared with controls (Fig. 5h,i ). A significant reduction in the number of TAMs and in their M2-positive phenotype was detected in Psat1 fl/fl × LysM Cre mice (Fig. 5j,k ). We also found that elimination of PSAT1 in macrophages could enhance anti-tumor T cell responses, resulting in increased percentages of TILs and IFN-γ-expressing CD8 + and CD4 + T cells from B16-F10 tumor-bearing mice (Fig. 5l–n ). Fig. 5: Serine biosynthesis promotes an immunosuppressive phenotype in macrophages. a , Expression of CD206 and CD301 in BMDMs transduced with retrovirus-expressing PHGDH shRNA (middle) or BMDMs transduced with retrovirus-expressing PSAT1 shRNA (bottom) ( n = 3, mean ± s.e.m). Data represent three independent experiments. b , c , Representative histogram (left) and quantitative plot (right) of Relmα + ( b ) or Ki67 + ( c ) mouse peritoneal macrophages after treatment with IL-4c in the presence or absence of NCT-503 ( n = 6 mice per group, mean ± s.e.m). Each data symbol represents one individual.
d , e , Expression of CD206, CD301 ( d ), PD-L2 and Relmα ( e ) in IL-4-stimulated BMDMs from Psat1 fl/fl or Psat1 fl/fl × LysM Cre ( n = 3, mean ± s.e.m). Data represent three independent experiments. f , Expression of CD206 and CD301 by BMDMs from Psat1 fl/fl or Psat1 fl/fl × LysM Cre mice cocultured with B16-F10 melanoma cells (top) and LLC cells (bottom) for 72 h ( n = 2, mean ± s.e.m). Data were collected from two independent experiments. g , Proliferation of CTV-labelled OT-I CD8 T cells activated with anti-CD3 and anti-CD28 and cocultured with PSAT1 wild-type or knockout BMDMs treated with IL-4 (M2) or LPS + IFN-γ (M1) in a ratio of 1:10 for 72 h ( n = 2, mean ± s.e.m). Data were collected from two independent experiments. h , i , Tumor growth ( h ) and tumor weight ( i ) of B16-F10 melanoma from Psat1 fl/fl or Psat1 fl/fl × LysM Cre mice ( n = 5 mice per group, mean ± s.e.m). Data were collected from two independent experiments. j , k , Absolute number of TAMs ( j ) and frequency of CD206 + TAMs ( k ) in PSAT1 wild-type or knockout mice ( n = 4 mice per group, mean ± s.e.m). l – n , Absolute number of TILs ( l ) and the frequency of IFN-γ + CD8 ( m ) or CD4 ( n ) T cells in PSAT1 wild-type or knockout mice ( n = 4 mice per group, mean ± s.e.m). Data represent two independent experiments. All data were analyzed using a two-tailed unpaired Student’s t -test ( b – n ) or a one-way ANOVA with Dunnett’s multiple comparisons test ( a ). Source data Full size image PSAT1 is required for mitochondrial fitness Cellular serine is actively transported into the mitochondria 44 and its availability is known to maintain mitochondrial function and support mitochondrial fatty acid metabolism 45 . We, therefore, asked whether the mitochondrial dysfunction present in the PERK cKO macrophages was due to diminished SBP metabolism. We observed that the level of intracellular serine was markedly reduced in PSAT1 cKO M2 macrophages compared with wild-type controls (Fig.
6a ) and, similar to PERK cKO , inhibition of PSAT1-mediated serine biosynthesis appeared to decrease fatty acid oxidation (FAO; OCR), glycolytic activity (ECAR) and ATP generation in macrophages under IL-4 stimulation (Fig. 6b,c and Extended Data Fig. 4h–j ). Interestingly, this reduction in energy production was not due to dysregulation of mitochondrial mass or membrane potential (Fig. 6d–g ). In addition, we found that loss of PSAT1 had no negative impact on ETC assembly compared with controls (Fig. 6h ). However, calcium flux into the mitochondria was significantly compromised in the PSAT1-null M2 macrophages (Fig. 6i ). Together, these findings indicate that serine biosynthesis plays an important role in mitochondrial fitness by supporting FAO and mitochondrial calcium flux, but has no direct bearing on the mitochondrial respiratory chain assembly. Fig. 6: Serine biosynthesis contributes to mitochondrial fitness independent of respiratory chain assembly. a , Intracellular serine levels in extracts of IL-4-stimulated BMDMs from Psat1 fl/fl or Psat1 fl/fl × LysM Cre ( n = 4, mean ± s.e.m). The serine levels from M0 are indicated by a dotted horizontal line. Data represent two independent experiments. b , Basal OCR (left) and basal ECAR (right) of PSAT1 wild-type or knockout BMDMs stimulated with IL-4 ( n = 3, mean ± s.e.m). Data were collected from three independent experiments. c , ATP production of PSAT1 wild-type or knockout BMDMs stimulated with IL-4 ( n = 4, mean ± s.e.m). Data represent two independent experiments. d , Representative histogram (left) and quantitative plot (right) of MitoTracker Green + staining in BMDMs transduced with retrovirus expressing Luc, PHGDH or PSAT1 shRNA, and stimulated with IL-4 ( n = 3, mean ± s.e.m). Data represent three independent experiments. ns, not significant.
e , Representative histogram (left) and quantitative plot (right) of MitoTracker Orange + staining in BMDMs transduced with retrovirus expressing Luc, PHGDH or PSAT1 shRNA, and stimulated with IL-4 ( n = 3, mean ± s.e.m). Data represent three independent experiments. f , Representative histogram (left) and quantitative plot (right) of MitoTracker Green + staining in PSAT1 wild-type or knockout BMDMs stimulated with IL-4 ( n = 3, mean ± s.e.m). Data represent three independent experiments. g , Representative histogram (left) and quantitative plot (right) of MitoTracker Orange + staining in PSAT1 wild-type or knockout BMDMs stimulated with IL-4 ( n = 3, mean ± s.e.m). Data represent three independent experiments. h , Immunoblot analysis of mitochondrial ETC complexes from PSAT1 wild-type or knockout BMDMs stimulated with IL-4. Data represent two independent experiments. i , Mitochondrial calcium uptake (Rhod-2) of PSAT1 wild-type or knockout BMDMs stimulated with IL-4 ( n = 3, mean ± s.e.m). The arrow indicates stimulation using 10 μM ionomycin. Data were collected from three independent experiments. All data were analyzed using a two-tailed, unpaired Student’s t-test ( a , b , f and g ), a two-tailed, paired Student’s t-test ( i ) or an ordinary one-way ANOVA with Dunnett’s multiple comparisons test ( d and e ). Source data Full size image PERK–PSAT1 signaling facilitates epigenetic regulation In addition to assisting mitochondrial function, SBP plays an important role in amino acid homeostasis by regulating intracellular α-KG levels for the maintenance of epigenetic modifications 5 , 46 . PSAT1 requires glutamine-derived glutamate as a substrate for a transamination reaction, resulting in the production of serine as well as α-KG. In our data, we found that loss of PERK resulted in decreased expression of genes involved in glutamine and glutamate metabolism (Fig. 2b,e ).
In support of this, we quantified the levels of intracellular glutamine and α-KG and found that PERK cKO led to a notable reduction in glutamine consumption and α-KG production in IL-4-stimulated M2 macrophages (Fig. 7a,b ). As expected, the cellular level of α-KG was also markedly reduced in PSAT1-ablated M2 macrophages compared with wild-type controls (Fig. 7c ). These findings suggest that PERK fine-tunes PSAT1 activity to produce α-KG in M2 macrophages. Fig. 7: Dysregulation of SBP suppresses JMJD3-mediated histone demethylation. a , b , Glutamine consumption ( a ) and intracellular α-KG levels ( b ) from PERK wild-type or knockout BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). The levels from M0 are indicated by a dotted horizontal line. Data represent two independent experiments. c , Intracellular α-KG levels from PSAT1 wild-type or knockout BMDMs treated with IL-4 ( n = 4, mean ± s.e.m). The levels from M0 are indicated by a dotted horizontal line. Data represent two independent experiments. d , e , Immunoblot analysis of the histone methylation mark H3K27me3, PSAT1, PERK and histone H3 from PERK ( d ) or PSAT1 ( e ) wild-type and knockout BMDMs treated with IL-4. Data represent three independent experiments. f , H3K27me3 histone modifications of Irf4 , Pparg , Phgdh and Mgl2 from PERK wild-type and knockout BMDMs stimulated with IL-4. Data represent three independent experiments. g , h , RT-qPCR analysis of indicated M2 genes from PERK ( g ) or PSAT1 ( h ) wild-type and knockout BMDMs cultured with IL-4 for 6 h in the presence or absence of dmKG (1 mM) or GSK-J4 (25 μM) ( n = 3, mean ± s.e.m). Data were collected from three independent experiments. All data were analyzed using either a two-tailed, unpaired Student’s t-test ( a – c ) or an ordinary one-way ANOVA with Dunnett’s multiple comparisons test ( g and h ). Source data Full size image α-KG is an essential cofactor for JMJD3 histone demethylation.
Previously, JMJD3–α-KG signaling has been implicated in M2 activation in macrophages 29 . We therefore reasoned that the decrease in immunosuppressive M2 properties caused by PSAT1 and PERK deficiency might be due to reduced histone demethylation as a result of lower α-KG availability. Indeed, we found that histone methylation marks on H3K27 were elevated in PSAT1 cKO and PERK cKO cells (Fig. 7d,e ). This hypermethylation was not due to decreased expression of Jmjd3 mRNA (Extended Data Fig. 5a,b ) or protein (Extended Data Fig. 5c,d ). To understand whether the hypermethylation of H3K27 was specifically occurring on M2-related genes, we performed an unbiased ChIP-seq analysis of PERK-sufficient and -deficient macrophages. Relative to PERK wild-type macrophages, PERK cKO macrophages showed significantly more regions of increased H3K27 methylation under M2 conditions than under M1 conditions (Extended Data Fig. 5e ). As expected, we observed increased H3K27 methylation at the loci of M2 genes, including Irf4 , Pparg and Mgl2 , in PERK-deficient M2 cells (Fig. 7f ). In contrast, the H3K27 methylation state of those M2 gene promoters was unaffected in both naive macrophages (M0) and macrophages stimulated with LPS + IFN-γ (Extended Data Fig. 5f ). Moreover, the distributions of H3K27me3 in M1-related genes were not affected by PERK deficiency in M0, M1 or M2 conditions (Extended Data Fig. 5g ). We then asked whether supplementation of α-KG could rescue the immunosuppressive phenotype in PERK- and PSAT1-deficient M2 macrophages. We found that the expression of M2-associated genes was restored in both PSAT1 cKO and PERK cKO M2 macrophages by supplementation with dimethyl α-KG (dmKG) (Fig. 7g,h ), and this rescue could be reversed by inhibition of JMJD3 using the selective inhibitor GSK-J4.
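The transcript rescue in Fig. 7g,h is read out by RT-qPCR. Relative expression in such experiments is conventionally computed with the 2^(-ΔΔCt) method (this is the standard convention, not a detail stated by the authors); a minimal sketch with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_housekeeping,
                        ct_target_ref, ct_housekeeping_ref):
    """Relative expression by the 2^(-ΔΔCt) method.

    ΔCt normalizes the target gene to a housekeeping gene within each
    sample; ΔΔCt then compares the treated sample with a reference sample.
    """
    delta_ct = ct_target - ct_housekeeping
    delta_ct_ref = ct_target_ref - ct_housekeeping_ref
    return 2.0 ** -(delta_ct - delta_ct_ref)

# Hypothetical Ct values: an M2 gene in dmKG-treated versus untreated
# cells, each normalized to the same housekeeping gene.
fold_change = relative_expression(ct_target=24.0, ct_housekeeping=18.0,
                                  ct_target_ref=26.0, ct_housekeeping_ref=18.0)
print(fold_change)  # ΔΔCt = -2, i.e. 4-fold higher expression
```

Because Ct is a cycle count on a log2 scale, each unit decrease in ΔΔCt corresponds to a doubling of relative transcript abundance.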
Collectively, our data strongly suggest that JMJD3-dependent histone modifications are important for M2 gene expression and are sensitive to the α-KG availability mediated by an unconventional PERK–PSAT1 metabolic pathway. Inhibition of PERK signaling promotes anti-tumor immunity To test our findings in therapeutic models, we next evaluated the anti-tumor effects of GSK2656157 (PERK inhibitor) and NCT-503 (PHGDH inhibitor) (Extended Data Fig. 6a ). Tumor progression and development were delayed in B16-F10 tumor-bearing mice treated with either small-molecule inhibitor (Fig. 8a,b ), corresponding with a profound reduction in the numbers and immunosuppressive activity of intratumoral TAMs (Fig. 8c,d ). These treatments induced greater expansion of IFN-γ-expressing CD8 + and CD4 + T cells (TILs), but this increase was statistically significant only for the GSK2656157-treated mice (Fig. 8e–g ). In addition, neither GSK2656157 nor NCT-503 had a marked impact on other tumor-infiltrating immune cell populations (Extended Data Fig. 6b ), but both were able to significantly extend survival in tumor-bearing mice (Fig. 8h ). We noted that, although NCT-503 could strikingly reduce tumor growth and weight, this treatment had much more variable effects on mouse body weight, suggesting poorer tolerance (Extended Data Fig. 6c ). Given that GSK2656157 appeared to have better outcomes overall, we then tested whether it could synergize with anti-PD-1 immunotherapy to further suppress tumor progression. We observed that GSK2656157 potentiated the anti-tumor efficacy of an anti-PD-1 monoclonal antibody against B16-F10 melanoma (Extended Data Fig. 6d and Fig. 8i ). Overall, our data demonstrate a crucial role for PERK signaling in macrophage suppressive activity and that a PERK inhibitor provides significant anti-tumor efficacy.
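The survival comparisons reported above (Figs. 1q and 8h) use the Mantel–Cox (log-rank) test. A self-contained sketch of the statistic, under the simplifying assumption that every animal reaches the endpoint (no censoring); the survival times below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import chi2

def logrank(times_a, times_b):
    """Two-group Mantel-Cox (log-rank) test; assumes no censoring."""
    a, b = np.asarray(times_a, float), np.asarray(times_b, float)
    o_minus_e = var = 0.0
    for t in np.unique(np.concatenate([a, b])):
        n1, n2 = (a >= t).sum(), (b >= t).sum()   # at risk just before t
        d1, d2 = (a == t).sum(), (b == t).sum()   # events at t
        n, d = n1 + n2, d1 + d2
        if n < 2 or d == 0:
            continue
        o_minus_e += d1 - d * n1 / n              # observed minus expected
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)              # chi-square, 1 d.f.

# Hypothetical survival times (days), n = 8 per group, with the control
# group succumbing earlier than the knockout group.
ctrl = [14, 15, 16, 16, 17, 18, 18, 19]
cko = [20, 21, 22, 23, 24, 25, 26, 27]
stat, p = logrank(ctrl, cko)
```

Production analyses would additionally handle censored animals (those alive at study end), which packages such as lifelines implement in full.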
Moreover, PERK inhibition in combination with immune checkpoint blockade can reprogram the TME toward a more immunostimulatory environment, leading to better cancer treatment. Fig. 8: Therapeutic PERK inhibition suppresses tumorigenesis. a , b , Tumor growth ( a ) or weight ( b ) from mice bearing B16-F10 melanoma treated with either GSK2656157 or NCT-503. Drug administration is indicated by arrows ( n = 9 in vehicle, n = 10 in GSK and NCT treatment groups; mean ± s.e.m.). c , Absolute number of TAMs ( n = 7 in vehicle, n = 8 in GSK and NCT treatment groups; mean ± s.e.m.). d , Frequency of CD206 + TAMs ( n = 8, mean ± s.e.m.). e – g , Absolute number of TILs ( e ), frequency of IFN-γ + CD8 ( f ) and IFN-γ + CD4 ( g ) T cells ( n = 7 in vehicle, n = 10 in GSK and NCT treatment groups; mean ± s.e.m.). h , Survival analysis of mice treated with vehicle, GSK2656157 or NCT-503 ( n = 10 mice per group). i , Tumor growth from mice bearing B16-F10 melanoma and treated with IgG2a control, αPD-1, GSK2656157 or αPD-1 + GSK2656157 ( n = 7 mice per group). All data were collected from two independent experiments. Data were analyzed by an ordinary one-way ANOVA with Dunnett’s multiple comparisons test ( a – g ) or the Mantel–Cox test for survival ( h ). Source data Full size image Discussion Classic T H 2 immune responses, such as those elicited by helminth infections, are required not only to promote worm clearance but also to resolve inflammatory tissue damage 47 . Similarly, tumor sites, which are considered a form of chronic stress, also induce sustained inflammation 48 . An emerging body of research has suggested that macrophages, mediated by cellular metabolism and/or type 2 cytokines, play critical roles in orchestrating tissue inflammation during pathogenic insults 49 , 50 . Yet, the molecular drivers that contribute to the metabolic and energetic rewiring of macrophage effector function remain elusive.
ER stress responses have recently emerged as a crucial regulatory process underlying multiple essential cellular functions in addition to proteostasis. In the present study, we found that the ER protein PERK acts as a metabolic nexus point that connects extracellular cues to intracellular reprogramming. We found that both IL-4 and the TME are initiation factors that stimulate the immunosuppressive phenotype, and these environmental cues are coordinated with and ‘translated’ by PERK signaling to control the processes necessary for promoting the immunosuppressive activity of M2 macrophages. The metabolic pathways mediated by PERK activity could effectively result in suppressed T cell effector functions, and inhibition of PERK was able to restore the cell expansion and IFN-γ production of T cells, evoking better anti-tumor immunity. It has become evident that metabolism dictates immunity 6 . The widespread metabolic derailment found in PERK cKO macrophages is evidence that PERK acts as a vital metabolic hub to effectively convey the cellular signals and/or demands for effector function. Our results indicated that PERK signaling was required for mitochondrial bioenergetics, upregulating mitochondrial mass, cristae ETC activity and calcium exchange. We found that the mitochondrial respiration and FAO critical for meeting the energy demands of M2 macrophages were mediated by PERK signaling. Moreover, activated PERK signaling induced the downstream transcription factor ATF-4 to regulate PSAT1-mediated serine biosynthesis, which in turn generated serine and supported mitochondrial FAO and calcium signaling. We also found that the activity of PSAT1 produced α-KG, which could then support JMJD3 in epigenetic histone modifications to promote immunosuppressive gene expression in macrophages. The necessity of these functions underlies the metabolic pathways required for macrophage M2 activation under harsh pathological insults. 
Together, these data uncover an unexpected molecular interconnection between PERK and other important organelles within the macrophage, and the relationship between PERK and PSAT1-mediated serine biosynthesis provides potential strategies to reprogram or edit M2 macrophages that could benefit the treatment of cancers or other inflammatory diseases. Previous studies have illustrated that deletion of transcription factor ATF-4 affects mitochondrial respiration, glycolysis and amino acid metabolism, leading to impaired CD4 + T cell proliferation and effector function 51 . In addition, defective nonessential amino acid metabolism, such as serine synthesis, attenuates the proliferative capacity of T lymphocytes to modulate adaptive immunity 36 . We have shown a previously undescribed role for activated PERK and downstream ATF-4 signaling in inducing the expression of enzymes (PHGDH and PSAT1) involved in diverting intermediary metabolites from glycolysis to de novo serine biosynthesis. Perturbations of PHGDH and PSAT1 or inhibition of PERK all effectively lowered the cellular pool of serine. In addition, genetic ablation of PSAT1 significantly decreased mitochondrial FAO and mitochondrial calcium flux in M2 macrophages. Notably, the expansion of suppressive M2 macrophages present in mice after IL-4c administration or challenge with tumor did not occur when de novo serine biosynthesis was inhibited. This finding may reflect reduced proliferation and function of these cells due to diminished essential macromolecule biosynthesis and mitochondrial FAO supported by the serine metabolic program 38 , 45 . Glutamine-derived glutamate can contribute to the production of glycine via the serine synthesis pathway and through the transamination of 3-phosphohydroxypyruvate to phosphoserine by PSAT1. Reports have indicated that PSAT1 governs the intracellular levels of serine/glycine and α-KG and is a metabolic vulnerability in cancer cells 52 , 53 , 54 . 
Our genetic PERK- and PSAT1-deficient models provide a unique avenue to explore these metabolic dynamics in macrophages. We found that M2 activation was associated with increased levels of glutamine/glutamate metabolism, and glutamine utilization was markedly diminished by PERK ablation. Importantly, inhibition of PSAT1 or PERK significantly reduced the cellular concentrations of α-KG in M2 macrophages. α-KG is known not only to support the activity of the citrate cycle 55 , but also to serve as an essential cofactor for the histone demethylase JMJD3. JMJD3 has been previously found to promote immune cell functionality 31 , 56 ; thus, loss of α-KG potentially prevented the activity of JMJD3, resulting in hypermethylation of histone H3K27. It is also likely that α-KG can mediate the activity of other epigenetic regulators such as ten-eleven translocation (TET) enzymes to control immune cell fate 5 . It has been reported that the activity of TET2 is important to repress proinflammatory activity in macrophages 57 and loss of TET2 fails to promote/sustain the immunosuppressive function of macrophages in the TME 58 . Nevertheless, our results suggest that the cellular production of α-KG is necessary for the metabolic reprogramming and epigenetic modification of M2 macrophages and is, in part, mediated by PERK–PSAT1 signaling. The relationship between PERK and PSAT1 suggests that this would make an effective therapeutic target against melanoma. We found that treatment of melanoma with GSK2656157 or NCT-503 markedly delayed tumor growth and suppressed the immunosuppressive activation of TAMs. However, only GSK2656157 could further enhance T cell anti-tumor immunity, as demonstrated by increased IFN-γ + TILs. GSK2656157 was also able to boost the efficacy of anti-PD-1 immunoblockade and confer protection against melanoma. 
Collectively, our study identifies a new role for PERK in the regulation of metabolic circuits and epigenetic programs that support immunosuppressive M2 activation and function in macrophages. Our findings suggest that modulation of PERK signaling may offer therapeutic potential in diseases in which immunosuppressive M2 macrophages have deleterious effects. Methods Animals and in vivo experiments C57BL/6J, LysM Cre and Gcn2 −/− mice were purchased from Jackson Laboratory. Psat1 tm1a(KOMP)Wtsi / Mmucd ( Psat1 fl/fl ) mice were purchased from the Mutant Mouse Resource and Research Center (MMRRC). Eif2ak3 fl/fl and OT-I mice were provided by S. Adoro and A. Huang, respectively. All mice were bred and maintained in specific pathogen-free conditions under protocols approved by institutional animal care at Case Western Reserve University School of Medicine, and both male and female mice were used at age 8–12 weeks. For IL-4c experiments, age- and sex-matched C57BL/6J or PERK wild-type or PERK cKO mice were injected intraperitoneally with or without 30 mg kg −1 of NCT-503 (Cayman Chemical) and with 300 μl of 3% thioglycolate (Sigma-Aldrich) immediately before administration of IL-4 complexed to the monoclonal antibody anti-IL-4 (IL-4c; containing 5 μg of IL-4 (PeproTech) and 25 μg of anti-IL-4, clone 11B11, BioXcell) 59 . For all tumor experiments, 5 × 10 5 B16-F10 melanoma cells were subcutaneously transplanted into age- and sex-matched mice as described. Tumors were measured every 2 d using calipers starting at day 7 post-tumor injection until day 15, 16 or 18. Tumor volume was calculated using the formula ((length × width × (length × width) 0.5 ) × π/6). For the therapeutic drug treatment: mice were given an intraperitoneal injection of 100 μl of dimethylsulfoxide (vehicle), 30 mg kg −1 of GSK2656157 (Selleck Chem) or 30 mg kg −1 of NCT-503 (Selleck Chem) on days 8, 10 and 12 post-tumor injection. Tumors were measured every 2 d starting at day 8 until day 16.
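The tumor volume formula above can be written as a small helper; this is a minimal Python sketch of the published formula (the function name and example caliper readings are ours, not from the study):

```python
import math

def tumor_volume(length: float, width: float) -> float:
    """Tumor volume per the paper's formula:
    ((length * width * (length * width)**0.5) * pi / 6),
    i.e. (length * width)^1.5 * pi/6, in mm^3 for caliper readings in mm."""
    lw = length * width
    return lw * math.sqrt(lw) * math.pi / 6

# Hypothetical caliper readings: a 10 mm x 8 mm tumor
volume = tumor_volume(10.0, 8.0)
print(f"{volume:.1f} mm^3")
```

Because length and width enter symmetrically, this formula treats the tumor as an ellipsoid (volume π/6 × a × b × c) whose third, unmeasured axis is approximated by the geometric mean of the two caliper measurements.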
Maximal tumor size cut-off was determined to be 2 cm for non-endpoint studies. For the anti-PD-1 tumor experiment: mice were given control 200 μg per mouse of immunoglobulin (Ig)G2a (Lienco Technologies), 200 μg per mouse anti-PD-1 (Lienco Technologies), 30 mg kg −1 of GSK2656157 or combination therapy of anti-PD-1 + GSK2656157 on days 8, 10 and 12 post-tumor injection. Tumors were measured every 2 d starting on day 8 until day 18. Cell lines B16-F10 (CRL-6475) melanoma and LL/2 (LLC; CRL-1642) LLCs were purchased from American Type Culture Collection and maintained using complete medium (RPMI 1640 containing 2 mM l -glutamine, 100 U ml −1 of penicillin–streptomycin and 10% fetal bovine serum (FBS)). Cell lines were passaged for a maximum of 20 passages. Tumor digestion and cell isolation Tumors were minced in RPMI with 2% FBS, DNase I (1 μg ml −1 , Sigma-Aldrich) and collagenase (1 mg ml −1 , Sigma-Aldrich), followed by digestion at 37 °C for 1 h and filtration through a 45-μm cell strainer. Filtered cells were incubated with ACK Lysing Buffer (Thermo Fisher Scientific) to lyse red blood cells and then washed with FACS buffer (phosphate-buffered saline (PBS) + 1% FBS + 0.1% sodium azide). Leukocyte enrichment was performed by density gradient centrifugation (800 g , 30 min) at 25 °C with 40% and 80% Percoll (GE Healthcare). Preparation of macrophages from bone marrow and macrophage activation Bone marrow cells were differentiated in the presence of recombinant mouse macrophage colony-stimulating factor (M-CSF; 20 ng ml −1 ; PeproTech) in complete medium (RPMI 1640 containing 10 mM glucose, 2 mM l -glutamine, 100 U ml −1 of penicillin–streptomycin and 10% FBS) for 7 d. Fresh complete medium with M-CSF was supplemented on day 6 before use. 
Day 7 macrophages were washed and variously stimulated with IL-4 (20 ng ml −1 ; PeproTech) or LPS (20 ng ml −1 ; Sigma-Aldrich) plus IFN-γ (50 ng ml −1 ; PeproTech) in the absence or presence of 5 μM GSK2656157 (Cayman Chemical), 30 μM CBR-5884 (Cayman Chemical) or 5 μM ISRIB (Cayman Chemical). Macrophages were harvested after 24 h and analyzed by flow cytometry for the expression of M2 or M1 activation. For tumor coculture experiments, macrophages and tumor cells were collected on day 7. Tumor cells were plated at a density of 5–6 × 10 5 in a 12-well plate for at least 1 h before the addition of macrophages. Once tumor cells were attached, 2 × 10 5 macrophages were added to each well for 72 h. For some experiments, macrophages were cultured with IL-4 in the presence or absence of 1 mM dmKG (Sigma-Aldrich) or 25 μM GSK-J4 (Selleck Chem) for 6 h and cells were harvested for the further experiments as indicated. In vitro T cell proliferation assay Mouse splenic CD8 + T cells from OT-I mice were isolated using EasySep Mouse CD8α Positive Selection Kit (STEMCELL Technologies). Isolated OT-I T cells were labeled with CellTrace Violet (CTV) Cell proliferation Kit (Thermo Fisher Scientific) according to the manufacturer’s instructions. The CTV-labeled CD8 + T cells (0.5 × 10 5 per well) were cultured in plate-bound 1 µg ml −1 of anti-CD3 (clone 145-2C11; Thermo Fisher Scientific) and 5 µg ml −1 of anti-CD28 (Clone 37.51; Thermo Fisher Scientific) with complete RPMI medium containing 55 µM β-mercaptoethanol and 10 ng ml −1 of IL-2. IL-4- or LPS + IFN-γ-stimulated macrophages were then added into T cell cultures at a ratio of 1:10 (macrophages:T cells) for 72 h. Cells were then harvested and CTV-positive signal in the CD8 + gate was measured by flow cytometry. 
Flow cytometry For surface staining, cells were kept at 4 °C and blocked with 5 μg ml −1 of anti-CD16/32 (clone 93; eBiosciences) before staining to CD45 (clone 30-F11, BioLegend), Gr1 (clone 8C5, BioLegend), CD3 (clone 17A2, eBiosciences), CD3 (clone 145-2C11, BioLegend), CD4 (clone GK1.5, BioLegend), CD8 (clone 53-6.7, BioLegend), CD19 (clone 6D5, BioLegend), F4/80 (clone BM8, BioLegend and eBiosciences), CD11b (clone M1/70, BioLegend and eBiosciences), CD206 (clone C068C2; BioLegend), CD301 (clone ER-MP23, BioRad) and PD-L2 (clone TY25, eBiosciences). For intracellular staining of Relmα (PeproTech), Ki67 (eBiosciences), NOS2 (clone C-11, Santa Cruz Biotechnology), TNF (clone MP6-XT22, BioLegend) or p-PERK(T980) (Bioss), cells were fixed with BD Cytofix/Cytoperm buffer (BD Biosciences) and stained with appropriate primary antibody followed by incubation with appropriate fluorochrome-conjugated anti-rabbit IgG (Jackson Immunoresearch) or anti-mouse IgG (clone Poly4053, BioLegend). For IFN-γ staining, cells were cultured in complete RPMI medium containing phorbol 12-myristate 13-acetate (50 ng ml −1 ; Sigma-Aldrich), ionomycin (750 ng ml −1 ; Sigma-Aldrich) and GolgiStop (1,000×; BD Biosciences) for 4 h at 37 °C. After surface staining, the cells were fixed using BD Cytofix/Cytoperm buffer and stained with IFN-γ antibody (clone XMG1.2; BioLegend). Lipid uptake was stained with 1 μM BODIPY FL C 16 (Invitrogen) and measured by flow cytometry. Intracellular neutral lipids were stained with 500 ng ml −1 of BODIPY 493/503 (Invitrogen) and measured by flow cytometry. Mitochondrial mass and membrane potential were stained with 50 nM MitoTracker Green (Invitrogen) and 50 nM MitoTracker Orange (Invitrogen), respectively, and measured by flow cytometry. Retroviral transduction Retroviral transduction of macrophages was accomplished using protocols that we have used previously 24 . 
Sequences for luciferase, Phgdh , Psat1 and Atf4 shRNAs were obtained from Open Biosystems and cloned into the MSCV-LTRmir30-PI8 retroviral vector, encoding human CD8 (huCD8) as a reporter. For overexpression, the Atf4 sequence was cloned into the MSCV–IRES retroviral vector, encoding huCD8 as a reporter. Day 3 bone marrow-derived macrophage cultures were spin-infected with retrovirus. At day 7 of culture, macrophages were harvested and transduced cells were identified by huCD8 expression. Cell fractionation and immunoblot analysis Cells were lysed in radioimmunoprecipitation assay buffer (Thermo Fisher Scientific) with protease and phosphatase inhibitors (Cell Signaling Technologies). Anti-PERK (C33E10), anti-PHB1, anti-XBP1s (E9V3E), anti-trimethyl-histone H3 Lys27 (C36B11) and anti-histone H3 (D1H2) were all purchased from Cell Signaling Technologies. Polyclonal antibody p-PERK(T980) was purchased from Bioss (catalog no. BS-3330R). Anti-JMJD3 was purchased from Abcepta. Anti-PHGDH and anti-PSAT1 antibodies were purchased from Protein Tech. Total OXPHOS rodent antibody cocktail was purchased from Abcam. Anti-ATF-4 (C-20) was purchased from Santa Cruz Biotechnology. Anti-β-actin was purchased from Sigma-Aldrich. Primary antibody staining was followed by peroxidase-linked secondary antibody and ECL immunoblot detection (BioRad). Immunoblots for p-PERK(T980) and PERK were performed using a phos-tag-based acrylamide gel (FUJIFILM Wako Chemicals). RNA extraction and RT-qPCR RNA was extracted using TRIzol reagent (Life Technologies). Complementary DNA was generated using the PrimeScript RT Reagent Kit with gDNA Eraser (Takara Bio) according to the manufacturer’s instructions. The TaqMan or SYBR green method was used for RT-qPCR with primers from Applied Biosystems or IDT. The assay was performed on a BioRad CFX96 machine. Relative expression was normalized to β-actin in each sample.
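Relative expression normalized to β-actin, as described above, is conventionally computed with the 2^−ΔΔCt (Livak) method; a minimal Python sketch (the function name and Ct values are illustrative assumptions, not data from the study):

```python
def relative_expression(ct_target: float, ct_actin: float,
                        ref_ct_target: float, ref_ct_actin: float) -> float:
    """2^-ddCt: normalize the target gene's Ct to beta-actin within each
    sample, then express it relative to a reference (e.g. unstimulated)
    sample."""
    delta_ct = ct_target - ct_actin                # dCt, treated sample
    delta_ct_ref = ref_ct_target - ref_ct_actin    # dCt, reference sample
    return 2.0 ** -(delta_ct - delta_ct_ref)

# Illustrative Ct values for an M2 gene in IL-4-treated vs. untreated cells
fold_change = relative_expression(22.0, 18.0, 26.0, 18.0)
print(fold_change)  # 16.0, i.e. a 16-fold induction
```

Note that each cycle-threshold (Ct) difference corresponds to a two-fold change in template, which is why the normalized difference appears in the exponent of 2.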
The following primer sequences were used: for SYBR green, β-actin (forward, 5′-TCCATCATGAAGTGTGACGT-3′; reverse, 5′-TACTCCTGCTTGCTGATCCAC-3′), Atf4 (forward, 5′-GCAAGGAGGATGCCTTTTC-3′; reverse, 5′-GTTTCCAGGTCATCCATTCG-3′), Phgdh (forward, 5′-TGGCCTCGGCAGAATTGGAAG-3′; reverse, 5′-TGTCATTCAGCAAGCCTGTGGT-3′) and Psat1 (forward, 5′-GATGAACATCCCATTTCGCATTGG-3′; reverse, 5′-GCGTTATACAGAGAGGCACGAATG-3′); for TaqMan, β-actin (mm00607939_s1), Arg1 (mm00475988_m1), Cd36 (mm00432399_m1), Chil3 (mm00657889_mH), Lipa (mm00498820_m1), Mrc1 (mm00485148_m1), Pparg (mm01184322_m1), Ppargc1b (mm00504723_m1) and Irf4 (mm00516431_m1). Metabolic analysis For real-time analysis of ECARs and OCRs, macrophages were analyzed using an XF e 96 Extracellular Flux Analyzer (Agilent). Three or more consecutive measurements were taken under basal conditions, followed by the sequential addition of 1 μM oligomycin, which inhibits the mitochondrial ATP synthase; 3 μM fluorocarbonyl cyanide phenylhydrazone (FCCP), a protonophore that uncouples ATP synthesis from oxygen consumption by the ETC; and 100 nM rotenone plus 1 μM antimycin A, which inhibit the ETC (all drugs for this assay were purchased from Sigma-Aldrich). In this assay, basal oxygen consumption can be established by measuring the OCR in the absence of drugs. ATP production was measured using the ATP Determination Kit (Invitrogen). Glutamine consumption was measured using the Glutamine Assay Kit (Abnova). Serine and α-KG levels were measured using the dl -Serine Assay Kit or the Alpha Ketoglutarate Assay Kit following the manufacturer’s instructions (Abcam). Metabolite levels of cholesterol, 3-phosphoglycerate, serine and glycine in stimulated macrophages were measured using Metabolon. Calcium flux analysis Cells were washed with calcium flux buffer (1× Hanks’ balanced salt solution, 0.1% bovine serum albumin, 25 mM Hepes, pH 7.4 and 2.5 mM probenecid).
Cells were incubated with 1 μM Rhod-2 (Thermo Fisher Scientific) in calcium flux buffer in the dark for 30 min at 37 °C. After incubation, cells were washed with calcium flux buffer and incubated for an additional 5 min in calcium flux buffer without Rhod-2 at 37 °C. Rhod-2 was measured using a BioTek Gen5 plate reader. After 5 min of baseline reading, Rhod-2-loaded cells were stimulated with 10 μM ionomycin. Calcium flux measurements were taken every 30 s until a plateau was reached (approximately 20 min). RNA-seq and bioinformatics analysis The mRNA was extracted from lysates of cells that had been stimulated for 24 h. Random primers and reverse transcriptase of TruSeq Stranded kit were used to synthesize cDNA and cDNA library sequencing was performed using Illumina HiSeq. The differential expression test, GSEA and visualization were performed using R (v.5.2.0), ggplot2 (v3.2.1), edgeR (v.3.32.1), pheatmap (v.1.0.12), clusterProfiler (v.3.10) and MSigDB (v.7.0). Targeted metabolomics analysis Sample preparation Cell culture was extracted by the addition of MeOH:H 2 O (4:1) (1 ml) 60 , 61 . This solution containing scraped lysed cells was further homogenized in the Cryolys Precellys 24-sample Homogenizer (2× 20 s at 10,000 r.p.m., Bertin Technologies) with ceramic beads. Homogenized extracts were centrifuged for 15 min at 4,000 g at 4 °C and the resulting supernatant was collected and evaporated to dryness in a vacuum concentrator (LabConco). Dried sample extracts were resuspended in MeOH:H 2 O (4:1, v:v) before liquid chromatography (LC)–tandem MS (MS/MS) analysis according to the total protein content, using 75 µl as the minimal reconstitution volume corresponding to the sample with the lowest protein content. Liquid chromatography LC–MS/MS. 
Cell lysates were analyzed by hydrophilic interaction liquid chromatography coupled to tandem mass spectrometry (HILIC–MS/MS) in both positive and negative ionization modes using a 6495 triple quadrupole system (QqQ) interfaced with a 1290 UHPLC system (Agilent Technologies) 62 . In positive mode, the chromatographic separation was carried out on an Acquity BEH Amide column (1.7 μm, 100 mm × 2.1 mm inner diameter). The mobile phase was composed of A (20 mM ammonium formate and 0.1% formic acid in water) and B (0.1% formic acid in acetonitrile (ACN)). A linear gradient from 95% B (0–1.5 min) down to 45% B (1.5−17 min) was applied and these conditions were held for 2 min, followed by 5 min of column re-equilibration at the initial gradient conditions. The flow rate was 400 μl min −1 , column temperature 25 °C and sample injection volume 2 µl. In negative mode, a SeQuant ZIC-pHILIC (100 mm, 2.1-mm inner diameter and 5-μm particle size, Merck) column was used. The mobile phase was composed of A (20 mM ammonium acetate and 20 mM NH 4 OH in water at pH 9.7) and B (100% ACN). A linear gradient from 90% B (0–1.5 min) to 50% B (8–11 min) down to 45% B (12–15 min) was applied, followed by a 9-min post-run for column re-equilibration. The flow rate was 300 μl min −1 , column temperature 30 °C and sample injection volume 2 µl. For both analyses, the electrospray ionization source conditions were set as follows: dry gas temperature 290 °C, nebulizer 35 p.s.i. (241.317 kPa) and flow 14 l min −1 , sheath gas temperature 350 °C and flow 12 l min −1 , nozzle voltage 0 V and capillary voltage ±2,000 V. Data were acquired in dynamic multiple reaction monitoring (MRM) mode with a total cycle time of 600 ms.
Pooled quality control (QC) samples (representative of the entire sample set) were analyzed periodically throughout the overall analytical run to assess the quality of the data, correct signal intensity drift and remove peaks with poor reproducibility (coefficient of variation (CV) > 30%) 63 . In addition, a series of diluted QCs were prepared by dilution with methanol: 100% QC, 50% QC, 25% QC, 12.5% QC and 6.25% QC, and analyzed at the beginning and end of the sample batch. This QC dilution series served as a linearity filter to remove features that did not respond linearly or whose correlation with the dilution factor was <0.65 (ref. 64 ). Data processing and statistical analysis Raw LC–MS/MS data were processed using the Agilent Quantitative Analysis software (v.B.07.00, MassHunter, Agilent Technologies). Relative quantification of metabolites was based on extracted ion chromatogram areas for the monitored MRM transitions. Data quality assessment was done in R. Signal intensity drift correction was done using the LOWESS/spline algorithm 65 followed by filtering of ‘not-well-behaving’ peaks (CV (QC peaks) >30% and R 2 (QC dilution curve) <0.75). TEM For TEM analysis, cells were seeded onto 6-well plates with 12-mm diameter inserts (Corning Snapwell inserts) at a density of 1–2 × 10 5 cells per well. Cells were stimulated as described previously. After 24 h, the membrane filter with its attached cells was immersed in fixative. The initial fixative was 2.5% glutaraldehyde in cacodylate buffer, pH 7.3. The specimen was postfixed in ferrocyanide-reduced 1% osmium tetroxide. After a soak in acidified uranyl acetate, the specimen was dehydrated in ethanol, passed through propylene oxide and embedded in Embed-812 (Electron Microscopy Sciences). Thin sections (70 nm) were cut on an RMC MT6000-XL ultramicrotome. Sections were cut in a horizontal plane parallel to that of the membrane to provide panoramic views of the cells.
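The feature-filtering rules in the data-processing paragraph above (keep a metabolite peak only if it is reproducible across pooled QC injections and responds linearly across the QC dilution series) can be sketched in Python; the thresholds are the paper's, while the function names and example values are our own illustration:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch
    to avoid external dependencies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def keep_feature(qc_areas, dilution_factors, dilution_areas,
                 cv_max=0.30, r_min=0.65):
    """Retain a feature only if pooled-QC replicates have CV <= 30% and
    its signal correlates with the dilution factor (r >= 0.65)."""
    mean = sum(qc_areas) / len(qc_areas)
    var = sum((a - mean) ** 2 for a in qc_areas) / (len(qc_areas) - 1)
    cv = var ** 0.5 / mean
    r = pearson_r(dilution_factors, dilution_areas)
    return cv <= cv_max and r >= r_min

# A well-behaved feature: tight QC replicates, linear dilution response
print(keep_feature([100, 102, 98, 101],
                   [1.0, 0.5, 0.25, 0.125],
                   [100.0, 50.0, 25.0, 12.5]))  # True
```

A feature with noisy QC replicates (high CV) or a flat, non-linear dilution response would fail one of the two gates and be dropped from the data matrix.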
These were mounted on Gilder square 300-mesh nickel grids (Electron Microscopy Sciences) and then sequentially stained with acidified methanolic uranyl acetate and stable lead-staining solution. These were coated on a Denton DV-401 carbon coater (Denton Vacuum LLC), and examined in an FEI Tecnai Spirit (T12) with a Gatan US4000 4kx4k CCD. ChIP-seq Five million cells were fixed with 1% formaldehyde in medium at 1 × 10 6 cells ml −1 for 10 min at room temperature with constant agitation. Fixation was stopped and quenched with 125 mM glycine for 5 min on ice. After one wash with PBS, cell pellets were snap-frozen with liquid nitrogen and stored at −80 °C until nuclei preparation. Nuclei were isolated using lysis buffer (50 mM Hepes, pH 7.5, 140 mM NaCl, 1 mM EDTA, 10% glycerol, 0.5% NP40 and 0.25% Triton X-100) and were washed once with wash buffer (10 mM Tris-HCl, pH 8.0, 200 mM NaCl, 1 mM EDTA and 0.5 mM (ethylenebis(oxonitrilo))tetra-acetate). The pellets were resuspended in shearing buffer (10 mM Tris-HCl, pH 8.0, 1 mM EDTA and 0.1% sodium dodecylsulfate (SDS)) and the chromatin was sheared using BioruptorPico (Diagenode) for 14 cycles (30 s on, 30 s off) at 4 °C. Then, 2.5 µg of chromatin was diluted to a final of 300 µl of RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1 mM EDTA, 1% NP40, 0.1% SDS and 0.5% sodium deoxycholate) and incubated with 2.5 µg of rabbit monoclonal anti-H3K27me3 antibody (Cell Signaling Technologies, clone C36B11, lot 19) overnight at 4 °C with constant rotation. For immunoprecipitation, the samples were incubated with 10 µl of precleared Protein A Magnetic Beads (Thermo Fisher Scientific) for 2 h at 4 °C. The beads were washed twice with RIPA buffer, once with high-salt buffer (50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 1 mM EDTA, 1% NP40 and 0.1% SDS), once with LiCl buffer (50 mM Tris-HCl, pH 8.0, 250 mM LiCl, 1 mM EDTA, 1% NP40 and 1% sodium deoxycholate) and finally once with TE buffer (10 mM Tris-HCl, pH 8.0 and 1 mM EDTA). 
All washes were incubated for 5 min at 4 °C with constant rotation. After the last wash, beads were resuspended with 200 μl of elution buffer (100 mM NaHCO 3 and 1% SDS) with 0.5 mg ml −1 of RNase A (QIAGEN), and incubated at 65 °C on a thermoshaker for 20 min at 1,200 r.p.m. For decrosslinking, proteinase K (0.5 mg ml −1 ; Ambion) and NaCl (200 mM) were added into the eluted samples and incubated in a thermocycler at 65 °C overnight. The samples were purified using a Zymo PCR purification kit. Sequencing libraries were prepared using NEB Ultra II kits according to the standard protocol (New England Biolabs). Samples were sequenced on an Illumina NovaSeq 6000 at 50-bp paired-end with an SP flow cell (Nationwide Children's Hospital, Columbus, OH, USA). Bioinformatics analysis of ChIP-seq Paired-end reads were first analyzed using FastQC and then mapped to the mm10 mouse reference genome GRCm38 (December 2011) using Bowtie 2 (v.2.4.2). Samtools was used to remove unaligned and duplicated reads. Peaks were called using HOMER and reads overlapping the mm10 blacklisted regions (ENCODE 2016) were removed. Bigwig files were generated using Deeptools (v.3.5.1) with the command bamCoverage (--normalizeUsing RPKM). The count matrix was generated using DiffBind (v.2.0.2) dba.count with normalization DBA_SCORE_TMM_MINUS_EFFECTIVE. Differential H3K27me3-enriched regions between genotypes were determined using edgeR in DiffBind with a cut-off of P ≤ 0.01. Heatmaps were generated using Deeptools. Data analysis and statistics Data were analyzed using GraphPad Prism (v.9). Comparisons for three or more groups were calculated using one-way analysis of variance (ANOVA) and, where indicated, unpaired or paired, two-tailed Student’s t -tests. Differences were considered significant when P values were <0.05. Pilot in vivo studies were used for estimation of the sample size required to ensure adequate power.
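The group comparisons described above rest on a one-way ANOVA, whose F statistic can be computed directly. A minimal Python sketch with synthetic measurements (the group values are invented for illustration; the study's actual analysis used GraphPad Prism):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA:
    between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic tumor-weight-like values for three treatment groups
vehicle = [5.1, 4.8, 5.3, 5.0, 4.9]
gsk = [3.2, 3.0, 3.5, 3.1, 3.3]
nct = [3.8, 4.0, 3.6, 3.9, 4.1]

f_stat = one_way_anova_f([vehicle, gsk, nct])
# With df = (2, 12), the 5% critical value is ~3.89, so an F statistic
# far above 3.89 corresponds to P < 0.05.
print(round(f_stat, 1))
```

A post hoc test such as Dunnett's (used in the paper to compare each treatment against vehicle) would then follow a significant omnibus F; that step is not sketched here.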
No statistical methods were used to predetermine sample size, but our sample sizes are similar to those reported in previous publications 66 . Data distribution was assumed to be normal but this was not formally tested. Age- and sex-matched animals were randomly assigned to experimental conditions. Data collection and analysis were not performed blind to the conditions of the experiments. No data exclusion was performed. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability RNA-seq and ChIP-seq results are available in the Gene Expression Omnibus database under accession nos. GSE165836 and GSE183287 , respectively. Other data are available on request from the corresponding author. Source data are provided with this paper.
Cancer researchers at Case Western Reserve University School of Medicine say they have successfully suppressed the growth of some solid tumors in research models by manipulating immune cells known as macrophages. The researchers say this discovery is significant because many solid tumor cancers, such as lung cancer, are difficult to treat. According to the National Cancer Institute, breast, lung, prostate and colorectal cancers—all of which are solid tumor cancers—account for almost half of all new cancer cases in the United States. In this new research, the scientists discovered that altering the macrophage metabolism—and, in doing so, influencing their relationship with T cells—suppressed the tumor's growth. The result was a significant reduction in overall tumor size in some mouse models. "The race to find a cure for cancer never stops," said Stanley Huang, an assistant professor of immunology in the Department of Pathology at the School of Medicine, who led the research. "Our research creates a pathway to a [potential] new form of cancer treatment for those with solid tumor cancers." The study appeared recently in the journal Nature Immunology. T cells and macrophages Generally, the body's immune response to disease involves mobilizing white blood cells that attack invaders like germs and bacteria. Macrophages are specialized white blood cells that consume invading cells to destroy pathogens. They are considered the "frontline soldiers" of the body's immune system and can activate T cells, which are another type of white blood cell. Yet, despite their typically protective role, macrophages can be co-opted by tumor cells to encourage tumor growth. Targeting macrophages and PERK protein As tumors grow and macrophages interact with the tumor cells, they create a response protein, which the study linked to tumor growth. 
Huang said the team believed it was possible to target macrophages and that particular protein—known to scientists by its shorthand, PERK ("protein kinase R" (PKR)-like endoplasmic reticulum kinase)—to block tumor growth. "Knocking out PERK suppresses downstream metabolic signaling in tumor macrophages, resulting in more T cells to fight the cancer cells," said Huang. Findings and future steps The study's findings suggest that the PERK protein is involved in several key pathways of metabolism in macrophages—and when the gene is removed, macrophages can no longer promote tumor growth, meaning tumors become smaller. Follow-up experiments further revealed that combining a PERK inhibitor drug with an immune checkpoint antibody called anti-PD-1 could significantly reduce tumor growth. Next, the researchers hope to identify a clinical drug that will act as an inhibitor for the PERK protein. "There are several strategies to enhance anti-tumor immunity like targeting or editing cell metabolism," Huang said. "We can target genes and their pathways to enhance immune function and work toward future therapeutic treatment options."
10.1038/s41590-022-01145-x
Physics
First-of-its-kind experimental evidence defies conventional theories about how plasmas emit or absorb radiation
S. X. Hu et al, Probing atomic physics at ultrahigh pressure using laser-driven implosions, Nature Communications (2022). DOI: 10.1038/s41467-022-34618-6 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-34618-6
https://phys.org/news/2022-11-first-of-its-kind-experimental-evidence-defies-conventional.html
Abstract Spectroscopic measurements of dense plasmas at billions of atmospheres provide tests to our fundamental understanding of how matter behaves at extreme conditions. Developing reliable atomic physics models at these conditions, benchmarked by experimental data, is crucial to an improved understanding of radiation transport in both stars and inertial fusion targets. However, detailed spectroscopic measurements at these conditions are rare, and traditional collisional-radiative equilibrium models, based on isolated-atom calculations and ad hoc continuum lowering models, have proved questionable at and beyond solid density. Here we report time-integrated and time-resolved x-ray spectroscopy measurements at several billion atmospheres using laser-driven implosions of Cu-doped targets. We use the imploding shell and its hot core at stagnation to probe the spectral changes of the Cu-doped witness layer. These measurements indicate the necessity and viability of modeling dense plasmas with self-consistent methods like density-functional theory, which impact the accuracy of radiation transport simulations used to describe stellar evolution and the design of inertial fusion targets. Introduction The physics of warm and hot dense matter can unravel the mysterious inner workings of planetary cores and stellar interiors 1 . These conditions span a large range of densities and temperatures (ρ = 10^0–10^6 g cm^-3 and T = 10^3–10^7 K), with pressures varying from ~1 Mbar (one million times Earth’s atmospheric pressure; 1 Mbar = 10^11 Pa) to ~500 Gbar (1 Gbar = 10^14 Pa). Understanding the physics of matter at such ultrahigh pressures can have many applications, including determining the age of the Universe through white dwarf cosmochronometry 2 , interpreting astrophysical observations 3 , 4 , 5 , and designing high-performance inertial fusion targets 6 , 7 , 8 . 
Thanks to technological advances in high-power lasers (including x-ray free electron lasers) and pulsed-power machines, this extreme state of matter can now be accessed in the laboratory 9 , 10 , 11 , but only for a short period of time (picosecond to microsecond timescales) depending on the driver and experimental geometry. Nonetheless, these techniques provide a unique “window” for interrogating the physics of matter at extreme conditions. The implosion spectroscopy measurements and model development presented in this work aim to reveal a more-detailed picture of atomic physics in dense-plasma environments at billion atmosphere (Gbar) pressures. Spherically-convergent techniques uniquely access the gigabar pressure regime in experiments, providing the necessary data to test atomic physics models for warm and hot dense plasmas. X-ray spectroscopy, a common and sometimes only means to diagnose and understand short-lived plasmas, measures x-ray emission and absorption with spatial, spectral, and/or temporal resolution 12 , 13 , 14 , 15 , 16 . Observing atomic line positions and spectral widths can reveal the physical processes that are occurring inside the system. Reliable atomic and plasma physics models are required to interpret these spectral signatures and have generally proven to be adequate for spectroscopically diagnosing classical/ideal plasmas 17 , 18 , 19 , 20 . In this regime, collisional-radiative equilibrium ( CRE ) models 21 , 22 are successfully used, which combine accurate atomic data from isolated atom calculations with appropriate continuum-lowering models to describe dilute plasma effects (e.g., ionization, screening, and broadening). This approach can provide guidance, for example, on the inference of plasma density and temperature 17 , 18 , 19 , 20 . However, with increasing energy density, experimental measurements over the last decade have revealed potential inconsistencies with traditional CRE treatments. 
For instance, experimental measurements 23 , 24 on the K-edge shift of solid-density aluminum plasmas (heated by x-ray free electron lasers) favored the continuum lowering model developed by Ecker and Kroll 25 , while shock-compression experiments 26 on the same material gave better agreement with a different continuum-lowering model by Stewart and Pyatt 27 . In addition, iron opacity measurements 28 at pressures below 1 Mbar showed very good agreement with traditional CRE-type opacity calculations, while significant disagreements 29 , 30 were found between measurements and theory at elevated densities and temperatures (for example, at around 10 Mbar for iron plasmas). It remains an “unsolved mystery” to this day, even though much effort has been applied to this open question from both theoretical and experimental perspectives 30 , 31 , 32 . Today, one can accurately compute the electronic energy levels of an isolated atom by solving the many-body Schrödinger or Dirac equations, for which the calculation precision can be improved systematically by varying the sophistication of the methods that are implemented, from the simplest Hartree–Fock method to advanced multi-configuration interactions. However, when atoms are put into a non-ideal (i.e., strongly coupled and/or degenerate) plasma environment, significant discrepancies appear between detailed spectroscopic measurements and calculations. One outstanding example is the inconsistency of hydrogen line broadening in the dilute but cold (n_e = 10^15–10^18 cm^-3 and T = 10^3–10^5 K) photospheric plasmas of white dwarfs 33 , in which plasma conditions inferred from the broadening of different lines in the same plasma can vary significantly, even amongst the best atomic physics models that are currently available. These variations can have significant implications for deducing the mass and age of white dwarfs by affecting the standard candle for cosmochronometry 2 . 
A similar situation occurs in warm dense plasmas under high-energy-density (HED) conditions, in which high-density effects (many-body coupling) and quantum electron degeneracy can drastically alter atomic physics relative to the isolated case. Reconciling how atomic physics changes in such non-ideal plasmas demands progress in both experiments and theory, which must account for the plasma environment self-consistently. Over the last few years, high-resolution absorption and fluorescence spectra have been used in magnetically driven inertial fusion (cylindrical liner) experiments to study the electronic structure of warm dense matter under extreme compression 16 , 34 . These studies have shown that a self-consistent field model based on density-functional theory (DFT) could reproduce K-edge and fluorescence line shifts at independently diagnosed, imploded plasma conditions (10 eV and n_e = 10^24 cm^-3), but collisional-radiative models with ad-hoc density effects could not reproduce the measured x-ray spectra 34 . A pure compressional experiment without thermal or ionization effects measured density-induced shifts in the Kβ line of cobalt at 8 Mbar, in good agreement with a self-consistent DFT model, and found significant differences among the predictions of several CRE models 35 . It is also noted that DFT-based modeling has been successfully applied to x-ray near-edge absorption spectroscopy (XANES) for warm-dense matter 36 , 37 , 38 . These earlier XANES experiments showed absorption features in good agreement with DFT calculations 36 , 37 , 38 . Extension of these studies to gigabar pressures is very important because of their relevance to fundamental dense plasma theory, inertial fusion energy, and laboratory astrophysics. Here, we report x-ray spectroscopy measurements at gigabar pressures using laser-driven implosions. These measurements are used to test a DFT-based multi-band kinetic model (VERITAS), which is developed in this work. 
The VERITAS model uses DFT-derived band (atomic level) information to compute the radiative transition rates that can be coupled to the radiation transfer equation to describe the radiation generation and transport processes in a dense plasma. With Cu (as a witness element) doped inside a 30-μm-thick plastic shell implosion, we performed time-integrated and time-resolved Cu K α emission (the 2p → 1 s transition) and 1s-2p absorption measurements during shell stagnation. Both of these inverse processes are observed on the same experiment; photo-ionization of 1 s electrons enables K α -emission, and thermal-ionization of 2p electrons enables 1s-2p absorption. These observations are directly connected to the time-dependent atomic ionization balance in the assembled dense plasma. The system is further constrained by integrated measurements of the compressed areal density (ρR), neutron yield and bang-time, and ion temperature, allowing the spectroscopic data to differentiate the DFT-based kinetic model from traditional treatments based on isolated-atom calculations and ad hoc continuum-lowering models. The paper is organized as follows: first, the necessity of a reliable atomic physics model for interpreting x-ray spectroscopic measurements is demonstrated using a surrogate dense-plasma object. The experimental results are then presented with a detailed spectral comparison between measurements and simulations based on traditional atomic physics models and the DFT-based approach that is developed in this work. Finally, the implications of these results for understanding dense plasma environments are discussed. Results Surrogate dense-plasma object To illustrate why a reliable atomic physics model is required a priori for interpreting dense plasma spectroscopy measurements, we construct a surrogate dense-plasma object in spherical geometry and compute synthetic x-ray spectra based on different atomic physics treatments. 
The surrogate plasma object consists of a 20-μm-radius core of 1%-Ar-doped deuterium plasma, having a given mass density of ρ = 10 g cm^-3 and temperature of kT = 1000 eV, surrounded by four concentric homogeneous shells of CH or Cu-doped CH with densities, temperatures, and thicknesses shown in Fig. 1a . The Cu-doped CH plasma serves as a “witness” layer (denoted as CHCu[2%]), which has 2% atomic fraction of Cu, uniformly mixed into the CH plasma. Fig. 1: Illustration of predicted spectroscopic differences for warm-/hot-dense plasmas by different atomic physics models. a Schematic of a surrogate dense-plasma object consisting of a Cu-doped CH plasma layer for spectroscopy. b The predicted Kα emission signal from the doped Cu layer of mass density of ρ = 20 g cm^-3 and kT = 200 eV, by three different models: VERITAS (blue solid), collisional-radiative equilibrium (CRE) models with continuum lowering of Stewart–Pyatt (green long dash) and Ecker–Kroll (black dash-dotted). c The predicted 1s-2p absorption feature from the doped Cu layer at mass density of ρ = 20 g cm^-3 and temperature kT = 300 eV. Synthetic spectra, calculated with three different atomic physics models, are shown in Fig. 1b, c for the same CHCu[2%] density of ρ = 20 g cm^-3, but different temperatures (kT = 200 eV and kT = 300 eV, respectively). The traditional CRE simulations used an atomic database (ATBASE), implemented by the Spect3D software package, based on a kinetic description for the atomic level populations, by which levels are populated and depopulated by radiative and collisional processes, and coupled to nonlocal radiation transport. As discussed above, these CRE models need to invoke continuum-lowering models to “destroy” bound levels and account for plasma effects (pressure ionization and lowering of ionization thresholds). The remaining results in Fig. 1b, c come from VERITAS, a new DFT-based multi-band kinetic model for dense plasma spectroscopy. 
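To give a feel for why the choice of continuum-lowering model matters at these conditions, the following back-of-the-envelope sketch evaluates the two textbook limiting estimates of ionization-potential depression, the weak-coupling Debye–Hückel limit and the strong-coupling ion-sphere limit, at roughly the witness-layer conditions quoted above (ρ = 20 g cm^-3, kT = 200 eV). This is not the Stewart–Pyatt or Ecker–Kroll implementation used by Spect3D, and the mean ionization Zbar is an assumed round number, so the numbers are indicative only.

```python
# Back-of-envelope ionization-potential-depression (IPD) limits for the
# CHCu[2%] witness layer near rho = 20 g/cm^3, kT = 200 eV.
# These are the textbook Debye-Hueckel (weak coupling) and ion-sphere
# (strong coupling) limits, NOT the Stewart-Pyatt / Ecker-Kroll models
# themselves; Zbar below is an assumed, illustrative mean ionization.
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
u = 1.66053906660e-27      # atomic mass unit, kg

rho = 20e3                 # mass density, kg/m^3
kT_J = 200.0 * e           # temperature, J (200 eV)

# CH stoichiometry: one C (12 u) + one H (1 u) per formula unit;
# the 2% Cu dopant is neglected for this rough estimate.
n_ion = 2.0 * rho / (13.0 * u)   # ions per m^3
Zbar = 3.5                       # assumed mean ionization per ion
n_e = Zbar * n_ion               # free electrons per m^3

lam_D = math.sqrt(eps0 * kT_J / (n_e * e**2))        # electron Debye length, m
R0 = (3.0 / (4.0 * math.pi * n_ion)) ** (1.0 / 3.0)  # ion-sphere radius, m

def ipd_debye_eV(z):
    """Debye-Hueckel IPD (eV) for ionizing charge state z -> z+1."""
    return (z + 1) * e / (4.0 * math.pi * eps0 * lam_D)

def ipd_ion_sphere_eV(z):
    """Ion-sphere IPD (eV) for ionizing charge state z -> z+1."""
    return 1.5 * (z + 1) * e / (4.0 * math.pi * eps0 * R0)

print(f"lambda_D = {lam_D*1e9:.3f} nm, R0 = {R0*1e9:.3f} nm")
for z in (10, 15, 20):
    print(f"z={z}: DH {ipd_debye_eV(z):.0f} eV, IS {ipd_ion_sphere_eV(z):.0f} eV")
```

Because the Debye length comes out smaller than the ion-sphere radius here, the plasma sits in the strongly coupled regime where such perturbative limits diverge from one another by hundreds of eV, which is one reason ad hoc continuum-lowering prescriptions can yield conflicting spectra.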
The VERITAS code The details of VERITAS can be found in Methods; here we briefly describe its essential components: (1) the electronic structure of dense Cu-doped CH plasma is determined self-consistently by DFT through quantum molecular-dynamics simulations using all-electron potentials, for a given density and temperature grid; (2) certain electronic bands, such as the 1 s, 2p, 3p , and continuum , are chosen to be included in the model – the oscillator strengths among these bands are calculated for the considered radiative dipole transitions; and (3) the kinetic equation invoking the DFT-determined transition rates is used to describe the population change in these energy bands due to radiative transitions, which is coupled to the non-local radiation transport equation to ensure that the local radiation field is consistent with the band population. In contrast to a traditional CRE treatment, our DFT-based kinetic code – VERITAS —explicitly accounts for the microphysical interactions among ions and the dense plasma environment. Energy band shifting and ionization balance are self-consistently described, without invocation of an ad hoc continuum lowering model. This model development is based on the preliminary success of treating warm-dense plasmas as quantum many-body systems 39 , 40 , 41 , 42 , 43 , 44 with mean-field DFT. The VERITAS predictions for the surrogate dense-plasma object prescribed by Fig. 1a are indicated by the blue solid curves in Fig. 1b, c . For the case of kT = 200 eV, Fig. 1b shows that when the hot-spot radiation streams out through the CHCu[2%] layer, high-energy photons excite or ionize the 1 s core electron of Cu, leading to K α emission (due to the 2p → 1 s transition). As the temperature of the Cu-doped layer increases to kT = 300 eV, the spectroscopic feature changes from K α emission to the dominant 1s-2p absorption, shown by Fig. 1c . 
This feature change is caused by the appreciable depletion of the Cu 2p population at this higher temperature. Compared to the DFT-based VERITAS model, the two CRE models marked as “ATBASE + Stewart–Pyatt” (green dashed curve) and “ATBASE + Ecker–Kroll” (black dash-dotted curve) give quite different spectroscopic predictions for the same plasma conditions. These differences are quantitatively distinguishable: (1) the Kα-emission peak shifts by ~20 eV in the two CRE models when compared to VERITAS for the low-temperature case shown in Fig. 1b ; (2) the Kα-emission peak from VERITAS is more than two-fold stronger than in both CRE models, while the Ecker–Kroll model predicts a 1s-2p absorption feature even at kT = 200 eV; (3) at a higher temperature of kT = 300 eV, all models predict the 1s-2p absorption, although the ATBASE + Ecker–Kroll model gives a wider and stronger absorption feature, as indicated by Fig. 1c ; and (4) at this temperature the VERITAS and Stewart–Pyatt models give a similar absorption width, but the latter shows a “double-dip” feature. To investigate what detailed atomic physics drives these different observations, we compare in Table 1 the free-electron density, average Cu ionization (Z*_Cu), and Cu 2p population predicted by the following spectral models: ATBASE + Stewart–Pyatt (Spect3D, a CRE code), DFT + QMD (VERITAS), DFT + AA (Muze), FAC + AA [flexible atomic code with the plasma environment inferred by an average-atom (AA) model], and one other CRE code, SCRAM. The “FAC + AA” model uses FAC-code calculations for the atomic structure of a Cu atom that is “embedded” in a CH plasma mixture, in which the plasma environment is described by an average-atom (AA)-type model. It embodies a similar “spirit” to DFT, with a self-consistent-field (SCF) calculation of plasma screening for an atom embedded in a plasma mixture. 
Table 1 Comparisons of free-electron density (n_e), average ionization Z*_Cu, and 2p population (f_2p) of Cu in warm-/hot-dense plasmas. The comparison indicates that both Z*_Cu (which governs Kα shifts) and the depletion of 2p (which controls the Kα intensity) are similar among the three DFT-based models [VERITAS, Muze, and FAC + AA]. By contrast, the traditional CRE models with similar ad-hoc continuum-lowering treatments differ from the self-consistent models and even from each other. These noticeable differences have motivated us to design and perform experiments in a similar regime, aiming to inform the development of a more-reliable HED atomic physics model for radiation generation and transport in dense plasmas. Experimental setup and diagnostics The experiment used a spherical, laser-driven implosion on the Omega Laser Facility. The target, shown schematically in Fig. 2a , consists of a 30-μm-thick polystyrene (CH) shell with a 10-μm-thick layer uniformly doped with 2% Cu (atomic fraction) and a 1%-Ar-doped deuterium (D2Ar[1%]) core fill. The 10-μm-thick Cu-doped layer begins ~3 μm from the inner surface of the CH shell. The target was imploded by 60 laser beams on OMEGA with a 1-ns square pulse having a total energy of ~26 kJ. When the laser pulse irradiates the spherical capsule, laser ablation launches a strong shock wave that compresses the target. After the shock breaks out of the inner surface of the shell into the gas-filled core, the shell is accelerated inwards until it stagnates at a certain radius. At stagnation, the contained gas is compressed and heated to form a hot core, which emits x-rays that probe the stagnating shell and enable our spectroscopic measurements. Fig. 2: Time-resolved x-ray spectroscopy experiment of warm-/hot-dense plasmas at white-dwarf envelope conditions of Gbar pressures. a Schematic targets for implosion spectroscopy on OMEGA. 
b Example of streaked spectra measured in experiments. c The pressure-density region probed by various HED experiments: GEKKO 47 , OMEGA 48 , Nova 45 , NIF by Doeppner et al. 11 , NIF by Kritcher et al. 51 , 52 , NIF by Fletcher et al. 49 , as well as non-Hugoniot work by Doeppner et al. 50 on NIF. d The density-temperature conditions of a typical white dwarf of 0.6 M⊙ (0.6 solar mass) as it cools from a hot, young state (right) to older, colder structures (left). Convective regions in the stars are shown in red. The regime probed by the experiments is shown by the green dashed circle. Inferred from DRACO simulations, the plasma temperature and density conditions of the imploding Cu-doped layer vary from kT ≈ 10‒50 eV and ρ ≈ 2‒10 g cm^-3 (in-flight stage) to kT ≈ 200‒500 eV and ρ ≈ 10‒25 g cm^-3 during stagnation. Both time-integrated and time-resolved x-ray spectrometers were used to record the emergent radiation spectrum (see Methods for further details). Figure 2b shows a typical time-resolved x-ray spectrum in the photon energy range 7800 to 8600 eV, which clearly indicates the Cu emission and absorption features of interest during the shell stagnation and core flash. Implosion performance These high-adiabat (α = 10) and low-velocity (~250 km s^-1) implosions are stable to laser imprint and other perturbations, as indicated by one- and two-dimensional radiation-hydrodynamic simulations using the LILAC and DRACO codes (see below), as well as integrated experimental measurements of the implosion performance (see Table 2 ). Table 2 shows that the DD fusion neutron yield, neutron-averaged ion temperature <T_i>_n , neutron-averaged shell areal density <ρR>_n , and neutron bang-time are in close agreement with LILAC and DRACO simulations. 
Based on these observations, we can reasonably process the radiation-hydrodynamic simulations with atomic physics models to obtain synthetic x-ray spectra and compare them to experimental measurements. Table 2 Comparisons of implosion performance between experiment and DRACO simulation. Compared to other shocked-CH studies 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 carried out mainly along the principal Hugoniot, our experiment has extended the pressure and density conditions at which both time-integrated and time-resolved x-ray spectroscopic measurements have been conducted: gigabar (Gbar) pressures and ~15–20× solid density, as indicated by Fig. 2c . Inferred from DRACO simulations, the plasma temperature and density conditions in the imploding Cu-doped layer vary from kT ≈ 10‒50 eV and ρ ≈ 2‒10 g cm^-3 (in-flight stage) to kT ≈ 200‒500 eV and ρ ≈ 10‒25 g cm^-3 during stagnation. The corresponding pressure in the compressed shell changes from ~50 Mbar to a maximum value approaching ~5 Gbar. When one casts these dense plasma conditions achieved on OMEGA to the density-temperature conditions of a typical white dwarf (0.6 M⊙) during its cooling phase, Fig. 2d shows the experiment can potentially probe the equation of state and transport properties of the convective region of a white dwarf’s envelope. Accurate knowledge about ionization balance in such conditions could directly affect the modeling of conduction and the radiative cooling of white dwarfs. Spectroscopic modeling Using the DRACO-simulated dynamic plasma conditions, we investigated x-ray generation and transport through the target using two CRE models (ATBASE and FAC) and the DFT-based kinetic code VERITAS. The predicted time-integrated spectra are compared with the experimental measurements in Fig. 3 , in which the x-ray signal is plotted as a function of photon energy [all normalized to the continuum signal level at 7800 eV]. The experimental spectra (Fig. 
3b) show both the pronounced Kα emission peaked at ~8042 eV and the 1s-2p absorption of Cu in the higher photon energy range of 8100–8250 eV. Both the location and amplitude of the emission and absorption features are appropriately captured by VERITAS (Fig. 3a ). Fig. 3: Comparison of time-integrated Kα-emission and 1s-2p absorption signals between experiment and models. a The DFT-based VERITAS calculations. b The experimental measurement. c The CRE model calculations with the atomic database (ATBASE) in combination with the Stewart–Pyatt and Ecker–Kroll continuum lowering models. d The CRE model calculations using flexible atomic code (FAC) calculations with the two continuum lowering models. The time integration in the calculations has been done from t = 1.7 ns to t = 2.4 ns during the hot-spot flash, with snapshots for each 20-ps time interval. Figure 3c, d show the Spect3D simulation results, in which either the atomic database (ATBASE) or the flexible atomic code (FAC) calculations are combined with the Ecker–Kroll and Stewart–Pyatt continuum lowering models. When these CRE results are compared to experiments, they give a conflicting conclusion about the continuum lowering model. Namely, the experimental emission and absorption features are qualitatively reproduced by the two CRE simulations of “ATBASE + Stewart–Pyatt” and “FAC + Ecker–Kroll” in Fig. 3c, d (though the emission peaks are too high), while the other two combinations drastically disagree with experiments. This illustrates again the dilemma of the traditional spectroscopic treatment for warm dense plasmas: which ad hoc continuum lowering model works better depends on the atomic physics model that is invoked. The resemblance between the FAC + Ecker–Kroll model (Fig. 3d ) and experiments is likely coincidental, as other recent measurements 53 of ionization-potential depression have defied the Ecker–Kroll model. 
Overall, the DFT-based VERITAS model, without invocation of an ad hoc continuum lowering model, better resembles the observed x-ray signal in the experiments. Nonetheless, one can see that the VERITAS -predicted continuum slope, the K α -emission amplitude, and the 1s-2p absorption width are still slightly mismatched with respect to the experiment. This small spectroscopic discrepancy might be attributed to some unavoidable three-dimensional effects, even though the time-integrated implosion measurements overall agree with 2D DRACO simulations. For instance, the stalk perturbation could inject a small and localized portion of the Cu-doped layer closer to the hot spot, which, to some extent, could contribute to the measured spectra in ways that are not accounted for in the 2D model. These small differences are further discussed in Supplementary Information . Time-resolved x-ray spectrum Experimental and synthetic time-resolved x-ray signals are presented in Fig. 4 . The top three panels compare the measured streak-camera image of the x-ray emission and absorption over the core flash (Fig. 4b ) with predictions from the ATBASE + Stewart–Pyatt model (Fig. 4a ) and VERITAS (Fig. 4c ). Figure 4d–f give quantitative comparisons at t = 1.95 ns, t = 2.05 ns, and t = 2.15 ns. All these cases are normalized to each other with the same continuum signal level. The experimental time-resolved spectrum shows pronounced K α emission early in time (Fig. 4d ), which changes to a dominant 1s-2p absorption as time proceeds. Fig. 4: Comparison of time-resolved x-ray signals between experiment and models during the core flash. a The streaked spectra predicted by traditional CRE model ( Spect3D ) with isolated atomic database plus continuum-lowering (Stewart–Pyatt). b The experimental measurement. c The streaked spectra predicted by VERITAS (a DFT-based kinetic model). 
d–f The spectral comparisons among the three cases at three distinct time line-outs: t = 1.95 ns, 2.05 ns, and 2.15 ns. The experimental error bar of ±40% is mainly from x-ray photon statistics of the streaked signal. At early times, the DFT-based kinetic model (VERITAS) agrees well with the Kα emission measurements, while the ATBASE + Stewart–Pyatt model over-predicts the emission peak and resolves the Kα1 and Kα2 spectral lines (due to less broadening), as shown in Fig. 4a . At t = 2.05 ns the heat wave reaches the Cu-doped layer, leading to a 1s-2p absorption “dip” in Fig. 4b . Again, the ATBASE + Stewart–Pyatt model gives a stronger absorption dip at lower photon energy in comparison to experiment, while VERITAS shows the same level of 1s-2p absorption depth. It is noted that the experimental 1s-2p absorption feature is somewhat wider than in the model predictions. This slight discrepancy might come from the possibility that regions of the Cu-doped CH were driven deeper towards the hot spot by the stalk or other 3D perturbations in the experiments. Finally, when the shock and heat wave have propagated through most of the CHCu[2%] layer, VERITAS still captures the experimental absorption level and width appropriately (Fig. 4f ), while the ATBASE + Stewart–Pyatt model gives somewhat stronger absorption (green dashed curve). Discussion The spectroscopic evolution from Kα emission to 1s-2p absorption is directly related to the plasma conditions that are dynamically changing in the Cu-doped layer during stagnation. The density and temperature contours of the imploding shell at stagnation are respectively depicted in the upper and lower panels of Fig. 5a , as predicted by DRACO simulations. It shows the formation of a hot D2Ar[1%] core, with a return shock reaching the Cu-doped-CH layer, and a heat wave from the hot core propagating outward by thermal conduction. Fig. 
5: The rad-hydro-predicted warm-/hot-dense plasma conditions during the core flash of Cu-doped CH target implosions on OMEGA. a The density (upper) and temperature (lower) contour plots of dense plasma conditions at stagnation (t = 2.05 ns) from the 2D DRACO radiation-hydrodynamic simulation. The inner and outer circles of dotted lines indicate the inner and outer boundary of the Cu-doped layer, whose region is marked by the black arrow. b The time evolution of plasma ρ/T conditions as well as the population of Cu’s 2p state in the Cu-doping region (inferred by VERITAS) during the core flash, in which red symbols represent the situation at the inner interface and blue symbols are for the outer interface. The stagnation time (t = 2.05 ns) is marked by the vertical dashed line. To further illustrate this process, we plot in Fig. 5b the angularly averaged plasma density and temperature at the inner and outer surfaces of the CHCu[2%] layer as a function of time. One sees that the return shock reaches the inner surface of the sample layer at t = 1.95 ns, causing a density jump and shock heating to a temperature of kT = 250 eV; a heat wave follows the return shock due to strong thermal conduction from the hot core as a result of the large temperature gradient, leading to heating of the CHCu[2%] layer to kT ≈ 540 eV (mid-panel of Fig. 5b ); finally, the return shock approaches the outer surface of the sample layer at a later time of t = 2.2 ns. Using these plasma conditions, we show the history of the Cu 2p-band population, as predicted by VERITAS, in the lower panel of Fig. 5b . For a fully occupied 2p band in Cu, there are six electrons in this band (energy level). The population of this 2p band starts to deplete significantly at t = 2.0 ns when the heat wave raises the sample layer’s temperature to over 300 eV, leading to the onset of 1s-2p absorption, which is observed in the time-resolved spectra (Fig. 4e ). 
Before this time, the fully occupied 2p band does not allow 1s-2p absorption to occur, so that K α emission is the dominant feature in the x-ray spectra measured at early times (Fig. 4d ). The plasma conditions change throughout the sample layer as the return shock and heat wave propagate through the CHCu[2%] layer. The observed spectrum represents this competition between K α emission and shifted 1s-2p absorption from different radial locations. Namely, the unshocked and colder regions give pronounced K α emission, while the heated parts contribute dominantly to the 1s-2p absorption feature. These processes compete in generating and transporting radiation and determine what is measured by the x-ray spectrometers. Overall, the DFT-based VERITAS model reasonably describes the dynamic change in measured x-ray spectral features. The traditional CRE models might give the proper level of both K α emission and 1s-2p absorption, but their predictions tend to be highly dependent on their underlying atomic structure and continuum lowering models, which can make it difficult to isolate the physical effects of interest. For these high-adiabat and relatively-low velocity implosion studies on OMEGA, it is noted that the x-ray spectroscopy data are reproducible (see Supplementary Information ). To summarize, we have performed a theoretical and experimental study of atomic physics in Cu-doped plastic at several billion atmospheres of pressure. Overall, a DFT-based approach reproduces many of the emission and absorption features that are observed in the experiment, while traditional plasma spectroscopy treatments show sensitivity to the combination of atomic physics and continuum lowering models that are implemented. This sensitivity contributes to the present open questions on the validity of ad hoc continuum lowering models (see also ref. 54 ). 
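The emission-to-absorption handoff described above can be caricatured with a toy occupancy model: Kα emission requires 2p electrons, while 1s-2p absorption requires 2p holes, so the net spectral feature flips sign as the 2p band thermally depletes. The sketch below is emphatically not the VERITAS kinetic model; the excitation gap dE_eV and weight g_ratio are invented parameters chosen only to reproduce the qualitative trend reported in the paper (emission-dominated near kT = 200 eV, absorption-dominated above roughly 300 eV).

```python
# Toy two-state occupancy model (NOT the VERITAS kinetics) for the
# competition between Cu K-alpha emission and 1s-2p absorption.
# dE_eV and g_ratio are invented, illustrative parameters.
import math

def f_2p(kT_eV, dE_eV=1500.0, g_ratio=200.0):
    """Fractional 2p occupancy (1.0 = six electrons, fully occupied)."""
    x = g_ratio * math.exp(-dE_eV / kT_eV)  # Boltzmann weight of the
    return 1.0 / (1.0 + x)                  # thermally excited state

def net_line_strength(kT_eV):
    """> 0: K-alpha emission dominates; < 0: 1s-2p absorption dominates."""
    occ = f_2p(kT_eV)
    emission = occ          # 2p -> 1s decay needs 2p electrons
    absorption = 1.0 - occ  # 1s -> 2p absorption needs 2p holes
    return emission - absorption

for kT in (100.0, 200.0, 300.0, 500.0):
    print(f"kT = {kT:5.0f} eV: f_2p = {f_2p(kT):.2f}, "
          f"net = {net_line_strength(kT):+.2f}")
```

Even this crude sketch reproduces the sign flip seen in the streaked spectra: the feature is emission-like while the 2p band stays nearly full and turns absorption-like once the heat wave depletes it.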
This work indicates the necessity for a self-consistent treatment of dense plasma effects on altering atomic energy levels/bands and their populations at ultrahigh pressures. The DFT-based VERITAS approach, with potential future benchmarks using other buried metal and metal-alloy layers, could provide a reliable way for simulating radiation generation and transport in dense plasmas encountered in stars and inertial fusion targets. The experimental scheme reported here, based on a laser-driven implosion, can be readily extended to a wide range of materials in single- and multiple-shell geometries, opening the way for far-reaching investigations of extreme atomic physics and DFT models at tremendous pressures. Methods Implosion experiment on OMEGA The experiments were conducted by symmetric laser drive using 60 Omega laser beams. Standard implosion diagnostics were used for these experiments, including a neutron yield detector 55 , a wedge range filter for areal-density measurement 56 , and the neutron-timing diagnostic (NTD) for ion temperature and bang time 57 . Measurement of x-ray emission spectra Bragg reflection crystal spectrometers recorded time-integrated and time-resolved x-ray spectra in the energy range of 7800–8600 eV. One spectrometer 58 was coupled to an x-ray streak camera to achieve 80-ps time resolution; the other was coupled to an x-ray-sensitive image plate. Conversion to source emission From pinhole camera measurements of such implosions, the estimated x-ray source size is ~100 μm in diameter. With respect to the x-ray spectrometers, which are 13–19.3 cm away from the target chamber center, the imploded capsule can be represented as a point source. 
The measured spectra were converted to source emission \({S}_{\nu }\) incident on each resolution element: \({S}_{\nu }\left[\frac{{ph}}{{sr}\cdot {eV}}\right]=\frac{{I}_{{{{{{\rm{\nu }}}}}}}}{f\left(E\right)T\left(E\right)G\left(E\right)}\) , where \({I}_{{{{{{\rm{\nu }}}}}}}\) is the measured signal density (photo-stimulated luminescence (PSL) per pixel for IP and analog-to-digital units (ADU) per pixel for the streak camera), \(f(E)\) the instrument sensitivity function (signal (ADU or PSL) per photon), \(T\left(E\right)\) the filter transmission, and \(G\left(E\right)=R(E)\frac{{dE}}{d\theta }\frac{d\Omega }{{dA}}\) the spectrometer crystal response 59 . \(f(E)\) is constructed from calibration measurements and detector models for both the IP 60 and streak camera 61 , 62 . The integrated reflectivity \(R\left(E\right)\) is calculated from the x-ray optics software XOP 63 . \(\frac{{dE}}{d\theta }\) and \(\frac{d\Omega }{{dA}}\) are calculated from a geometric ray-trace for each spectrometer. Statistical uncertainty due to photon statistics in the time integrated spectrometer is low, of order 0.5% after averaging over pixels in the non-dispersive dimension. In the time-resolved spectrometer, stochastic processes inherent to the streak camera amplification dominate statistical uncertainty, yielding ~30% fractional uncertainty after averaging over a resolution element of 80 ps. Systematic uncertainties include calibration measurements, filter thicknesses, and the crystal integrated reflectivity. The overall resolving power of the x-ray spectrometers is calibrated to be E/ \(\triangle E\) ≈ 1100. The resolving power of the x-ray spectrometers is primarily limited by source size broadening and crystal rocking curve effects 58 . Streak camera time-base The time-base \(t\left(x\right)\) was measured on separate shots by irradiating a Au foil with a train of laser pulses of known timing. 
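The per-resolution-element conversion above is a pointwise division of the measured signal density by the instrument response terms. A minimal sketch of that arithmetic (with made-up illustrative numbers, not calibration data from these experiments) might look like:

```python
import numpy as np

def to_source_emission(I_nu, f, T, G):
    """Convert measured signal density to source emission S_nu [ph / (sr * eV)].

    I_nu : measured signal density (PSL/pixel for the image plate,
           ADU/pixel for the streak camera)
    f    : instrument sensitivity function f(E) (signal per photon)
    T    : filter transmission T(E)
    G    : spectrometer crystal response G(E) = R(E) * dE/dtheta * dOmega/dA
    All inputs are arrays sampled on the same photon-energy grid.
    """
    I_nu, f, T, G = map(np.asarray, (I_nu, f, T, G))
    return I_nu / (f * T * G)

# Illustrative (invented) numbers: a flat response of f*T*G = 1.0
# makes S_nu numerically equal to I_nu.
S = to_source_emission(I_nu=[10.0, 20.0], f=[0.5, 0.5], T=[0.8, 0.8], G=[2.5, 2.5])
```

In practice each factor is itself energy-dependent and carries the calibration and systematic uncertainties described above; the sketch only shows how the terms combine.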
The integration time of each pixel (“dwell time”) \(\Delta t=\frac{{dt}\left(x\right)}{{dx}}\) is used to calculate the source emission rate \(\frac{d{S}_{{{{{{\rm{\nu }}}}}}}}{{dt}}\left[\frac{{ph}}{{sr}\cdot {eV}\cdot s}\right]=\frac{{S}_{{{{{{\rm{\nu }}}}}}}}{\Delta t}\) . X-rays at the high energies of this spectrometer predominantly originate from the core; assuming the x-ray emission closely follows neutron production, the time-base is shifted to align the time of peak x-ray emission with the time of peak neutron production. Radiation-hydrodynamic simulations The radiation-hydrodynamic simulations of implosion experiments have been performed with the 1-D LILAC 64 and 2-D DRACO 65 codes developed at the Laboratory for Laser Energetics. State-of-the-art physics models are employed in these simulations, including 3-D ray-tracing for laser energy deposition with a cross-beam energy transfer (CBET) model 66 , the iSNB 67 nonlocal thermal transport model, and the first-principles equation-of-state (FPEOS 8 , 68 , 69 ) and opacity tables (FPOT 70 , 71 ) for constituent materials. For radiation energy transport, a multi-group diffusion model was used in DRACO with 48-group opacity tables. Cylindrical symmetry was enforced in the 2D DRACO simulations, in which r-z coordinates are employed with the azimuthal symmetry axis along the z -axis. The quasi-1D nature of such high-adiabat implosions should lead to small 2D or 3D effects in x-ray emission calculations, justifying use of 2D (as opposed to 3D) simulations. Laser imprinting was simulated up to a maximum laser-speckle mode of \(l=150\) in DRACO , even though these simulations showed little effect of laser perturbations on such high-adiabat implosions with 1-ns square pulses at a laser intensity of \(\sim 1.1\times {10}^{15}\) W cm −2 . DRACO simulation results were compared with experiments (e.g., see Table 2 ). 
The time-dependent 2-D density and temperature profiles, predicted from DRACO simulations, were used for further processing by a variety of CRE models and VERITAS , for x-ray spectral comparisons with experimental measurements. Collisional radiative equilibrium ( CRE ) modeling Taking the DRACO -predicted plasma density and temperature profiles, we have applied the simulation package Spect3D 21 to perform the CRE modeling of x-ray spectra from these implosions. Spect3D uses atomic databases and continuum lowering models to track energy level populations, which are coupled to nonlocal radiation transport for x-ray generation (e.g., K α -emission of Cu), absorption, and propagation. The spectral resolving power \((E/\triangle E=1100)\) , temporal resolution ( \(\delta t=80\) ps), and spatial resolution ( \(\delta x=10\) μm) were applied to the synthetic x-ray spectra from Spect3D . In Spect3D simulations, we processed 10 equal-angle-spaced radial lineouts of DRACO -predicted density and temperature profiles along different angular directions with kinetic models incorporating detailed atomic physics and radiation transport. We then averaged the resulting spectra among radial lineouts. Given the largely 1D-like performance of such high-adiabat implosions (see Fig. 5a and Table 2 ), this quasi-2D treatment should be reasonable to avoid the time-consuming computations of 2D radiation transport with detailed atomic kinetics, for a relatively large 2D grid size of \(601\times 591\) . Nevertheless, the use of 1-D radial lineouts in lieu of a full 2D or 3D analysis could be an additional cause for the small discrepancy observed in VERITAS -experiment comparisons. VERITAS: DFT-based multi-band kinetic modeling The VERITAS code, developed in this work, is based on a density-functional theory (DFT) description of energy bands in dense plasma. 
The kinetic modeling of multi-band populations \(({n}_{{{{{{\rm{i}}}}}}})\) is coupled with radiation transfer, as described by the following coupled equations for the steady state condition: $$\left\{\begin{array}{c}\frac{d{n}_{{{{{{\rm{i}}}}}}}}{{dt}}={-n}_{{{{{{\rm{i}}}}}}}\mathop{\sum }\limits_{j\ne i}^{N}{W}_{{{{{{\rm{ij}}}}}}}(I,\nu )+\mathop{\sum }\limits_{j\ne i}^{N}{n}_{{{{{{\rm{j}}}}}}}{W}_{{{{{{\rm{ji}}}}}}}(I,\nu )=0,\,{{{{{\rm{for}}}}}}\,{{{{{\rm{band}}}}}}\,i\\ \mu \frac{\partial I(r,\nu )}{\partial r}+\frac{(1-{\mu }^{2})}{r}\frac{\partial I(r,\nu )}{\partial \mu }=\eta \left(r,\nu \right)-\chi (r,\nu )I(r,\nu )\end{array}\right.$$ (1) with \({W}_{{{{{{\rm{ij}}}}}}}\) being the transition rates among the total \(N\) bands considered, which may depend on the specific intensity \(I(r,\, \nu)\) of x-rays at radius r and frequency \(\nu\) (e.g., for photoionization and stimulated radiative processes). Here, the line of sight is along the \(z\) axis, which has an angle \(\theta\) relative to the 1-D spherical radial coordinate \(r\) , i.e., \(\mu={{\cos }}\left(\theta \right).\) The above rate equation describes the population change for each band at the steady state condition \(({dn}/{dt}=0)\) , due to radiative processes among the dipole-transition–allowed energy bands. For example, the population rate of change on band i (due to radiative coupling to band j with \({E}_{{{{{{\rm{i}}}}}}} < {E}_{{{{{{\rm{j}}}}}}}\) ) can be defined as the sum of the depopulating term \(-{{n}_{{{{{{\rm{i}}}}}}}W}_{{{{{{\rm{ij}}}}}}}={-{n}_{{{{{{\rm{ij}}}}}}}B}_{{{{{{\rm{ij}}}}}}}\bar{{I}_{{{{{{\rm{ij}}}}}}}}\) (stimulated absorption from i to j) and the populating term \({{n}_{{{{{{\rm{j}}}}}}}W}_{{{{{{\rm{ji}}}}}}}={{n}_{{{{{{\rm{ji}}}}}}}A}_{{{{{{\rm{ji}}}}}}}+{{n}_{{{{{{\rm{ji}}}}}}}B}_{{{{{{\rm{ji}}}}}}}\bar{{I}_{{ij}}}\) (spontaneous and stimulated emission from j to i) . 
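As an illustration of the top (rate) equation, the sketch below solves the steady-state condition dn/dt = 0 for a toy system with fixed, invented transition rates. It deliberately omits the radiation-field dependence of the rates W_ij, and hence the iteration with the transfer equation that VERITAS performs; it is only meant to show the linear-algebra structure of the population balance.

```python
import numpy as np

def steady_state_populations(W, n_total=1.0):
    """Solve dn_i/dt = -n_i * sum_{j!=i} W_ij + sum_{j!=i} n_j * W_ji = 0.

    W : (N, N) matrix of fixed transition rates W[i, j] from band i to band j
        (diagonal entries are ignored). Populations are normalized to n_total.
    """
    W = np.asarray(W, dtype=float)
    N = W.shape[0]
    # Build the rate matrix M so that dn/dt = M @ n:
    M = W.T.copy()                              # gains: n_j * W_ji feeds band i
    np.fill_diagonal(M, 0.0)
    M -= np.diag(W.sum(axis=1) - np.diag(W))    # losses: -n_i * sum_{j!=i} W_ij
    # The steady-state system M @ n = 0 is singular; replace one equation
    # by the normalization constraint sum_i n_i = n_total.
    A = np.vstack([M[:-1], np.ones(N)])
    b = np.zeros(N)
    b[-1] = n_total
    return np.linalg.solve(A, b)

# Two-band toy system: up-rate 1.0, down-rate 3.0 -> populations in ratio 3:1.
n = steady_state_populations([[0.0, 1.0],
                              [3.0, 0.0]], n_total=4.0)
```

In the real solver the rates depend on the local specific intensity I(r, ν), so the populations and the radiation field must be relaxed together until both equations are simultaneously satisfied.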
Note that for the case of \({E}_{{{{{{\rm{i}}}}}}} > {E}_{{{{{{\rm{j}}}}}}}\) only the depopulating term appears in the rate equation for band i . Here, \({n}_{{{{{{\rm{ij}}}}}}}\) and \({n}_{{{{{{\rm{ji}}}}}}}\) are the maximum populations allowed for the corresponding radiative process. For instance, \({n}_{{{{{{\rm{ji}}}}}}}\) will depend on the number of “ holes ” (depletion) in band i and the weighted population in band j : \({n}_{{{{{{\rm{ji}}}}}}}={\min }[({n}_{{{{{{\rm{i}}}}}},{{{{{\rm{full}}}}}}}-{n}_{{{{{{\rm{i}}}}}}}),({n}_{{{{{{\rm{j}}}}}}}\times {g}_{{{{{{\rm{i}}}}}}}/{g}_{{{{{{\rm{j}}}}}}})]\) with \({n}_{{{{{{\rm{i}}}}}},{{{{{\rm{full}}}}}}}\) being the fully-occupied population on band i and \({g}_{{{{{{\rm{i}}}}}}}({g}_{{{{{{\rm{j}}}}}}})\) standing for the degeneracy of band i ( j ). The Einstein coefficients A and B are related to the oscillator strength between bands i and j , which can be calculated using DFT-determined orbitals. The frequency averaged mean radiation intensity is defined as \(\bar{{I}_{{{{{{\rm{ij}}}}}}}}=\int I\left(\nu \right){\times \phi }_{{{{{{\rm{ij}}}}}}}(\nu -{\nu}_{{{{{{\rm{ij}}}}}}})d\nu\) , with the Voigt line profile \({\phi }_{{{{{{\rm{ij}}}}}}}(\nu -{\nu}_{{{{{{\rm{ij}}}}}}})\) centered at the frequency \({\nu}_{{ij}}\) corresponding to the energy gap between bands i and j and with line broadening models discussed later. The emissivity \(\eta \left(r,\, n,\, \nu\right)\) and absorption coefficient \(\chi \left(r,\, n,\, \nu\right)\) have a dependence on the band population \((n)\) of dense plasmas at the local grid \(r\) and the radiation frequency \(\nu\) . The population changes on multiple energy bands are kinetically modeled by the rate equation (top equation), in which the radiative transition coefficients among different energy bands are calculated by using the DFT-determined orbitals, in contrast to the use of atomic databases of isolated atom plus continuum lowering in traditional CRE models. 
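The Voigt profile invoked here is the convolution of a Gaussian component (e.g., Doppler) with a Lorentzian component (e.g., Stark/collisional). A minimal numerical sketch, using a brute-force grid convolution rather than the evaluation VERITAS actually uses, is:

```python
import numpy as np

def voigt_profile(x, sigma, gamma, half_width=60.0, n=6001):
    """Voigt line profile phi(x): a Gaussian (std dev sigma) convolved with a
    Lorentzian (half-width gamma), computed by direct numerical convolution
    on a uniform grid and interpolated onto the requested points."""
    t = np.linspace(-half_width, half_width, n)
    dt = t[1] - t[0]
    gauss = np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = (gamma / np.pi) / (t**2 + gamma**2)
    phi = np.convolve(gauss, lorentz, mode="same") * dt
    return np.interp(x, t, phi)

# Example line shape with invented widths (both in the same frequency units).
x = np.linspace(-10.0, 10.0, 2001)
phi = voigt_profile(x, sigma=1.0, gamma=0.5)
```

Production codes typically evaluate the Voigt function via the complex Faddeeva function instead of an explicit convolution, but the result is the same unit-area profile; the truncated Lorentzian wings mean the numerical area here is slightly below one.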
The DFT-based rate equation is then coupled with the radiation transfer equation (bottom equation) to simulate x-ray photoionization, emission, and band-band absorption processes throughout the density-temperature grid given by radiation-hydrodynamic codes. The radiation field at any spatial grid is determined by self-consistently solving the coupled rate and radiation transfer equations until a steady-state solution of the populations is achieved, similar to the procedure employed in Spect3D . Since the DFT description of energy bands in dense plasmas is self-consistent with the plasma environment, the band energy shift is naturally included for a given plasma condition. So, there is no need to invoke a continuum lowering model in VERITAS . In principle, many energy bands can be included in VERITAS , without prescribing which are bound and which belong to the continuum , even though such a designation can be determined from density-of-state (DOS) calculations. Specifically, we have included the measurement-relevant energy bands [ 1 s, 2p, 3p , and continuum of copper (Cu)] for modeling the implosion spectroscopy experiments. The main radiative processes considered among Cu’s energy bands are: \(1s\leftrightarrow {continuum}\) (photoionization/radiative-recombination), \(1s\leftrightarrow 2p\) (band-band absorption/K α -emission), \(1s\leftrightarrow 3p\) (band-band absorption/K β -emission), \(2p\leftrightarrow {continuum}\) (photoionization/radiative-recombination), and \(3p\leftrightarrow {continuum}\) (photoionization/radiative-recombination). Even though we are focusing on the \(1s\leftrightarrow 2p\) transition spectra, the inclusion of the bounded \(3p\) -band of Cu is to ensure that all relevant population and depletion channels of the \(1s\) -band are properly accounted for in the kinetic simulations. 
The exclusion of 2s and 3s bands from VERITAS modeling of the current experiments is based on the fact that their transitions to the 1s band are dipole-forbidden and their coupling to 2p and 3p is outside the spectral range of interest. At the plasma conditions encountered here, the n = 4 bands of Cu have already merged into the continuum . For the conditions studied here ( \(20-500\) eV and \(2-20\times\) solid-density), the rates for electron collisional processes are so high that local thermodynamic equilibrium is well maintained. Thus, one can take the thermal-DFT predicted band populations as a starting point in VERITAS to simulate the aforementioned radiative processes only, while the fast electron collisional processes are assumed to balance each other so that they can be omitted from current VERITAS simulations. To enable the VERITAS simulations of these x-ray spectroscopy experiments, we have first built a DFT-based table storing the relevant frequency and oscillator strength of transitions among Cu’s energy bands, for a density and temperature grid of CHCu[2%] spanning the mass density and temperature ranges of \(\rho=2-50\) g cm −3 and \({kT}=10-500\) eV. For each density and temperature condition, we have performed quantum molecular-dynamics (QMD) simulations to sample a variety of ionic configurations of dense CHCu[2%] plasma, based on the thermal-DFT formalism in either the Kohn-Sham orbital-based format or the orbital-free scheme. These DFT-based QMD calculations have been performed by using ABINIT 72 and our in-house OFMD code ( DRAGON ), with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional 73 . For temperatures below ~50 eV the QMD simulations were done by using the orbital-based Kohn-Sham DFT implemented in ABINIT , while for high temperatures ( kT > 50 eV) we turned to orbital-free DFT to perform our QMD simulations. 
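Tabulating transition data on a (density, temperature) grid implies an interpolating lookup at run time. The sketch below shows one plausible such lookup with bilinear interpolation; the interface, grid, and table values are invented for illustration and are not taken from the actual VERITAS table.

```python
import numpy as np

def bilinear_lookup(rho_grid, T_grid, table, rho, T):
    """Look up a tabulated quantity (e.g., a transition energy or oscillator
    strength) on an ascending (density, temperature) grid by bilinear
    interpolation; (rho, T) must lie inside the tabulated range."""
    i = np.clip(np.searchsorted(rho_grid, rho) - 1, 0, len(rho_grid) - 2)
    j = np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2)
    fr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
    ft = (T - T_grid[j]) / (T_grid[j + 1] - T_grid[j])
    return ((1 - fr) * (1 - ft) * table[i, j]
            + fr * (1 - ft) * table[i + 1, j]
            + (1 - fr) * ft * table[i, j + 1]
            + fr * ft * table[i + 1, j + 1])

# Toy 2x2 table over the quoted ranges, with value = rho + T so that the
# bilinear interpolant reproduces the function exactly.
rho_g = np.array([2.0, 50.0])       # g/cm^3
T_g = np.array([10.0, 500.0])       # eV
tab = np.array([[12.0, 502.0],
                [60.0, 550.0]])     # tab[i, j] = rho_g[i] + T_g[j]
val = bilinear_lookup(rho_g, T_g, tab, rho=26.0, T=255.0)
```

The real table presumably stores many bands and transitions per grid point; the sketch only shows the interpolation step for a single scalar quantity.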
Taking snapshots from QMD calculations, we performed the oscillator strength calculations for radiative transitions by using the Kubo-Greenwood package – KGEC@QUANTUM-ESPRESSO 74 , with the all-active-electron projector-augmented-wave (PAW) potential 75 . These choices of DFT packages, exchange-correlation functional, and potentials follow standard practice in the warm-dense matter physics community. The band-energy deficiency from DFT calculations has been compensated with constant shifts derived from comparison with experimental energy levels of Cu at ambient conditions. The band-energy “ deficiency ” from DFT refers to the small (~1–2%) difference between the DFT-calculated 1s-2p energy gap of Cu and the experimentally measured Kα energy at ambient conditions. This small band-gap difference is a known intrinsic limitation of DFT with approximate exchange-correlation functionals (e.g., the PBE used here), which suffer from self-interaction error. These DFT calculations invoked 100–200 atoms of C, H, and Cu according to their atomic fractions in a supercell with periodic boundary conditions, which were converged with respect to number of bands, K-points, and energy cut-off. With such a pre-calculated DFT table accessible to VERITAS , the radiative transition rates can be computed at any spatial grid for plasma conditions of CHCu[2%] given by rad-hydro simulations. It is noted that the kinetic modeling was done only for the sample CHCu[2%] layer, while radiation transport in D 2 Ar and pure CH plasmas was calculated by using the emissivity and opacity tables from PrOpacEOS 76 , the same as those used in CRE modeling. In principle, the band broadening of Cu can be determined from direct DFT-based QMD calculations. However, due to the limited number of Cu atoms involved in such demanding calculations, the resulting band-width (broadening) is currently not reliable due to the lack of sufficient sampling for charge-state distribution (CSD). 
Instead, we have adopted in VERITAS the temperature- and density-dependent broadening information coming from both SCRAM 22 and FAC 77 calculations for Stark (with an enhancement factor of ~5) and CSD broadening effects, as well as Doppler shift due to fluid motion, in a Voigt line profile. Both the SCRAM and FAC codes consider traditional plasma broadening mechanisms, including electron thermal-collision broadening 78 , Stark broadening due to ion micro fields 79 , and broadening from the charge-state distribution 80 . While all of these broadening mechanisms can explain the line-shape observations in low-density and high-temperature classical plasmas, they appear unable to account for the enhanced broadening seen in the dense plasmas created and reported here. We speculate that the current treatment of micro-field induced Stark broadening might have missed some of the density effects from coupled ions in such dense plasmas, hence the ad-hoc 5x increase in broadening applied to the VERITAS results. We hope the experimental observations of enhanced broadening reported here will motivate future investigations on how density effects change line broadening in warm-dense plasmas. Finally, since VERITAS is based on the DFT description of dense plasmas, we expect its low-density limit to be around or slightly below ambient solid density, below which DFT calculations are no longer practically feasible and traditional atomic physics models should work better. Data availability The experimental data, Spect3D simulation data, and VERITAS simulation data that support the findings of this study are available from the corresponding authors upon request. Code availability The VERITAS code that supports the findings of this study is available from the corresponding authors upon request.
Most people are familiar with solids, liquids, and gases as three states of matter. However, a fourth state of matter, called plasma, is the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. Because dense plasma—a hot soup of atoms with free-moving electrons and ions—typically only forms under extreme pressure and temperatures, scientists are still working to comprehend the fundamentals of this state of matter. Understanding how atoms react under extreme pressure conditions—a field known as high-energy-density physics (HEDP)—gives scientists valuable insights into the fields of planetary science, astrophysics, and fusion energy. One important question in the field of HEDP is how plasmas emit or absorb radiation. Current models depicting radiation transport in dense plasmas are heavily based on theory rather than experimental evidence. In a new paper published in Nature Communications, researchers at the University of Rochester Laboratory for Laser Energetics (LLE) used LLE's OMEGA laser to study how radiation travels through dense plasma. The research, led by Suxing Hu, a distinguished scientist and group leader of the High-Energy-Density Physics Theory Group at the LLE and an associate professor of mechanical engineering, and Philip Nilson, a senior scientist in the LLE's Laser-Plasma Interaction group, provides first-of-its-kind experimental data about the behavior of atoms at extreme conditions. The data will be used to improve plasma models, which allow scientists to better understand the evolution of stars and may aid in the realization of controlled nuclear fusion as an alternative energy source. "Experiments using laser-driven implosions on OMEGA have created extreme matter at pressures several billion times the atmospheric pressure at Earth's surface for us to probe how atoms and molecules behave at such extreme conditions," Hu says. 
"These conditions correspond to the conditions inside the so-called envelope of white dwarf stars as well as inertial fusion targets." A NASA image of plasma bursting from the sun. Plasma—a hot soup of atoms with free moving electrons and ions—is the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. A new study from University of Rochester researchers provides experimental data about how radiation travels through dense plasmas, which will help scientists to better understand planetary science and fusion energy. Credit: NASA Using X-ray spectroscopy The researchers used X-ray spectroscopy to measure how radiation is transported through plasmas. X-ray spectroscopy involves aiming a beam of radiation in the form of X-rays at a plasma made of atoms—in this case, copper atoms—under extreme pressure and heat. The researchers used the OMEGA laser both to create the plasma and to create the X-rays aimed at the plasma. When the plasma is bombarded with X-rays, the electrons in the atoms "jump" from one energy level to another by either emitting or absorbing photons of light. A detector measures these changes, revealing the physical processes that are occurring inside the plasma, similar to taking an X-ray diagnostic of a broken bone. A break from conventional theory The researchers' experimental measurements indicate that, when radiation travels through a dense plasma, the changes in atomic energy levels do not follow conventional theories currently used in plasma physics models—so-called "continuum-lowering" models. The researchers instead found that the measurements they observed in their experiments can only be explained using a self-consistent approach based on density-functional theory (DFT). DFT offers a quantum mechanical description of the bonds between atoms and molecules in complex systems. The DFT method was first described in the 1960s and was the subject of the 1998 Nobel Prize in Chemistry. 
"This work reveals fundamental steps for rewriting current textbook descriptions of how radiation generation and transport occurs in dense plasmas," Hu says. "According to our experiments, using a self-consistent DFT approach more accurately describes the transport of radiation in a dense plasma." Says Nilson, "Our approach could provide a reliable way for simulating radiation generation and transport in dense plasmas encountered in stars and inertial fusion targets. The experimental scheme reported here, based on a laser-driven implosion, can be readily extended to a wide range of materials, opening the way for far-reaching investigations of extreme atomic physics at tremendous pressures."
10.1038/s41467-022-34618-6
Biology
Proteins and natural language: Artificial intelligence enables the design of novel proteins
Noelia Ferruz et al, ProtGPT2 is a deep unsupervised language model for protein design, Nature Communications (2022). DOI: 10.1038/s41467-022-32007-7 Noelia Ferruz et al, Controllable protein design with language models, Nature Machine Intelligence (2022). DOI: 10.1038/s42256-022-00499-z Journal information: Nature Machine Intelligence , Nature Communications
https://dx.doi.org/10.1038/s41467-022-32007-7
https://phys.org/news/2022-08-proteins-natural-language-artificial-intelligence.html
Abstract Protein design aims to build novel proteins customized for specific purposes, thereby holding the potential to tackle many environmental and biomedical problems. Recent progress in Transformer-based architectures has enabled the implementation of language models capable of generating text with human-like capabilities. Here, motivated by this success, we describe ProtGPT2, a language model trained on the protein space that generates de novo protein sequences following the principles of natural ones. The generated proteins display natural amino acid propensities, while disorder predictions indicate that 88% of ProtGPT2-generated proteins are globular, in line with natural sequences. Sensitive sequence searches in protein databases show that ProtGPT2 sequences are distantly related to natural ones, and similarity networks further demonstrate that ProtGPT2 is sampling unexplored regions of protein space. AlphaFold prediction of ProtGPT2-sequences yields well-folded non-idealized structures with embodiments and large loops and reveals topologies not captured in current structure databases. ProtGPT2 generates sequences in a matter of seconds and is freely available. Introduction Natural language processing (NLP) has seen extraordinary advances in recent years. Large pre-trained language models have drastically transformed the NLP field and with it, many of the tools we use in our daily lives, such as chatbots, smart assistants, or translation machines. Analogies between protein sequences and human languages have long been noted by us and others 1 , 2 . Protein sequences can be described as a concatenation of letters from a chemically defined alphabet, the natural amino acids, and like human languages, these letters arrange to form secondary structural elements (“words”), which assemble to form domains (“sentences”) that undertake a function (“meaning”). 
One of the most attractive similarities is that protein sequences, like natural languages, are information-complete: they store structure and function entirely in their amino acid order with extreme efficiency. With the extraordinary advances in the NLP field in understanding and generating language with near-human capabilities, we hypothesized that these methods open a new door to approach protein-related problems from sequence alone, such as protein design. Although protein sequences and human languages are not without dissimilarities, their analogies have stimulated applying NLP methods to solve protein research problems for decades 2 . Supervised NLP methods, where the input sequences are trained jointly with their labels to produce predictive models, have been applied to various tasks, such as detecting structural similarity or predicting stability 3 , 4 . A remarkable collection of supervised language models applied to biomolecules is available in the BioSeq-BLM platform 5 , 6 . Nevertheless, since the inception of the Transformer 7 , unsupervised learning, where the training occurs on unlabeled data, has emerged as a versatile tool for language modeling. Several Transformer-based models, such as TCR-BERT 8 , epiBERTope 9 , ESM 10 , ProtTrans 11 , or ProteinBERT 12 , have shown to be very competitive with other methods 13 , 14 . Most of these models use BERT-like 15 architectures and denoising autoencoding training objectives, i.e., they are pre-trained by corrupting the input tokens in some way and trying to reconstruct the original sentence 2 . Although these models could be adjusted for generation 16 , their most direct application is sequence embedding. Another important branch of language models benefits from autoregressive training, i.e., models are trained to predict subsequent words given a context. 
These models, the most well-known of which are possibly the GPT-x series 17 , excel at generating long, coherent text—sometimes to the extent that much debate has been raised about their potential misuse 18 . Protein autoregressive language models, such as ProGen 19 , 20 , 21 , RITA 22 , and DARK 23 have also been studied, and show the potential of autoregressive Transformers for protein design. Motivated by these works and the ever-increasing capabilities of English-speaking models such as the GPT-x series, we wondered whether we could train a generative model to (i) effectively learn the protein language, (ii) generate fit, stable proteins, and (iii) understand how these sequences relate to natural ones, including whether they sample unseen regions of the protein space. Here, we introduce ProtGPT2, an autoregressive Transformer model with 738 million parameters capable of generating de novo protein sequences in a high-throughput fashion. ProtGPT2 has effectively learned the protein language upon being trained on about 50 million non-annotated sequences spanning the entire protein space. ProtGPT2 generates protein sequences with amino acid and disorder propensities on par with natural ones while being “evolutionarily” distant from the current protein space. Secondary structure prediction calculates 88% of the sequences to be globular, in line with natural proteins. Representation of the protein space using similarity networks reveals that ProtGPT2 sequences explore ‘dark’ areas of the protein space by expanding natural superfamilies. The generated sequences show predicted stabilities and dynamic properties akin to their natural counterparts. Since ProtGPT2 has already been pre-trained, it can be used to generate sequences on standard workstations in a matter of seconds or be further finetuned on sequence sets of a user’s choice to augment specific protein families. The model and datasets are available in the HuggingFace repository 24 at ( ). 
Since protein design has an enormous potential to solve problems in fields ranging from biomedical to environmental sciences 25 , 26 , we believe that ProtGPT2 is a timely advance towards efficient high-throughput protein engineering and design. Results Learning the protein language The major advances in the NLP field can be partially attributed to the scale-up of unsupervised language models. Unlike supervised learning, which requires the labeling of each data point, self-supervised (or often named unsupervised) methods do not require annotated data, thus promoting the use of ever-growing datasets such as Wikipedia or the C4 Corpus 27 . Given both the growth of protein sequence databases and the lack of annotation for a significant part of the protein space, protein sequences have become great candidates for unsupervised training 4 , 10 , 11 and now offer the opportunity to encode and generate protein sequences. To achieve this goal, we trained a Transformer 7 to produce a model that generates protein sequences. Language models are statistical models that assign probabilities to words and sentences. We are interested in a model that assigns high probability to sentences (W) that are semantically and syntactically correct or fit and functional, in the case of proteins. Because we are interested in a generative language model, we trained the model using an autoregressive strategy. In autoregressive models, the probability of a particular token or word (w i ) in a sequence depends solely on its context, namely the previous tokens in the sequence. The total probability of a sentence (W) is the combination of the individual probabilities for each word (w i ): $$p\,\left(W\right)=\mathop{\prod }\limits_{i}^{n}p\left({w}_{i}|{w}_{ < i}\right)$$ (1) We trained the Transformer by minimizing the negative log-likelihood over the entire dataset. 
More intuitively, the model must learn the relationships between a word w i —or amino acid—and all the previous ones in the sequence, and must do so for each sequence k in dataset (D): $${{{{{{{{\mathscr{L}}}}}}}}}_{{{{{{{{{\mathrm{CLM}}}}}}}}}}=\,-\mathop{\sum }\limits_{k=1}^{D}{log}\,{p}_\theta\left({w}_{i}^{k}|{w}_{ < i}^{k}\right)$$ (2) To learn the protein language, we used UniRef50 (UR50) (version 2021_04), a clustering of UniProt at 50% identity. We chose this dataset versus larger versions of UniParc (such as UR100) as it was previously shown to improve generalization and performance for the ESM Transformers 10 . Uniref50’s sequences populate the entire protein space, including the dark proteome, regions of the protein space whose structure is not accessible via experimental methods or homology modeling 28 , 29 . For evaluation, we randomly excluded 10% of the dataset sequences—these sequences are not seen by ProtGPT2 during the training process. The final training datasets contained 44.9 and 4.9 million sequences for training and evaluation, respectively. We tokenized our dataset using the BPE algorithm 30 . The final model is a decoder-only architecture of 36 layers and 738 million parameters. Analogous to the GLUE benchmark 31 —a collection of tools that computational linguists use to evaluate language models on different tasks such as question answering or translation—we also developed a series of extrinsic tests to assess the quality of ProtGPT2-generated sequences. The following sections elaborate on how ProtGPT2 generates de novo sequences with properties that resemble modern protein space. Statistical sampling of natural amino acid propensities Autoregressive language generation is based on the assumption that the probability distribution of a sequence can be decomposed into the product of conditional next-word distributions (Eq. 1 ). However, there is still considerable debate about the best decoding strategy to emit sequences from a model 32 . 
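Equations (1) and (2) can be made concrete in a few lines. In the sketch below, the uniform "model" is purely illustrative and stands in for the trained Transformer: it assigns every token the same conditional probability, so the log-probability of a sequence is just its length times that log-probability.

```python
import math

def sequence_log_prob(seq, cond_prob):
    """log p(W) = sum_i log p(w_i | w_<i) for an autoregressive model (Eq. 1).

    cond_prob(prefix, token) returns p(token | prefix); here we pass in a
    toy callable rather than a trained Transformer."""
    return sum(math.log(cond_prob(seq[:i], seq[i])) for i in range(len(seq)))

def nll(dataset, cond_prob):
    """Causal-LM training loss (Eq. 2): negative log-likelihood over sequences."""
    return -sum(sequence_log_prob(seq, cond_prob) for seq in dataset)

# Toy uniform model over a 4-letter alphabet: every token has probability 0.25,
# so a length-3 sequence has log-probability 3 * log(0.25).
uniform = lambda prefix, token: 0.25
lp = sequence_log_prob("ACD", uniform)
loss = nll(["ACD", "AC"], uniform)
```

Training amounts to adjusting the model parameters so that this loss decreases over the dataset; a real implementation works on batched token tensors rather than Python strings.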
It is not uncommon that well-trained generic language models that perform well in GLUE tasks generate incoherent gibberish or repetitive text depending on the sampling procedure 32 . We briefly summarize here the most used sampling strategies for language generation that we applied in this study. Greedy search strategy selects the word with the highest probability at each timestep. Although algorithmically simple, the generated sequences are deterministic and soon also become repetitive (Fig. 1a ). Beam search tries to alleviate this problem by retaining the most probable candidates, although the resulting texts still suffer from repetitiveness and are not as surprising as those from humans, which tend to alternate low and high probability tokens 32 (Fig. 1b ). Lastly, random sampling moves away from deterministic sampling by randomly picking a word out of the top-k most probable ones (Fig. 1c, d ). Fig. 1: Examples with different sampling parameters for GPT2-large after the context input: ‘ten best things to do in Lisbon’ (a–d) and ProtGPT2 without context (e–h). While greedy and beam search produce repetitive sentences (a, b) and protein sequences (e, f), sampling generates creative texts, which, however, can be degenerate (c) or not sample natural sequence propensities (g) for small values of k. Larger values of k produce quality text (d) and sequences whose propensities match natural ones. Repetitive and degenerative text are shown in blue and orange, respectively. In a recent study, Holtzman et al. 32 investigated several sampling strategies to find the best parameters for text generation. Inspired by this work, we systematically generated sequences following different sampling strategies and parameters (Fig. 1 ). To assess what sampling procedure generates the most natural-like sequences, we compared the amino acid propensities of the generated set to that found in natural protein sequences (Methods). 
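The decoding strategies above differ only in how the next token is drawn from the model's output distribution. The sketch below contrasts greedy selection with top-k sampling and includes a CTRL-style repetition penalty of the kind commonly applied during generation; the probabilities and logits are toy values, and the single-letter "tokens" are illustrative (the actual vocabulary consists of BPE tokens).

```python
import random

def greedy_next(probs):
    """Greedy search: deterministically pick the highest-probability token."""
    return max(probs, key=probs.get)

def top_k_next(probs, k, rng=random):
    """Top-k sampling: keep the k most probable tokens, renormalize, draw one."""
    top = sorted(probs, key=probs.get, reverse=True)[:k]
    return rng.choices(top, weights=[probs[t] for t in top], k=1)[0]

def apply_repetition_penalty(logits, generated, penalty=1.2):
    """Discourage repeats: scale the logit of every already-emitted token
    (divide if positive, multiply if negative), CTRL-style."""
    out = dict(logits)
    for tok in set(generated):
        if tok in out:
            out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

probs = {"A": 0.5, "L": 0.3, "G": 0.15, "W": 0.05}
choice = top_k_next(probs, k=2)  # randomly "A" or "L"
penalized = apply_repetition_penalty({"A": 2.0, "G": -1.0}, generated=["A", "G"])
```

Larger k keeps more of the distribution's tail in play, which is why small k can fail to reproduce natural amino acid propensities while very large k recovers them.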
As stated by Holtzman et al., we also observe greedy and beam search to produce repetitive, deterministic sequences, while random sampling dramatically improves the generated propensities (Fig. 1). Moreover, we also observe that high values of k are needed to generate sequences that resemble natural ones, i.e., our best results occur in the range of k > 800, and we specifically chose k = 950 in this work (Fig. 1h). As observed with other generative models 33 , 34 , our sampling improves when applying a repetition penalty of 1.2. Consequently, we used these sampling parameters for the rest of this work.

ProtGPT2 sequences encode globular proteins

In order to evaluate ProtGPT2's generated sequences in the context of sequence and structural properties, we created two datasets, one with sequences generated from ProtGPT2 using the previously described inference parameters, and the other with randomly chosen sequences from UR50. Each dataset consists of 10,000 sequences. Since ProtGPT2 was trained in an unsupervised manner, i.e., without including functional annotations, our analyses focus on validating the structural and biochemical properties of ProtGPT2 sequences. We first studied disordered and secondary structural content in the datasets. It has been previously shown that approximately 14% of the proteins found in bacteria and archaea are disordered 28 . To this end, we ran IUPred3 35 to analyze whether the ProtGPT2-generated sequences are more prone to be disordered than a set of natural sequences. Interestingly, our analysis shows a similar number of globular domains among the ProtGPT2-generated sequences (87.59%) and natural sequences (88.40%). Several methods have been reported that detect short intrinsically disordered regions 36 . Since our goal is to provide high-level comparisons of globularity and prevalent disorder across datasets, we further performed an analysis of the protein sequences at the amino acid level using IUPred3.
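The repetition penalty of 1.2 mentioned above is commonly applied to the logits of already-emitted tokens. The sketch below follows the CTRL-style convention used in the HuggingFace implementation; treating ProtGPT2's internals as matching this convention is an assumption.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Discount tokens that already appeared in the output: positive logits
    are divided by the penalty, negative ones multiplied (CTRL-style)."""
    out = list(logits)
    for i in set(generated_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

# Tokens 0 and 1 were already generated; token 2 is untouched.
logits = [2.0, -1.0, 0.5]
penalized = apply_repetition_penalty(logits, generated_ids=[0, 1])
```

A penalty of 1.0 leaves the logits unchanged; values above 1 increasingly discourage re-emitting tokens, which suppresses the degenerate repeats seen with greedy decoding.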
Remarkably, our results show a similar distribution of ordered/disordered regions for the two datasets, with 79.71 and 82.59% of ordered amino acids in the ProtGPT2 and natural datasets, respectively (Table 1 ). Table 1 Disorder and secondary structure predictions of the natural and ProtGPT2 dataset Full size table We next investigated whether the similarities in disorder are a consequence of equivalent secondary structure element content. To this end, we computed PSIPRED 37 predictions for the ProtGPT2 and natural sequence datasets. The natural sequences display alpha-helical, beta-sheet, and coil contents of 45.19, 41.87, and 12.93%, respectively. The ProtGPT2 dataset presented percentages of 48.64, 39.70, and 11.66%, respectively. These results indicate that ProtGPT2 generates sequences that resemble globular domains whose secondary structure contents are comparable to those found in the natural space. ProtGPT2 sequences are similar yet distant to natural ones Proteins have diversified immensely in the course of evolution via point mutations as well as duplication and recombination. Using sequence comparisons, it is, however, possible to detect similarities between two proteins even when their sequences have significantly diverged. We wondered how related ProtGPT2 sequences are to natural ones. To this end, we utilized HHblits, a sensitive remote homology detection tool that uses profile hidden Markov models to search query sequences against a database 38 . We searched for homologs of the 10,000 sequences in ProtGPT2’s dataset against the Uniclust30 database 39 . For comparison purposes, we also performed the same search with the natural dataset using the same settings. In addition, to analyze how completely random sequences would compare against ProtGPT2 ones, we also crafted a third dataset by randomly concatenating the 25 letters in the vocabulary. 
Because we want to provide a quantitative comparison of the datasets' relatedness to modern protein space, we produced identity vs sequence length plots (Fig. 2). In detail, for each of the alignments found in Uniclust30, we depict the one with the highest identity and length. As a reference point in this sequence identity-length space, we use the HSSP curve 40 , a boundary set to define the confidence of protein sequence relatedness. Proteins whose identity falls below this curve, an area known as the "twilight zone", do not necessarily have similar 3D structures nor are they likely homologous. Since the sequences in the ProtGPT2 and random datasets are not the consequence of protein evolution, we use the curve as a well-known threshold to compare the datasets.

Fig. 2: Pairwise sequence identities vs. alignment length for each of the datasets (a: natural (yellow), b: ProtGPT2 (green), and c: random (red)) as computed with HHblits against the Uniclust30 database. The lines depicted in red on each plot represent the HSSP curve, which we use as a reference to compare the three datasets 40 . Each plot shows a hexbin compartmentalization of the best-scoring identities and their distributions. While natural ( a ) and ProtGPT2 ( b ) sequences show similar percentages below the curve, 93% of the sequences in the random dataset ( c ) do not have significantly similar sequences in the Uniclust30 database. Natural and ProtGPT2 datasets show significant differences in the high-identity range ( n = 10,000 independent sequences/dataset). Full size image

When looking at the distribution of hits above and below the curve, we observe that HHblits finds many hits in the Uniclust30 database that are related to the dataset of natural sequences (Fig. 2a). Specifically, out of the 10,000 dataset sequences, 9621 (96.2%) showed identities above the HSSP curve.
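For classifying hits as above or below the HSSP curve, one widely used parameterization is the Sander–Schneider threshold (24.8% identity for alignments longer than 80 residues, rising steeply for shorter alignments); whether the paper uses this exact parameterization or a later variant is an assumption here.

```python
def hssp_threshold(aln_length):
    """Sander-Schneider HSSP threshold (% identity) for a given alignment
    length; pairs scoring above it are considered confidently related."""
    if aln_length > 80:
        return 24.8
    return 290.15 * aln_length ** -0.562

def above_hssp(identity_percent, aln_length):
    """True when a hit lies above the twilight-zone boundary."""
    return identity_percent > hssp_threshold(aln_length)

# Short alignments need much higher identity to count as related.
short_cutoff = hssp_threshold(20)   # roughly 54% identity
long_cutoff = hssp_threshold(200)   # flat 24.8% identity
```

This captures why the text discounts the ProtGPT2 high-identity hits with alignments under 15 residues: at such lengths the required identity is very high and short matches carry little evolutionary signal.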
Similarly, 9295 ProtGPT2-generated sequences (93%) also have counterparts in the Uniclust30 database that align above the HSSP curve (Fig. 2b). Conversely, 93% of the randomly generated sequences fall below this threshold (Fig. 2c). Despite these similar patterns for the natural and ProtGPT2 datasets, the two datasets show differences in their distribution of hits. With a one-standard-deviation range of 31.5–69.7%, the natural dataset has a higher mean identity than the ProtGPT2 set, whose range is 32.9–64.1% (Fig. 2a, b). The differences between the natural and ProtGPT2 sequence distributions are not statistically significant (p value > 0.05, Kolmogorov–Smirnov test). However, substantial differences between the natural and ProtGPT2 datasets occur in the high-identity range (>90%). Although 365 sequences in the ProtGPT2 dataset have high-identity sequences in Uniclust30, they correspond in all cases to alignments below 15 amino acids, whereas the natural dataset displays 760 sequences over 90% identity with an alignment length in the one-standard-deviation range of 14.8–77.3 amino acids. These results suggest that ProtGPT2 effectively generates sequences that are distantly related to natural ones but are not a consequence of memorization and repetition.

ProtGPT2 generates ordered structures

One of the most important features when designing de novo sequences is their ability to fold into stable, ordered structures. We have evaluated the potential fitness of ProtGPT2 sequences in comparison to natural and random sequences in the context of AlphaFold predictions, Rosetta Relax scores, and molecular dynamics (MD) simulations. AlphaFold 41 , 42 produces a per-residue estimate of its confidence on a scale from 0–100 (pLDDT). This score has been shown to correlate with order 43 : low scores (pLDDT < 50) tend to appear in disordered regions, while excellent scores (pLDDT > 90) appear in ordered ones 43 . Here we produced five structure predictions per sequence.
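The best-of-n and dataset-level pLDDT aggregation used in the next paragraph can be sketched as follows (toy scores, and two predictions per sequence instead of the paper's five):

```python
def plddt_summary(predictions_per_seq, cutoff=70.0):
    """Mean best-scoring pLDDT, mean over all predictions, and fraction of
    sequences whose best prediction exceeds the cutoff."""
    best = [max(scores) for scores in predictions_per_seq]
    flat = [s for scores in predictions_per_seq for s in scores]
    frac = sum(b > cutoff for b in best) / len(best)
    return sum(best) / len(best), sum(flat) / len(flat), frac

# Toy dataset: two sequences, two AlphaFold models each.
preds = [[72.0, 65.0], [55.0, 50.0]]
mean_best, mean_all, frac_over_70 = plddt_summary(preds)
```

Taking the best-scoring model per sequence always yields a mean at least as high as averaging across all models, which is why the two dataset-level numbers reported below (63.2 vs 59.6) differ.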
The mean pLDDT of the dataset is 63.2 when taking the best-scoring structure per sequence and 59.6 when averaging across all five predictions per sequence. Moreover, 37% of sequences show pLDDT values over 70, in agreement with other recent studies 23 . A representation of all data points is shown in Supplementary Fig. 2a. Since pLDDT scores are a proxy for structural order, we turned to the natural and random datasets to see how they compare to ProtGPT2 sequences. In agreement with previous works, 66% of the sequences in the natural dataset were predicted with pLDDT values greater than 70 43 , giving an average value of 75.3 for the whole dataset (Supplementary Fig. 2b). In contrast, the predictions in the random dataset revealed a mean pLDDT value of 44, with only 7.4% of sequences with pLDDT values over 70 (Supplementary Fig. 2c). To further validate the quality of the model, we performed Rosetta-RelaxBB runs on the three datasets 44 . Rosetta Relax performs a Monte Carlo optimization over the Rosetta energy function, which results in different backbone and rotamer conformations. Lower Rosetta energy conformers correlate with more relaxed structures 45 . The most recent Rosetta energy function (REF2015) strongly correlates with experimental variables such as heat capacity, density, and enthalpy 46 . This scoring function reflects the thermodynamic stability of one static protein conformation. Here we have performed Rosetta Relax experiments for the 30,000 sequences of the three datasets (Fig. 3a). A broad rule of thumb is that the total score (Rosetta Energy Units, REU) should lie between −1 and −3 per residue 47 . We observe such a distribution in the natural and ProtGPT2 datasets, with averages of −1.90 and −1.73 REU/residue, respectively. As expected, the dataset of random sequences showed a higher average value of 0.13 REU/residue. Fig. 3: Comparison of Rosetta and molecular dynamics calculations among the three datasets.
a Average Rosetta energy units per residue for the three datasets. AlphaFold prediction structures were used as input for the Rosetta RelaxBB protocol. 10,000 structures were run per dataset, one replica per system. b Root mean square deviation (RMSD) distribution for each MD dataset as computed by averaging RMSDs independently for each trajectory, represented as a boxplot. Twelve structures were simulated per dataset, three replicas per system. In both plots, the median is indicated as a black line; boxes depict the interquartile range (IQR), and whiskers represent 1.5 × IQR. Points outside this range are displayed as individual data points. Full size image

We further tested whether ProtGPT2 sequences show similar dynamic properties to natural sequences. Proteins are dynamic entities; without their inherent flexibility, they would not be capable of interacting with other biomolecules and performing their functions in the cell 48 . To evaluate whether ProtGPT2 sequences show flexibility patterns in the same range as natural proteins, we randomly selected 12 sequences per dataset and ran three replicas of molecular dynamics (MD) simulations of 100 ns each, totaling 108 trajectories and an aggregate time of 10.8 microseconds (Methods). To ensure that the dynamics observed during the simulations were not an artifact of different pLDDT values (and hence possibly different disorder predictions), we verified that the dataset mean pLDDT values were not statistically different (Supplementary Fig. 3). The mean RMSDs of the trajectories in the natural and ProtGPT2 datasets averaged 2.93 and 3.12 Å, respectively (Fig. 3b). As expected, the random sequences showed significant deviations during the trajectories, with an average of 9.41 Å. While ProtGPT2 sequences showed slightly higher values than the natural ones, the distributions are not significantly different (Mann–Whitney U-test, p value 0.39).
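The per-frame RMSD underlying these trajectory averages is, in essence, the following computation on superimposed coordinates (no fitting step is shown; MD analysis tools align each frame to the reference first):

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equal-length sets of (x, y, z)
    coordinates, assumed to be already superimposed."""
    assert len(coords_a) == len(coords_b) and coords_a
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Toy two-atom example: one atom stays put, the other moves 2 A in y.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
frame = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
deviation = rmsd(ref, frame)
```

Averaging this value over all frames of a trajectory, and then over trajectories, yields the per-dataset numbers (2.93, 3.12 and 9.41 Å) compared in Fig. 3b.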
The results indicate that ProtGPT2 sequences might have similar dynamic properties to proteins found in nature. The complete list of the trajectories' RMSDs is presented in Supplementary Figs. 4, 5.

ProtGPT2 transcends the boundaries of the current protein space

Several studies have tried to reduce the large dimensionality of protein sequences into a few discernible dimensions for their analysis. Most representation methods consist of (i) hierarchical classifications of protein structures such as the ECOD and CATH databases 49 , 50 , (ii) Cartesian representations 51 , and (iii) similarity networks 52 , 53 . We recently represented the structural space in a network that showed proteins as nodes, linked when they have a homologous and structurally-similar fragment in common 54 , and made the results available in the Fuzzle database 55 . The network represented 25,000 domains from the seven major SCOP classes and showed that the modern known protein space has both connected and "island-like" regions. It is implausible that evolution has explored all possible protein sequences 56 . Therefore, the challenge has been posed whether we can design proteins that populate unexplored (or dark) regions of the protein space and whether, by doing so, we can design novel topologies and functions 56 . Here, we integrated the ProtGPT2 sequences into our network representation of the protein space. To this end, we generated an HMM profile for each SCOPe2.07 and ProtGPT2 sequence, compared them in an all-against-all fashion using HHsearch, and represented the networks with Protlego 57 . To avoid specific sequences with several alignments being represented by the same node in the network, we duplicated entries with two non-overlapping alignments, as previously described 54 . The network contains 59,612 vertices and 427,378 edges, comprising 1847 components or 'island-like' clusters (Fig. 4).
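The construction of the sequence network, with nodes linked when an alignment passes the length and probability cutoffs (at least 20 aligned residues and 70% HHsearch probability, per the Methods), and the counting of 'island-like' components can be sketched as:

```python
from collections import defaultdict

def count_components(hits, min_len=20, min_prob=70.0):
    """Build an undirected network from (seq_a, seq_b, aln_len, probability)
    hits and count its connected components via depth-first search."""
    adj, nodes = defaultdict(set), set()
    for a, b, aln_len, prob in hits:
        nodes.update((a, b))
        if aln_len >= min_len and prob >= min_prob:
            adj[a].add(b)
            adj[b].add(a)
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adj[node])
    return components

# Toy hits: s4's alignment is too short to link it to the rest.
hits = [("s1", "s2", 35, 95.0), ("s2", "s3", 40, 80.0), ("s3", "s4", 10, 99.0)]
n_islands = count_components(hits)
```

In this toy example s1–s2–s3 form one island and s4 another; the same logic, applied to the full all-against-all HHsearch results, yields the 1847 components reported above.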
The major component accumulates more than half of the nodes (30,690), a number significantly higher than that observed in a network produced with the same settings but excluding ProtGPT2 sequences (Supplementary Fig. 6), strongly suggesting that ProtGPT2 generates sequences that bridge separate islands in protein space. We select six examples across different areas of the network, from topologically different SCOPe classes, to showcase ProtGPT2 sequences at the structural level (Fig. 4). In particular, we report one all-β structure ( 751 ), two α/β ( 4266 , 1068 ), one membrane protein ( 4307 ), one α + β ( 486 ), and one all-α ( 785 ) structure. These structures illustrate ProtGPT2's versatility at generating de novo structures. For each case, we searched for the most similar protein structure in the PDB database using FoldSeek 58 . ProtGPT2 generates well-folded all-β structures ( 751 , 4307 ), which, despite recent impressive advances 59 , have long remained very challenging 60 . ProtGPT2 also produces membrane proteins ( 4307 ), which pose a difficult target for protein design due to the challenges of specifying structure within the membrane and the laborious experimental characterizations 61 . Besides generating representatives of natural folds, ProtGPT2 also produces previously unreported topologies. For example, we report protein 4266 , whose topology does not match any of the currently reported structures in the PDB, with a low DALI Z-score of 5.4 and an RMSD of 3.0 Å to PDB 5B48 over 67 residues (identity 9%).

Fig. 4: An overview of the protein space and examples of proteins generated by ProtGPT2. Each node represents a sequence. Two nodes are linked when they have an alignment of at least 20 amino acids and 70% HHsearch probability. Colors depict the different SCOPe classes, and ProtGPT2 sequences are shown in white.
As examples, we select proteins from each of the five major SCOP classes: all-β structures (751), α/β (4266 and 1068), membrane protein (4307), α+β (486), and all-α (785). The selected structures are colored according to the class of their most similar hit. The structures were predicted with AlphaFold, and we indicate the code of the most similar structure in the PDB as found by FoldSeek 58 , except for protein 4266, where no similar structures were found. Full size image

Nevertheless, possibly the most remarkable property of ProtGPT2 sequences is their significant deviation from all previously designed de novo structures, which often feature idealized topologies with short loops and minimal structural elements. De novo proteins have the advantage of not carrying any evolutionary history and are thus amenable as a scaffold for virtually any function, but in practice, the lack of embellishments and longer loops hampers the design of crevices, surfaces, and cavities, which are necessary for the interaction with other molecules and the realization of function. ProtGPT2 sequences resemble the complexity of natural proteins, with multifaceted surfaces capable of accommodating interacting molecules and substrates, thus paving the way for functionalization. In Fig. 4, we show structures 486 and 1068 , two examples of such complex structures. In particular, 1068 shows a TIM-barrel fold, a topology which to date has met impressive success in de novo design 62 , 63 , 64 , but whose idealized structure has nevertheless proven challenging to extend via additional secondary elements and longer loops 65 , 66 .

Preserved functional hotspots

Visual inspection of the structural superimposition of the best hits found with FoldSeek revealed several instances where the sidechains of ligand-interacting residues are conserved. Two examples are shown in Fig. 5. The natural structure most similar to sequence 357 (Fig. 5a) corresponds to PDB code 1X0P (chain A), a blue-light sensor domain that binds FAD.
When superimposing the structures, we observe that 357 has retained the sidechain binding hotspots, with three residues identical (D169, Q150, and N131) and two different but capable of forming the same interactions: a lysine at position R165 and a histidine at position K127. Sequence 475 (Fig. 5b) is most similar to PDB code 5M1T (chain A), a phosphodiesterase that folds into a TIM-barrel and binds the bacterial second messenger cyclic di-3′,5′-guanosine monophosphate (PDB three-letter code C2E). Out of the five sidechain-interacting residues, the ProtGPT2 sequence preserves three residues (Q455, R473, and E469), and includes one substitution for another residue capable of hydrogen-bonding (aspartic acid for Q513). It is remarkable to note that ProtGPT2 has generated these sequences in a zero-shot fashion, i.e., without further finetuning on these two particular folds. These results have impactful consequences for protein engineering because ProtGPT2 appears to preserve binding positions in the generated sequences, despite the low identities (31.1 and 29.2% for 357 and 475 , respectively), and can be used to augment the repertoires of specific folds and families.

Fig. 5: Superimposition of the predicted structures for sequences 357 and 475 and the respective top-scoring proteins in FoldSeek. a Structural alignment of 357 with PDB 1X0P (chain A, blue). Shown are five residues in 1X0P that interact via their sidechains with the ligand FAD. Of these, three are identical in 357 , and another two correspond to substitutions to the same amino acid type (R165 to lysine and Q150 to histidine). b Structural alignment of 475 with PDB 5M1T (chain A) depicting five sidechain-interacting residues with ligand C2E. All amino acids in 475 are conserved except for residue R614, which was substituted by a glycine. The PDB structures are shown in color with their sidechains in a thinner representation.
Full size image Discussion The design of de novo proteins harnessing artificial intelligence methods has been meeting incredible success in the last 2 years 10 , 67 , 68 . Motivated by the unprecedented advances in NLP, we have implemented a generative language model, ProtGPT2, which has effectively learned the protein language. ProtGPT2 can generate sequences that are distantly related to natural ones and whose structures resemble the known structural space, with non-idealized complex structures. Since ProtGPT2 has been trained on the entire sequence space, the sequences produced by the model can sample any region, including the dark proteome and areas traditionally regarded as very challenging in the protein design field, such as all-β structures and membrane proteins. Visual superimposition of ProtGPT2 proteins with distantly related natural protein structures reveals that ProtGPT2 has also captured functional determinants, preserving ligand-binding interactions. As the design of artificial proteins can solve many biomedical and environmental problems, we see extraordinary potential in our protein language model. ProtGPT2 designs fit globular proteins in a matter of seconds without requiring further training on a standard workstation. ProtGPT2 can be conditioned towards a particular family, function, or fold by finetuning the model on a set of sequences of a user’s choice. In this context, ProtGPT2 will enable the screening for proteins with similarities to natural proteins in order to improve, fine-tune or alter a specific biochemical function of a natural protein. Large-scale screening of ProtGPT2-designed protein libraries might identify proteins with folds not captured in structural databases and functions that have no related counterpart in the natural space. 
ProtGPT2 constitutes a big step forward towards efficient protein design and generation, and lays the groundwork for future experimental studies exploring the structural and functional parameters of designed proteins, and their subsequent real-world applications. Future efforts include adding conditional tags, which will enable the controlled generation of specific functions.

Methods

Vocabulary encoding

We use a BPE 30 tokenizer to train the vocabulary of our dataset. BPE is a sub-word tokenization algorithm that finds the most frequently used word roots, ensuring better performance than one-hot tokenization and avoiding the out-of-vocabulary problem. Given the size of UniRef50, we used Swiss-Prot (2021_04), containing >0.5 M sequences, to train our tokenizer. Following the training strategy of GPT2 17 , our final vocabulary contained 50,256 tokens that correspond to the most widely reused oligomers in protein space, with an average size of four amino acids per token (Supplementary Fig. 1). Learned positional embeddings were used as in the original GPT2.

Dataset preparation

We took UniRef50 version 2021_04 as the dataset for training, containing 49,874,565 sequences. 10% of the sequences were randomly selected to produce the validation dataset. The final training and validation datasets contained 44.88 and 4.99 million sequences, respectively. We produced two datasets, one using a block size of 512 tokens, and another one with 1024 tokens. The results shown in this work correspond to a model trained with a block size of 512 tokens.

Model pre-training

We use a Transformer decoder model as the architecture for our training, which processes input sequences tokenized with a BPE strategy. During training, the model uses the original scaled dot-product self-attention as introduced by ref. 7 . The model consists of 36 layers with a model dimensionality of 1280.
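The core of BPE training (Vocabulary encoding above) is repeatedly merging the most frequent adjacent symbol pair into a new vocabulary entry; a toy single iteration over three short "protein words":

```python
from collections import Counter

def most_frequent_pair(corpus):
    """Count adjacent symbol pairs across all words; return the most common."""
    pairs = Counter()
    for word in corpus:
        pairs.update(zip(word, word[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(corpus, pair):
    """Replace every occurrence of the pair with one merged symbol."""
    merged = []
    for word in corpus:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged

# Toy corpus: "MK" appears in every word, so it is merged first.
corpus = [list("MKLV"), list("MKAV"), list("MKLL")]
pair = most_frequent_pair(corpus)
vocabulary_step = merge_pair(corpus, pair)
```

Iterating this merge step until a target vocabulary size is reached yields multi-residue tokens; in ProtGPT2's case, a 50,256-token vocabulary averaging about four amino acids per token.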
The architecture matches that of the previously released GPT2-large Transformer 17 , which was downloaded from HuggingFace 24 . Model weights were reinitialized prior to training. The model was optimized using Adam (β1 = 0.9, β2 = 0.999) with a learning rate of 1e-03. For our main model, we trained 65,536 tokens per batch (128 GPUs × 512 tokens). A batch size of 8 per device was used, totaling 1024. The model trained on 128 NVIDIA A100s in 4 days. Parallelism of the model was handled with DeepSpeed 69 .

Model inference

We systematically sampled sequences from our main model using different inference parameters. In particular, we varied the repetition penalty from 1.1 to 3.0 in steps of 0.1 units, top_k from 250 to 1000 sampling every 50 units, and top_p from 0.7 to 1.0 with a window of 0.05 units. 100 sequences were produced for each sampling parameter set, and the frequencies of their amino acids were compared to natural sequences. We observed which parameters produced the fewest differences in the set of the seven most common amino acids in natural sequences. We also explored the beam search algorithm for beams in the range 50 to 100 using a window of 1 unit, but it produced worse matches in all cases. To determine amino acid frequencies in natural sequences for comparison to ProtGPT2 samples, we randomly picked 1 million sequences from the Uniref50 dataset. The best-matching parameters were further sampled with finer windows and their frequencies compared with radar plots, as shown in Fig. 1 in the main text. The best-performing parameters for our dataset were top_k 950, a repetition penalty of 1.2, and default temperature and top_p values of 1.

Sequence dataset generation

Three sequence datasets were produced to compare their properties. The ProtGPT2 dataset was generated by sampling 1000 batches of 100 sequences each with the selected inference parameters and a window context of 250 tokens. This step produced 100,000 sequences.
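The parameter-selection criterion above, comparing amino acid frequencies of generated and natural sets, reduces to something like the following; the L1 distance used here is an illustrative choice (the paper compares the seven most common residues via radar plots):

```python
from collections import Counter

def aa_frequencies(sequences):
    """Relative amino acid frequencies over a set of sequences."""
    counts = Counter("".join(sequences))
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

def freq_distance(freqs_a, freqs_b):
    """Sum of absolute frequency differences over the union of residues."""
    keys = set(freqs_a) | set(freqs_b)
    return sum(abs(freqs_a.get(k, 0.0) - freqs_b.get(k, 0.0)) for k in keys)

# Toy sets: one matches the "natural" composition, one is heavily skewed.
natural = aa_frequencies(["MKLV", "MKAV"])
matching = aa_frequencies(["MKLV", "MKAV"])
skewed = aa_frequencies(["AAAA", "AAAA"])
d_match = freq_distance(natural, matching)
d_skew = freq_distance(natural, skewed)
```

Sampling parameters would then be ranked by this distance, with the smallest distance (here, the matching set) indicating the most natural-like generation.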
We filtered from this set those sequences whose length had been cut due to the window context, giving a total of 29,876 sequences. From this set, we randomly selected 10,000 sequences. Their average length is 149.2 ± 50.9 amino acids. The natural dataset was created by randomly sampling 100,000 sequences from UniRef50. 10,000 of these sequences were further chosen to ensure their average and standard deviation lengths matched those of the ProtGPT2 dataset sequences. The random dataset was created from the 25 letters that appear in UniRef50 (the 20 standard amino acids and other IUPAC codes such as "X", "B", "U", "O", and "Z") by randomly concatenating them into sequences, with lengths taken from a normal distribution between 5 and 267 amino acids.

Homology detection

Each sequence in the three 10k datasets was searched for similarity against the PDB70 and Uniclust30 databases using HHblits 70 . We used the Uniclust30 database version 2018_08 and the PDB70 version 2021_04. As HHblits produces a list of alignments, we selected all those over the HSSP curve as possible matches, and from these, selected the largest alignment. Thus, for each sequence in each dataset, the longest and highest-identity-scoring alignment was selected and represented in Fig. 2.

Disorder prediction

IUPred3 was run on the ProtGPT2 and natural datasets using all three possible options to detect shorter ("short") or longer ("long") unstructured regions, as well as structured regions ("glob") 35 . Ordered content was determined with the "short" option. The output of the "glob" analysis also reports if any structured, globular domain was found, as shown in Table 1. We ran secondary structure prediction using PSIPRED v4.0 for each sequence in the natural and ProtGPT2 datasets 37 . The alignments of the abovementioned HHblits searches were used as multiple sequence alignments.
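The random-dataset recipe described above (25-letter vocabulary, lengths from a normal distribution clamped to the 5–267 residue range) might be implemented as follows; the mean and standard deviation defaults mirror the ProtGPT2 dataset statistics:

```python
import random

AA_VOCAB = list("ACDEFGHIKLMNPQRSTVWYXBUOZ")  # 20 standard + 5 extra IUPAC codes

def random_sequences(n, mean_len=149.2, sd_len=50.9, lo=5, hi=267, seed=0):
    """Generate n random sequences; lengths are drawn from a normal
    distribution and clamped to [lo, hi]."""
    rng = random.Random(seed)
    seqs = []
    for _ in range(n):
        length = max(lo, min(hi, int(round(rng.gauss(mean_len, sd_len)))))
        seqs.append("".join(rng.choice(AA_VOCAB) for _ in range(length)))
    return seqs

dataset = random_sequences(100)
```

Whether the paper truncates or redraws out-of-range lengths is not specified; clamping is the assumption made here.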
We computed the percentages for each secondary structure element by dividing the number of amino acids with a given prediction by the total number of amino acids with a confidence value of 5 or more.

AlphaFold2 structure prediction

We predicted five structures for each sequence in the ProtGPT2 dataset using AlphaFold ColabFold batch v1.2 41 .

Network construction

Sequences in the ProtGPT2 dataset and the SCOP 2.07 dataset filtered at 95% identity were joined. For each sequence, we produced a multiple sequence alignment (MSA) using HHblits against the Uniclust30 database version 2018_08. Hidden Markov model profiles were produced for each MSA using HHblits 70 , and an all-against-all search for each profile was performed using HHsearch 38 . The network was constructed by representing every sequence as a node and linking two nodes whenever they have an alignment of at least 20 amino acids with 70% HHsearch probability. Extensive details on the all-against-all comparison and network construction, and tools to generate the networks, can be found in our previous works Fuzzle 54 , 55 and Protlego 57 . Detection of similar topologies was determined with FoldSeek 58 .

Molecular dynamics simulations

Simulation systems were built and run with the software HTMD 71 . In all cases, systems comprised solvated all-atom cubic boxes. Simulation boxes consisted of a protein centered at the origin of coordinates; explicit solvent molecules and neutralizing NaCl ions were added to each box. The Amber ff19SB forcefield was used 72 . Three replicas were constructed per sequence. All systems were minimized, equilibrated, and run with ACEMD 73 using default parameters: each system was minimized and relaxed under NPT conditions for 1 ns at 1 atm and 300 K using a time-step of 4 fs, rigid bonds, a cutoff of 9 Å, and PME for long-range electrostatics. Heavy protein and ligand atoms were constrained by a 10 kcal/mol/Å2 spring constant.
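One reading of the confidence-filtered secondary-structure percentages described in the Methods above, using toy per-residue PSIPRED-style predictions (state, confidence 0–9); whether low-confidence residues are excluded from the numerator as well as the denominator is an assumption:

```python
def ss_percentages(predictions, min_conf=5):
    """Percentage of helix (H), strand (E) and coil (C) among residues
    predicted with confidence >= min_conf; low-confidence residues are ignored."""
    counts = {"H": 0, "E": 0, "C": 0}
    for state, conf in predictions:
        if conf >= min_conf:
            counts[state] += 1
    total = sum(counts.values())
    return {state: 100.0 * n / total for state, n in counts.items()}

# Toy per-residue predictions; the low-confidence coil (conf 3) is dropped.
residues = [("H", 9), ("H", 8), ("E", 7), ("C", 3), ("C", 6)]
percentages = ss_percentages(residues)
```

Applied per dataset, this is the kind of tally behind the helix/sheet/coil percentages reported for the natural and ProtGPT2 sequences in the Results.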
Production simulations were run in the NVT ensemble using a Langevin thermostat with a damping of 0.1 ps−1 and a hydrogen mass repartitioning scheme to achieve timesteps of 4 fs 74 .

Rosetta calculations

Rosetta Relax runs were produced with the Rosetta Software Suite v3.12 44 , using as input structure the best-scoring prediction from AlphaFold.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The model weights are publicly available in the HuggingFace repository: and Zenodo: [ ]. The dataset for training is available at: . The three sequence datasets in this work are available at: . The AlphaFold predictions for the three datasets are available at . The UniRef50 original database version 21_04 is available at . The Uniclust30 database version 2018_08 is available at .

Code availability

The model was trained with the HuggingFace transformers Trainer version 4.14.1. The code and documentation are available here: .
Artificial intelligence (AI) has created new possibilities for designing tailor-made proteins to solve everything from medical to ecological problems. A research team at the University of Bayreuth led by Prof. Dr. Birte Höcker has now successfully applied a computer-based natural language processing model to protein research. Completely independently, the ProtGPT2 model designs new proteins that are capable of stable folding and could take over defined functions in larger molecular contexts. The model and its potential are detailed scientifically in Nature Communications.

Natural languages and proteins are actually similar in structure. Amino acids arrange themselves in a multitude of combinations to form structures that have specific functions in the living organism, similar to the way words form sentences in different combinations that express certain facts. In recent years, numerous approaches have therefore been developed to use principles and processes that control the computer-assisted processing of natural language in protein research.

"Natural language processing has made extraordinary progress thanks to new AI technologies. Today, models of language processing enable machines not only to understand meaningful sentences but also to generate them themselves. Such a model was the starting point of our research. With detailed information on about 50 million sequences of natural proteins, my colleague Noelia Ferruz trained the model and enabled it to generate protein sequences on its own. It now understands the language of proteins and can use it creatively. We have found that these creative designs follow the basic principles of natural proteins," says Prof. Dr. Birte Höcker, Head of the Protein Design Group at the University of Bayreuth.

The language processing model transferred to protein evolution is called ProtGPT2. It can now be used to design proteins that adopt stable structures through folding and are permanently functional in this state.
In addition, the Bayreuth biochemists have found out, through complex investigations, that the model can even create proteins that do not occur in nature and have possibly never existed in the history of evolution. These findings shed light on the immeasurable world of possible proteins and open a door to designing them in novel and unexplored ways. There is a further advantage: Most proteins that have been designed de novo so far have idealized structures. Before such structures can have a potential application, they usually must pass through an elaborate functionalization process—for example by inserting extensions and cavities—so that they can interact with their environment and take on precisely defined functions in larger system contexts. ProtGPT2, on the other hand, generates proteins that have such differentiated structures innately, and are thus already operational in their respective environments. "Our new model is another impressive demonstration of the systemic affinity of protein design and natural language processing. Artificial intelligence opens up highly interesting and promising possibilities to use methods of language processing for the production of customized proteins. At the University of Bayreuth, we hope to contribute in this way to developing innovative solutions for biomedical, pharmaceutical, and ecological problems," says Prof. Dr. Birte Höcker.
10.1038/s41467-022-32007-7
Physics
How the mechanism of photoionization can provide insights into complex molecular potentials
H. Ahmadi et al, Attosecond photoionisation time delays reveal the anisotropy of the molecular potential in the recoil frame, Nature Communications (2022). DOI: 10.1038/s41467-022-28783-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-022-28783-x
https://phys.org/news/2022-03-mechanism-photoionization-insights-complex-molecular.html
Abstract Photoionisation time delays carry structural and dynamical information on the target system, including electronic correlation effects in atoms and molecules and electron transport properties at interfaces. In molecules, the electrostatic potential experienced by an outgoing electron depends on the emission direction, which should thus lead to anisotropic time delays. To isolate this effect, information on the orientation of the molecule at the photoionisation instant is required. Here we show how attosecond time delays reflect the anisotropic molecular potential landscape in CF 4 molecules. The variations in the measured delays can be directly related to the different heights of the potential barriers that the outgoing electrons see in the vicinity of shape resonances. Our results indicate the possibility to investigate the spatial characteristics of the molecular potential by mapping attosecond photoionisation time delays in the recoil-frame. Introduction Molecular systems are characterised by complex potential landscapes determined by their chemical composition and by the spatial arrangement of their constituents. In general, the electronic potential presents a non-spherical shape, which plays a key role in the stereo-dynamics of atom-molecule collisions 1 and molecule–molecule interactions 2 . As explained in textbooks, the effect and the spatial gradient of a potential can be unveiled by monitoring the motion of a probe charge immersed in that potential 3 . In atoms and molecules, this charge can be one of the electrons contained in the system, which must absorb enough energy from an external source to overcome the ionisation potential and to acquire the necessary kinetic energy to explore the potential landscape, while staying long enough in the molecular surroundings to sample its relevant features. This is ideally possible by using ultraviolet radiation, i.e. photon energies of a few tens of eV. 
In a classical picture, an electron with 10 eV energy takes about 53 as to travel through the typical molecular extension of 1 Å. The extremely short timescale of this motion calls for the application of attosecond pulses, which can efficiently generate photoelectron wave packets and provide the necessary time resolution 4 , 5 , 6 , 7 . The dynamics of photoionising wave packets is usually investigated by means of pump-probe experiments, in which an isolated attosecond pulse or a train of attosecond pulses in the extreme ultraviolet (XUV) range sets the photoelectron wave packet free and a synchronised infra-red (IR) pulse probes the instant at which the electron enters the continuum 8 . Using this approach, the role of electronic correlation effects in the photoionisation of atoms has been investigated in real-time 9 , 10 , 11 . Attosecond time delays have also been reported in photoionisation in molecular systems, showing the relevance of nuclear motion in hydrogen 12 and the role played by shape resonances in N 2 O 13 and nitrogen 14 , 15 . Moreover, the role of the localisation of the initial wave function 16 and of a functional molecular group 17 has also been demonstrated. In atomic systems, the photoionisation time delays are usually decomposed into a term specific to the atomic potential (usually indicated as Eisenbud–Wigner–Smith delay 18 ) and a measurement-induced contribution due to the action of the IR probe pulse on the photoelectron wave packet moving in the long-range Coulomb potential 19 , 20 . While in atoms the influence of the latter term can usually be quantified through simple formulas independent of the specific target 20 and its angular dependence has been characterised 21 , in molecules the effect of the IR field on the measured time delays has not yet been characterised. In general, in the case of molecular systems, the contributions of the two terms cannot be disentangled 22 , which requires a more involved analysis. 
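The 53 as figure quoted above follows directly from classical kinematics, v = sqrt(2E/m), applied to the stated 10 eV kinetic energy and a 1 Å path:

```python
import math

# Back-of-envelope check of the classical estimate quoted above:
# a 10 eV electron crossing a 1 A molecule takes ~53 attoseconds.
E_J = 10 * 1.602176634e-19      # kinetic energy: 10 eV in joules
m_e = 9.1093837015e-31          # electron mass, kg
v = math.sqrt(2 * E_J / m_e)    # classical speed, ~1.9e6 m/s
t_as = (1e-10 / v) / 1e-18      # traversal time for 1 angstrom, in as
print(round(t_as, 1))           # ≈ 53.3
```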
A fundamental prerequisite for the characterisation of the combined effect of the anisotropic molecular landscape and of the IR field is to have access to the orientation of the molecule at the photoionisation instant. This can be done by measuring the emission direction(s) of ionic fragment(s) after the interaction with the XUV radiation, which defines the recoil frame. Symmetric molecules consisting of only a few atoms are ideal to test these effects in this frame. On the one hand, small molecules present a limited number of photoionisation and photofragmentation pathways, making feasible the identification of the electronic level of the outgoing photoelectron and, under suitable conditions, the determination of the molecular orientation during the interaction with the ionising radiation. On the other hand, the symmetry of the molecule gives the opportunity to identify specific privileged directions in the recoil frame and to characterise the effect of the molecular potential along them. In this work we investigate the photoionisation dynamics induced by a train of attosecond XUV pulses on CF 4 molecules by means of photoelectron-photoion coincidence spectroscopy 23 . The advantage of this approach is the possibility to derive information on the molecular orientation at the instant of photoionisation by measuring in coincidence the momenta of the emitted electron and the fragment ions resulting from the subsequent dissociation of the molecular cation 24 . In this way, we have been able to unambiguously identify individual ionisation channels and obtain time-resolved recoil-frame photoelectron angular distributions (RFPADs) from which the variations of photoionisation delays with the electron emission direction have been extracted. 
The measured delays are in very good agreement with those obtained from calculated time-resolved RFPADs where all the transitions induced by the attosecond pulse train and the IR probe, in particular those induced in the continuum by the IR pulse, are taken into account in a time-dependent formalism. The agreement confirms the validity of our experimental approach and opens the route to orientation-specific exploration and understanding of molecular photoionisation delays. Results Recoil frame and XUV spectroscopy of CF 4 We focus on photoelectrons emitted from the Highest-Occupied Molecular Orbital of CF 4 , which is triply degenerate and belongs to the irreducible representation T 1 of the point group T d (see Fig. 1 a). The xyz system represents the molecular frame, where one fluorine atom (1) is positioned along the negative z-axis and a second fluorine atom (2) is contained in the xz plane (the carbon atom occupies the centre). In the molecular frame, the common direction of the polarisation vectors of the collinear XUV and IR fields (indicated as a magenta arrow in Fig. 1 a) is identified by the angles β (polar angle) and α (azimuthal angle). Fig. 1: Molecular and recoil frames, and averaged RABBITT traces for the parallel and perpendicular cases. a CF 4 molecule and definition of the different quantities in the molecular frame. The direction of emission of the CF \({}_{3}^{+}\) ion defines the z-axis (blue arrow), which identifies the recoil frame in this experiment. The x -axis is contained in the plane identified by two fluorine atoms (1 and 2 in the figure). The orientation of the polarisation vector of the electric field (magenta arrow) is defined in the molecular frame by the angles α and β . The emission direction of the photoelectron in the molecular frame (cyan arrow) is defined by the angles θ and φ . RABBITT traces obtained for the parallel (0 ∘ ≤ β ≤ 45 ∘ and 135 ∘ ≤ β ≤ 180 ∘ ) b and perpendicular (60 ∘ ≤ β ≤ 120 ∘ ) c configurations. 
The angles α and φ cannot be determined in our measurements. Photoelectrons and photoions emitted after the interaction with an attosecond pulse train generated in argon and krypton and a synchronised IR field were measured using a Reaction Microscope 25 . This approach, usually called RABBITT (Reconstruction of attosecond beating by interference of two-photon transitions) 8 , allows one to combine temporal and spectral resolution, which turns out to be advantageous for identifying the different fragmentation channels or yields involved in the experiment. The setup used in the measurements is described in ref. 26 . For XUV photon energies in the range 20–46 eV photoionisation of neutral CF 4 into five different cationic states (X 2 T 1 , A 2 T 2 , B 2 E, C 2 T 2 , D 2 A 1 ) is energetically possible 27 . While the first three predominantly lead to the formation of CF \({}_{3}^{+}\) ions, the last two lead to the formation of CF \({}_{2}^{+}\) ions (see Supplementary Figs. 1 and 2 and Supplementary Table 1 ). The parent molecular ion CF \({}_{4}^{+}\) was not observed, in agreement with previous spectroscopic measurements 28 . Additional correlation between the kinetic energy release (KER) of the ions and the photoelectrons gives the possibility to isolate the contribution of the photoelectrons associated with the X 2 T 1 state from those associated with the A 2 T 2 and B 2 E states (see Supplementary Figs. 3 and 4 ). Photoionisation from the ground state (leading to cations in the X 2 T 1 state) results in fast dissociation and emission of CF \({}_{3}^{+}\) fragments with 100% probability 29 , giving access to the relative orientation of the polarisation direction of the external fields with respect to the polar angle β (the azimuthal angle α cannot be determined in our measurements) at the instant of photoionisation using the recoil approximation 30 . 
The validity of the approximation is further confirmed by the good agreement between the measured photoelectron angular distributions (PADs) for this state and those calculated by assuming a fixed nuclei configuration (see below). Because the position of the fluorine atoms in the tripod is not determined in the experiment, the measured PADs are plotted as a function of the θ angle, i.e. with respect to the z-axis (see Fig. 1 ), which defines the recoil frame and are therefore integrated over the φ angle. RABBITT measurements The photoelectron spectra measured as a function of the delay between the XUV and IR pulses for specific molecular orientations with respect to the light polarisation vector of the collinear fields are presented in Fig. 1 b, c. The spectra are thus retrieved by capturing only photoelectrons in coincidence with the momentum of the dissociating CF \({}_{3}^{+}\) ion being parallel (0 ∘ ≤ β ≤ 45 ∘ and 135 ∘ ≤ β ≤ 180 ∘ ) or perpendicular (60 ∘ ≤ β ≤ 120 ∘ ) with respect to the light polarisation vector. The signal is therefore obtained by integrating over all possible photoelectron emission directions, but for the specific molecular orientations with respect to the light as indicated in the right side insets of Fig. 1 . The excellent quality of the traces offers the opportunity to investigate the oscillation of the sidebands for different photoemission directions θ in the recoil frame. The attosecond time delays determined for different recoil frame emission angles θ are reported in Fig. 2 for the parallel (panels a–c) and perpendicular (panels d–f) cases, respectively. The photoemission time delays averaged over the angle θ for the parallel and perpendicular cases were estimated from Fig. 1 b, c and subtracted from the angular-resolved delays to remove the effect of the attosecond chirp (see also Supplementary Fig. 5 ). Fig. 2: Experimental and simulated attosecond time delays for the parallel and perpendicular cases. 
Experimental (black points and black line) and theoretical (red lines) attosecond time delays as a function of the photoelectron emission angle θ along the recoil axis for the parallel ( a – c ) and perpendicular ( d – f ) configurations for the sidebands 16 ( a , d ), 18 ( b , e ), and 20 ( c , f ). The shaded areas in a – c indicate the three angle intervals θ A , θ B , and θ C . The error bars were obtained by weighting the photoionisation delays with the root-mean-square phase noise over an integrated electron kinetic-energy range of 1.2 eV centred around the maximum of each sideband 13 . The complete data set for all sidebands is presented in Supplementary Figs. 9 and 10 . Theoretical modelling The theoretical results obtained by solving the TDSE within the static-exchange DFT method described in refs. 15 , 31 and with laser parameters that reproduce the experimental conditions (see details in the SI), are also shown in Fig. 2 . We checked that the simulations reproduce the photoelectron spectrum and the main features of the RFPADs generated by the XUV pulses for the parallel and perpendicular cases (see Supplementary Figs. 6 – 8 ). Using the same fitting procedure as in the experiment, we have retrieved the time delays from the calculated RABBITT spectra for the parallel and perpendicular molecular orientations. Discussion Overall the experimental and theoretical photoemission delays are in good agreement. The experimental attosecond time delays for the parallel case exhibit a minimum around θ = 90 ∘ , while they are relatively independent of θ in the perpendicular case (see also Supplementary Figs. 9 and 10 ). The minima present negative values (on the order of a few hundred attoseconds), indicating that the maxima of the sideband oscillations observed at these angles occur at earlier time delays. 
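In a RABBITT measurement, each sideband yield oscillates as S(τ) = A + B cos(2ωτ − φ) with the XUV–IR delay τ, and the photoemission delay is the fitted phase divided by 2ω. The sketch below retrieves the phase from synthetic data by complex lock-in at 2ω; the amplitudes and the 200 as delay are assumed values for illustration, not results from the paper:

```python
import math, cmath

# Sketch of RABBITT phase retrieval on synthetic data. The sideband
# yield oscillates as S(tau) = A + B*cos(2*w*tau - phi); the
# photoemission delay is phi / (2*w).
T_IR = 2.67e-15                     # 800 nm optical period, s
w = 2 * math.pi / T_IR              # IR angular frequency, rad/s
delay_true = 200e-18                # assumed delay: 200 as
phi = 2 * w * delay_true

dt = (T_IR / 2) / 20                # 20 samples per sideband period
taus = [k * dt for k in range(60)]  # exactly 3 oscillation periods
S = [1.0 + 0.5 * math.cos(2 * w * t - phi) for t in taus]

# Complex lock-in at 2w: z ~ (N*B/2) * exp(-i*phi), so the phase of z
# gives -phi directly (the A and exp(+i*phi) terms average to zero
# over an integer number of periods).
z = sum(s * cmath.exp(-2j * w * t) for s, t in zip(S, taus))
delay_fit = -cmath.phase(z) / (2 * w)
print(round(delay_fit / 1e-18, 1))  # → 200.0 (attoseconds)
```

The experiment applies the same idea to measured sideband yields, with the additional step of subtracting the angle-averaged delay to remove the attosecond chirp common to all emission directions.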
The theoretical data reproduce these trends, with the minimum at 90 ∘ for the parallel configuration and a smooth evolution as a function of the photoelectron emission angle for the perpendicular configuration. For the parallel case (left panels in Fig. 2 ), the minimum observed around θ = 90 ∘ can be attributed to the interaction with the IR field (see Supplementary Figs. 11 – 13 ). In Fig. 2 a–c, three different regions are highlighted corresponding to the intervals θ A = [0 ∘ − 29 ∘ ], θ B = [60 ∘ − 75. 5 ∘ ], and θ C = [151 ∘ − 180 ∘ ] centred around the directions A ( θ = 15 ∘ ), B ( θ = 68 ∘ ) and C ( θ = 165 ∘ ), respectively. Photoelectrons leaving the molecule along these three directions experience significant differences in the molecular landscape, as shown in Fig. 3 a, which reports the molecular potential resulting from the simulations and the three escape directions A, B, and C (indicated with dot-dashed lines). For a better visualisation, two-dimensional cuts of the potential are plotted in each plane. The nearly opposite directions A and C correspond to photoelectrons emitted (almost) parallel or antiparallel to the ion emission direction. For these directions, the molecular potentials exhibit barriers with quite different heights, as shown in Fig. 3 b. In contrast, the direction B corresponds to photoelectrons escaping the molecule along a direction characterised by a barrier whose height is close to that along the C direction. The photoionisation cross-sections along the three directions are also reported in Fig. 3 c, indicating the existence of shape resonances in the photon-energy range ≈ 28–33 eV for the three directions. Fig. 3: Molecular potential and photoionisation cross-sections. a Two-dimensional cuts of the molecular potential along the xy , xz and yz planes. b Molecular potential along the three directions indicated in a . c Photoionisation cross-sections along the three directions indicated in a . 
The anisotropic molecular potential influences the photoionisation dynamics, as demonstrated in Fig. 4 a, b, which present the difference of the time delays measured along the directions A and C (Δ τ A − C = τ A − τ C ) and B and C (Δ τ B − C = τ B − τ C ), respectively. The experimental values (red squares) were determined by integrating the photoelectrons emitted in the parallel configuration over the emission angles θ in the intervals θ A = [0−29 ∘ ], θ B = [60−75.5 ∘ ], and θ C = [151−180 ∘ ] along the directions A, B, and C, respectively. The theoretical values (black circles) were obtained from the calculated RABBITT spectra in the same way. The delay difference Δ τ A − C presents a minimum in the experimental as well as in the theoretical data around 34 eV photon energy, which approximately matches the maximum of the shape resonance for the direction C observed in Fig. 3 c. For photon energies beyond the shape-resonance regions of both directions, the absolute value of the difference Δ τ A − C becomes smaller. Fig. 4: Influence of the anisotropic molecular potential on attosecond time delays. Difference of attosecond time delays between the emission directions A ( θ = 15 ∘ ) and C ( θ = 165 ∘ ) ( a ) and B ( θ = 68 ∘ ) and C ( b ) for the experiment (red squares) and extracted from the simulations (black circles). The RABBITT spectra were previously averaged over the intervals of emission angles θ A , θ B , and θ C for the emission directions A, B, and C, respectively. The blue triangles correspond to the differences of the stereo-Wigner time delays between the directions A and C ( a ) and B and C ( b ) and averaged over the corresponding emission angle intervals, respectively. The error bars were derived by error propagation from those of the single directions. The experimental points are in good agreement with the theoretical curve (except for the point corresponding to sideband 20). 
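The error propagation mentioned in the Fig. 4 caption is the standard quadrature rule for a difference of two independent quantities, σ(Δτ) = sqrt(σ_A² + σ_C²). A minimal sketch with illustrative numbers (not values from the paper):

```python
import math

# Quadrature error propagation for a delay difference
# delta_tau = tau_A - tau_C, as used for the Fig. 4 error bars.
# The delays and uncertainties below are invented for illustration.
tau_A, sig_A = -120e-18, 25e-18   # delay and 1-sigma error, direction A
tau_C, sig_C = -60e-18, 30e-18    # delay and 1-sigma error, direction C

d = tau_A - tau_C                      # delay difference
sig = math.sqrt(sig_A**2 + sig_C**2)   # propagated uncertainty
print(round(d / 1e-18), round(sig / 1e-18, 1))  # -60 39.1
```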
The stereo Wigner time delay 22 , 32 , 33 obtained from the one-photon (XUV) dipole matrix elements and integrated over the corresponding angle intervals is also reported (blue triangles) and indicates a minimum in the same energy region. The deviations between the difference of the stereo Wigner time delays and the corresponding time delays obtained for the two-colour simulations support the conclusion that the delay in photoionisation of molecules cannot be decomposed, in general, into the sum of a contribution due to the Wigner delay and one associated with the continuum-continuum delay 22 . The latter, indeed, should cancel out when subtracting the delay estimated along the directions A and C. In any case, the overall evolution of the delay is qualitatively similar, suggesting the relevance of the differences in the position of the shape resonances along the directions A and C in the photoionisation process. In contrast to the previous case, the experimental data for the delay difference Δ τ B − C (red squares) do not show any significant variation of the delay as a function of the photon energy, as expected. The theoretical values (black circles) are in good agreement with the experimental data, indicating only a moderate linear increase of the delay. The absence of any remarkable variation of the delay as a function of the sidebands for these two directions can be attributed to the similar heights of the trapping potentials, as shown in Fig. 3 b. In this condition, the evolution of the stereo Wigner time delays (blue triangles) is close to the experimental one. We have demonstrated that the anisotropic molecular landscape affects the stereo-photoemission time delays. In particular, different heights of the trapping potentials introduce different delays in the emission of the photoelectron wave packet into the continuum. 
Our results indicate the importance of coincidence spectroscopy to disentangle the information on the photoemission process from different electronic states and for different molecular orientations. Extension of this approach would be beneficial also for larger and more complex molecular systems. Methods Experimental setup The experiment was performed using a 10 kHz Ti:sapphire laser system providing 30-fs laser pulses centred at 800 nm with a pulse energy of 1 mJ. The input pulse first passes through a 1-mm-thick glass plate with a 3-mm-diameter hole at the centre, splitting the beam into two parts (annular and central part), which are spatially separated and temporally delayed. Then, the input beam is focused by a 25-cm-focal-length parabolic mirror into a high-harmonic gas cell filled with argon (Ar) or krypton (Kr) to generate XUV photons (20–46 eV) consisting of odd multiples of the input pulse frequency. In our work, the XUV photons were generated by the annular beam. By using an iris before the gas cell and blocking the annular beam, we ensure that the central beam, after being focused into the gas cell, does not contribute to the XUV emission. After the gas cell, another 1-mm-thick glass plate with a 1-mm-diameter hole, centred with respect to the first plate, is placed to synchronise the XUV and IR probe pulses. The IR probe pulse is the part of the input beam that passes through the hole of the first plate and through the glass of the second one. The delay between the XUV and IR probe pulses is varied precisely by tilting the second, drilled plate 26 . For the XUV-only measurements, an aluminium filter is introduced into the beam path after the high-order harmonic generation cell in order to block the co-propagating IR pulse. Then, the XUV and probe pulses are refocused by a toroidal mirror into the interaction region inside a 3D momentum imaging spectrometer, leading to dissociative photoionisation of CF 4 at an ion count rate of about 0.1 per XUV pulse. 
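The quoted 20–46 eV XUV window corresponds to odd harmonics of the 800 nm fundamental, whose photon energy is E = hc/λ ≈ 1.55 eV. A quick check of which odd orders fall in that window:

```python
# Which odd harmonics of an 800 nm driver fall in the 20-46 eV
# window quoted above? Photon energy of the fundamental: E = hc/lambda.
h_c_eV_nm = 1239.841984          # hc in eV*nm (CODATA value)
E_IR = h_c_eV_nm / 800           # fundamental photon energy, ~1.55 eV

# Odd harmonic orders n with photon energy n*E_IR inside the window.
odd = [(n, round(n * E_IR, 1)) for n in range(1, 41, 2)
       if 20 <= n * E_IR <= 46]
print(odd)  # harmonics 13 through 29
```

This is consistent with the sidebands 16, 18 and 20 discussed in the Results, which lie between consecutive odd harmonics.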
The resulting ions and electrons are guided using a homogeneous electric field ( ∣ E ∣ ~ 313 V/m) and a weak magnetic field ( ∣ B ∣ ~ 9.4 G) towards time- and position-sensitive detectors located at the opposite ends of the spectrometer. Theoretical methods In brief, theoretical RABBITT spectra have been obtained by solving the time-dependent Schrödinger equation (TDSE) within the static-exchange approximation 15 , 31 . In this method, the time-dependent wave function is expanded in a basis set of N -electron stationary wave functions, built from antisymmetrized products of an ( N − 1)-electron wave function for the bound electrons and a one-electron continuum wave function for the electron ejected into the continuum. The ( N − 1)-electron wave functions are built from B-spline representations of the lowest Kohn-Sham (KS) orbitals resulting from the diagonalization of the KS Hamiltonian of density functional theory, excluding the orbital from which the electron is ejected to the continuum, and the one-electron continuum wave functions are obtained from an inverse iterative procedure in the former KS orbital basis for each photoelectron energy. All dipole matrix elements describing the transitions between all bound and continuum states as well as between continuum states have been included in the solution of the TDSE, which is essential to describe the bound-continuum and continuum–continuum transitions leading to the RABBITT spectra. We have restricted all simulations to the equilibrium geometry (fixed-nuclei approximation). More details can be found in the Supplementary Information . Data availability The data that support the findings of this study are available on reasonable request from the corresponding author G.S. (giuseppe.sansone@physik.uni-freiburg.de). Requests for data will be processed within 1 week. The data are not publicly available due to further analysis on the findings conducted by the authors of this manuscript. 
Code availability Analysis codes used in this study are available on reasonable request from the corresponding author G.S. (giuseppe.sansone@physik.uni-freiburg.de). Requests for codes will be processed within one week.
How can researchers use the mechanism of photoionization to gain insight into complex molecular potentials? This question has now been answered by a team led by Prof. Dr. Giuseppe Sansone from the Institute of Physics at the University of Freiburg. The researchers from Freiburg, the Max Planck Institute for Nuclear Physics in Heidelberg and groups at the Universidad Autonoma in Madrid/Spain and the University of Trieste/Italy have published their results in the journal Nature Communications. In photoionization, also called the photoelectric effect, an atom or molecule absorbs one quantum of light, a photon, from an external field. The energy absorbed in this process is transferred to an electron, which is freed, leaving behind a singly charged ion. In many respects and for many applications, the effect can be regarded as instantaneous, meaning that there is no significant time delay between the absorption of the photon and the instant when the electron is emitted. However, several experiments conducted in recent years have shown that tiny but measurable delays in the attosecond range (1 as = 10⁻¹⁸ s) occur between these two processes. Generation of attosecond pulses "Thanks to the advanced laser sources and specially designed spectrometers available in our laboratory, we can generate the shortest bursts of light, lasting only a few hundred attoseconds," Sansone explains. "Moreover, we can reconstruct the orientation of simple molecules when they absorb a photon from an external laser pulse. We have used such pulses to investigate the motion of the electrons after the absorption of a photon." Electrons experience paths with potential peaks and valleys The researchers found that on its way out from the molecule, the electron experiences a complex landscape characterized by potential peaks and valleys. These are determined by the spatial distribution of the atoms composing the system. 
The path followed by the electron during its motion can affect the time it takes to be freed. Extension to more complex molecular systems possible In the experiment, the team measured the time delays accumulated by electrons emitted from CF4 molecules in different spatial directions, using an attosecond pulse train combined with an ultrashort infrared field. "Combining this information with the characterization of the spatial orientation of the molecule, we can understand how the potential landscape and, in particular, potential peaks affect the time delay," says the Freiburg physicist. The work can be extended to more complex molecular systems and to potentials changing on ultrashort timescales. In general, Sansone emphasizes, this approach could make it possible to map complex potential landscapes from within, with unprecedented temporal resolution.
10.1038/s41467-022-28783-x
Medicine
Diabetes patients at higher risk of deadly liver disease, finds study of 18 million people
Myriam Alexander et al, Risks and clinical predictors of cirrhosis and hepatocellular carcinoma diagnoses in adults with diagnosed NAFLD: real-world study of 18 million patients in four European cohorts, BMC Medicine (2019). DOI: 10.1186/s12916-019-1321-x Journal information: BMC Medicine
http://dx.doi.org/10.1186/s12916-019-1321-x
https://medicalxpress.com/news/2019-05-diabetes-patients-higher-deadly-liver.html
Abstract Background Non-alcoholic fatty liver disease (NAFLD) is a common condition that progresses in some patients to steatohepatitis (NASH), cirrhosis and hepatocellular carcinoma (HCC). Here we used healthcare records of 18 million adults to estimate risk of acquiring advanced liver disease diagnoses in patients with NAFLD or NASH compared to individually matched controls. Methods Data were extracted from four European primary care databases representing the UK, Netherlands, Italy and Spain. Patients with a recorded diagnosis of NAFLD or NASH (NAFLD/NASH) were followed up for incident cirrhosis and HCC diagnoses. Each coded NAFLD/NASH patient was matched to up to 100 “non-NAFLD” patients by practice site, gender, age ± 5 years and visit recorded within ± 6 months. Hazard ratios (HR) were estimated using Cox models adjusted for age and smoking status and pooled across databases by random effects meta-analyses. Results Out of 18,782,281 adults, we identified 136,703 patients with coded NAFLD/NASH. Coded NAFLD/NASH patients were more likely to have diabetes, hypertension and obesity than matched controls. HR for cirrhosis in patients compared to controls was 4.73 (95% CI 2.43–9.19) and for HCC, 3.51 (95% CI 1.72–7.16). HR for either outcome was higher in patients with NASH and those with high-risk Fib-4 scores. The strongest independent predictor of a diagnosis of HCC or cirrhosis was baseline diagnosis of diabetes. Conclusions Real-world population data show that recorded diagnosis of NAFLD/NASH increases risk of life-threatening liver outcomes. Diabetes is an independent predictor of advanced liver disease diagnosis, emphasising the need to identify specific groups of patients at highest risk. Peer Review reports Background Non-alcoholic fatty liver disease (NAFLD) is the most common cause of liver disease worldwide. NAFLD represents a spectrum of disease that includes simple steatosis, non-alcoholic steatohepatitis (NASH) and fibrosis [ 1 ]. 
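The abstract's pooled hazard ratios were obtained by random-effects meta-analysis across the four databases. A common implementation is the DerSimonian-Laird estimator; the sketch below pools made-up per-database hazard ratios (not the study's actual values) to show the mechanics:

```python
import math

# Sketch of random-effects pooling (DerSimonian-Laird), the kind of
# step used to combine hazard ratios across the four databases.
# The per-database hazard ratios and SEs below are invented.
hrs = [3.9, 5.6, 4.2, 6.1]          # hypothetical per-database HRs
ses = [0.35, 0.30, 0.40, 0.45]      # hypothetical SEs of log(HR)

y = [math.log(h) for h in hrs]      # work on the log scale
w = [1 / s**2 for s in ses]         # fixed-effect (inverse-variance) weights
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Between-study heterogeneity: Cochran's Q and the DL tau^2 estimate.
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects weights and pooled estimate with 95% CI.
w_star = [1 / (s**2 + tau2) for s in ses]
pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
se_pooled = math.sqrt(1 / sum(w_star))
lo, hi = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
print(round(math.exp(pooled), 2), round(lo, 2), round(hi, 2))
```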
The numbers of individuals presenting with end-stage complications of NASH, namely decompensated cirrhosis and hepatocellular carcinoma (HCC), are rising [ 2 , 3 ], and NASH is rapidly becoming the most common indication for liver transplantation [ 4 ]. Yet not all patients within the NAFLD spectrum progress, and for the majority, NAFLD is a benign condition [ 1 ]. A key clinical challenge is to identify the proportion of patients who are at high risk of developing advanced liver disease, so that interventions, including the many novel therapies in development, can be targeted to those at greatest need. Our current understanding of NAFLD epidemiology and progression largely derives from single-centre studies of small- or medium-sized cohorts and meta-analyses of these [ 5 , 6 , 7 ]. These studies, together with emerging data from placebo arms of therapeutic trials [ 8 ], have taught us that patients with existing evidence of progressive disease (e.g., fibrosis) are at risk of further progression to HCC and decompensated cirrhosis, albeit this may reflect a degree of lead-time bias. Such studies often involve formal assessment of well-phenotyped patients at inclusion but are, by design, selective and may not represent the ‘real-world’ situation for the majority of patients with NAFLD. Paired biopsy data have been reported, although the second biopsy is often performed because of clinical suspicion and not per study protocol, which may bias estimates of progression [ 9 ]. Real-world patients are socially and ethnically diverse, have comorbidities and concomitant medications or simply cannot commit to long-term studies or trials and therefore may not be represented by any of these study designs. Increasingly, real-world data derived from primary care electronic health records (EHR) of a sizeable proportion of the general population [ 10 , 11 ] are being used to address these issues. 
In many European countries, where healthcare is largely state-funded and there are low or absent primary care co-payments, the population has unrestricted access to healthcare via primary care physicians who act as gatekeepers for referral to secondary care [ 12 ]. People register with primary care centres at birth or when they move to an area in order to access healthcare; therefore, primary care EHR represent data that are as close to the ‘general’ population as possible. If a practice joins the database, all the patients at that practice are registered in the database and, although there is an option for individual patients to opt out, this is minimal (< 1%). In order to gain insights into the NAFLD spectrum of diseases in real-world patients, we extracted data from four large European primary care databases and identified a cohort of patients with a diagnosis of NAFLD or of NASH. Our aim in this study was to estimate the risk for patients with diagnoses of NAFLD or NASH to acquire a new diagnosis of cirrhosis and HCC and to understand the main predictors for this. Methods Databases Databases were accessed via the European Medical Information Framework (EMIF) network: The Health Search Database (HSD) in Italy [ 13 ], The Integrated Primary Care Information (IPCI) in the Netherlands [ 14 ], the Information System for the Development of Research in Primary Care (SIDIAP) in Spain [ 15 ] and The Health Information Network (THIN) in the UK [ 16 ] (Additional file 1 : Table S1). HSD collects electronic medical record data from a network of over 800 Italian GPs who are members of the Italian College of General Practitioners. IPCI is a longitudinal collection of electronic patient records from over 750 Dutch general practitioners, containing data from over 2 million patients. 
SIDIAP collects data from 274 primary care practices comprising 3414 basic care units [ 17 ], and THIN contains the electronic medical records of 11.1 million patients from 562 general practices in the UK, covering 6.2% of the UK population [ 18 ]. The data custodians for each database provided approval that the protocol of the study complied with local privacy laws. Anonymised data were extracted locally by each data custodian liaising with the EMIF Platform and using a data transformation tool called Jerboa Reloaded [ 10 ]. The data were then uploaded onto a secure remote server maintained by an independent academic centre (Erasmus Medical Centre Private Research Environment, Netherlands) and analysed centrally. Study design We conducted a matched cohort study. All patients with a diagnosis of NAFLD or NASH (termed NAFLD/NASH) prior to 01/01/2016 were identified in the four databases using harmonisation methods previously described [ 10 ]. Patients were included in the analysis if they were aged ≥ 18 at diagnosis and had medical records available for ≥ 12 months from registration with the practice. Exclusion criteria were missing information on age and sex, a record of alcohol abuse at any time prior to diagnosis and a history of liver morbidity within the 12 months prior to diagnosis [ 10 ] (see Additional file 1 : Supplementary Methods for exclusion diagnoses). Each NAFLD/NASH patient was matched with up to 100 ‘non-exposed’ controls who did not have a NAFLD or NASH diagnosis at or prior to the index date (defined as the date of diagnosis of the matched NAFLD/NASH patient). Matching was done by practice site, age at index date ± 5 years, sex and a visit at the practice within ± 6 months of the index date. In the THIN and SIDIAP databases, the terminology of the database (Read code and International Classification of Disease version 10, ICD10, respectively) allowed NAFLD and NASH diagnoses to be distinguished from each other. 
Therefore, in these databases, a matched control cohort was constructed for each of the diagnoses: NAFLD, NASH and, to enable comparison between all databases, NAFLD/NASH. If a patient had both NAFLD and NASH diagnoses recorded, the earliest event was used to define the index date of NAFLD/NASH diagnosis, and the NASH diagnosis was deemed an incident event. In HSD (ICD 9) and IPCI (IPCI Dutch), where NAFLD and NASH could not be distinguished, only one cohort (NAFLD/NASH) was defined and controls matched to this. Patients were followed up from the index date until the earliest of occurrence of cirrhosis, hepatocellular carcinoma or NASH (where this could be identified), end of the study period (31/12/2015) and loss to follow-up due to exit from the database or death. Events of interest were incident diagnosis of cirrhosis, hepatocellular carcinoma or NASH, where this could be identified. See Additional file 1 : Supplementary Methods for variable extraction and data analysis. Results Out of 18,782,281 eligible individuals in the four databases, we identified 136,703 (0.7%) who had a recorded diagnosis of either NAFLD or NASH (coded NAFLD/NASH) and who met the inclusion criteria (Additional file 1 : Table S1). The Spanish (SIDIAP) and UK (THIN) databases contributed 71% of all cases; the remaining 29% of coded NAFLD/NASH cases were from the Dutch (IPCI) and Italian (HSD) databases. In SIDIAP, 2.5% of all coded NAFLD/NASH patients ( n = 1880) had NASH, and in THIN, this was 4.7% ( n = 1212). Due to the coding, NAFLD and NASH could not be distinguished in IPCI and HSD. Therefore, in the initial phase of analysis, we combined all NAFLD and NASH codes from all four databases as coded NAFLD/NASH. Comparing coded NAFLD/NASH patients across the four databases, there were minor differences between databases in mean age, BMI and proportion with diabetes (Table 1 and Additional file 1 : Table S2).
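The control-matching rule described in Methods (same practice site, age at index date ± 5 years, same sex, and a practice visit within ± 6 months of the index date) can be sketched as a simple predicate. This is an illustrative sketch only; the field names are hypothetical and do not reflect the databases' actual schemas or the Jerboa tooling.

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)  # approximately 6 months

def is_eligible_control(case, control, index_date):
    """Matching rule from Methods: same practice site, age within +/- 5 years,
    same sex, and at least one practice visit within +/- 6 months of the
    index date (the matched case's diagnosis date). Field names hypothetical."""
    return (
        control["practice"] == case["practice"]
        and abs(control["age_at_index"] - case["age_at_index"]) <= 5
        and control["sex"] == case["sex"]
        and any(abs(visit - index_date) <= SIX_MONTHS for visit in control["visits"])
    )
```

In the study, each NAFLD/NASH patient was matched to up to 100 controls satisfying such criteria.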
BMI data were available in 64.6% of patients with coded NAFLD/NASH and in 45.9% of matched controls (Additional file 1 : Table S3). In the subset of patients for whom data were available, ALT and AST values were highest in THIN, and the proportion of obese patients was highest in SIDIAP. Sufficient data were available to calculate the non-invasive fibrosis Fib-4 score (age, AST, ALT and platelets) in 46.7% of patients (range 12.6–62.6%; Table 2 ). THIN (UK) had the smallest proportion of patients with Fib-4 data (12.6%), in whom the proportion of patients with high-risk scores was 10.5%, highest among the four databases. Table 1 Descriptive characteristics of coded NAFLD/NASH patients and matched unexposed cohorts Table 2 Distribution of Fib-4 scores in coded NAFLD/NASH patients shown for each country database Patients with a coded diagnosis of NAFLD/NASH had comparable age and sex distribution, smoking rates and duration of follow-up to matched controls (Table 1 ). As expected, however, controls had lower BMI; lower rates of obesity, hypertension or diabetes; and lower serum levels of ALT and AST. Risk of incident cirrhosis and HCC is higher in NAFLD/NASH patients compared to controls Combining all four databases, the median duration of follow-up was 3.3 years (IQR 1.8–5.3) totalling 531,452 person-years for patients with coded NAFLD/NASH and 43,385,495 person-years for controls. Among all coded NAFLD/NASH patients, the incidence of cirrhosis diagnosis was 0.76 per 1000 person-years (95% confidence interval (CI) 0.46 to 2.32), and the incidence of hepatocellular carcinoma diagnosis was 0.3 per 1000 person-years (95%CI 0.26 to 0.60; Additional file 1 : Table S4). Patients with coded NAFLD/NASH were at significantly higher risk of acquiring a new diagnosis of cirrhosis compared to controls, with a pooled HR of 4.73 (95%CI 2.43–9.19) after adjustment for age, smoking status and BMI (Fig. 1 ). Fig.
1 Association of coded NAFLD/NASH, NAFLD and NASH with cirrhosis. Hazard ratios and 95% confidence intervals for acquiring a new diagnosis of cirrhosis in each database and combined across databases (subtotal) Similarly, the risk of incident HCC diagnosis was significantly higher in coded NAFLD/NASH patients compared to controls. The pooled HR across the four databases for an incident diagnosis of HCC was 3.51 (95%CI 1.72–7.16; Fig. 2 ). There were no significant differences in the HRs when categorising patients by obesity, smoking, diabetes, hypertension, male sex or older age (Additional file 1 : Figure S1). There were no significant differences in the HRs for cirrhosis and HCC diagnoses following adjustment for age and smoking alone in all coded NAFLD/NASH patients compared to patients with available BMI data (Additional file 1 : Figures S2 and S3). This is despite the fact that patients with BMI data were more likely to be smokers (19.5% vs 11.2%), diabetic (26.9% vs 7.0%) and hypertensive (50.1% vs 27.9%; Additional file 1 : Table S5). Fig. 2 Association of coded NAFLD/NASH, NAFLD and NASH with hepatocellular carcinoma (HCC). Hazard ratios and 95% confidence intervals for acquiring a new diagnosis of HCC in each database and combined across databases (subtotal) Fib-4 predicts disease progression in patients with NAFLD/NASH In the subset of coded NAFLD/NASH patients in whom we could calculate Fib-4 ( n = 63,971; Additional file 1 : Table S3), the incidence of a new diagnosis of cirrhosis was significantly higher for the high-risk compared to low-risk category (HR 33.24, 95%CI 8.82–125.34), adjusting for age and smoking status, and more modest, albeit still significant, for the intermediate compared to low-risk group (HR 5.04, 95%CI 2.30–11.04; Additional file 1 : Figure S4A).
Similarly, compared to patients with low-risk scores, the incidence of an HCC diagnosis was higher in patients with indeterminate (HR 3.74, 95%CI 1.76–7.96) or high-risk scores (HR 25.2, 95%CI 7.83–80.66; Additional file 1 : Figure S4B). Distinguishing NAFLD from NASH diagnoses when estimating risk of cirrhosis and HCC The pooled HR for incident NASH diagnosis in patients with a coded diagnosis of NAFLD compared to controls was 7.75 (95%CI 2.56–23.51, p = 0.008), although this estimate is based on a very small number of individuals ( n = 130, of whom only seven were in SIDIAP; Additional file 1 : Figure S5). In the subset of patients with a coded diagnosis of NASH, the incidence of diagnoses of liver outcomes was higher than in those with NAFLD, albeit confidence intervals overlapped: 3.25 per 1000 person-years (95%CI 2.41–4.10) for cirrhosis and 1.16 per 1000 person-years (95%CI 0.67–1.65) for HCC (Figs. 1 and 2 ). Short time interval to cirrhosis diagnosis in patients with NAFLD and NASH In SIDIAP, 174 out of 75,415 patients with coded NAFLD were coded as having cirrhosis (incidence rate 0.66 per 1000 person-years, 95%CI 0.56–0.76), with a median time to the new diagnosis of 2.9 years, whereas 38 out of 1880 patients with NASH acquired a diagnosis of cirrhosis (incidence rate 2.83 per 1000 person-years, 95%CI 2.0–3.88; Additional file 1 : Table S4), with a similar median time to diagnosis of 3.0 years (Additional file 1 : Table S6). In THIN, the incidence of cirrhosis was higher and the interval between diagnoses was shorter for both stages of disease. One hundred three out of 24,743 patients with coded NAFLD acquired a cirrhosis diagnostic code (incidence rate 2.17 per 1000 person-years, 95%CI 1.86–2.51), with median time to diagnosis of 2.0 years, compared to 26 out of 1212 patients with coded NASH (incidence rate 5.81 per 1000 person-years, 95% CI 3.8–8.52), with median time to diagnosis of 0.5 years.
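The Fib-4 score used in the risk stratification above combines age, AST, ALT and platelet count. A minimal sketch follows; note that the low/high-risk cutoffs of 1.30 and 2.67 are the thresholds commonly cited for NAFLD in the literature, not values stated in this study, and are assumptions here.

```python
import math

def fib4(age_years, ast_u_per_l, alt_u_per_l, platelets_10e9_per_l):
    """Fib-4 = (age [years] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))."""
    return (age_years * ast_u_per_l) / (platelets_10e9_per_l * math.sqrt(alt_u_per_l))

def fib4_risk_band(score, low_cutoff=1.30, high_cutoff=2.67):
    # Cutoffs are assumptions taken from common NAFLD practice, not from this paper.
    if score < low_cutoff:
        return "low risk"
    if score > high_cutoff:
        return "high risk"
    return "indeterminate"
```

For example, a 61-year-old with AST 40 U/L, ALT 30 U/L and platelets 150 × 10⁹/L scores approximately 2.97, falling in the high-risk band.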
Diabetes predicts disease progression In coded NAFLD/NASH patients, the strongest association with incident liver outcomes was observed in patients who also had a past diagnosis of diabetes at baseline (HR 2.3, 95% CI 1.9–2.78). In matched controls without coded NAFLD/NASH, smoking was also associated with liver outcome (HR 1.5, 95% CI 1.41–1.6) in addition to the independent risk attributed to diabetes, which was higher than in patients with coded NAFLD/NASH (HR 2.92, 95% CI 2.76–3.08; Table 3 ). Table 3 Association between covariates and risk of liver outcomes: cirrhosis or hepatocellular carcinoma. Using a 1-step Cox model stratified by database Discussion To our knowledge, this is the largest study to date that has used EHR data to investigate rates of new diagnoses of advanced liver disease in patients with NAFLD. Our patients were well-matched to a very large number of controls according to sex, age, GP practice and most recent visit, thus limiting bias due to geographical and socioeconomic diversity and behaviours relating to health service utilisation. Patients with coded NAFLD/NASH are at significantly increased risk of acquiring a diagnosis of cirrhosis or HCC, compared to matched controls. The risk is greater in patients with a coded diagnosis of NASH compared to NAFLD and in those with high-risk Fib-4 fibrosis scores compared to indeterminate or low-risk scores. Diabetes is an independent risk factor for progression to either HCC or cirrhosis diagnoses in both coded NAFLD/NASH patients and matched controls. We applied minimal selection criteria and therefore were able to include over 78% of all adults registered in the databases, hence the ‘real-world’ nature of the study. The overall proportion of people with coded NAFLD/NASH diagnoses is lower than expected, as reported previously [ 10 ]; this is in keeping with other primary care work [ 19 ] and may reflect levels of awareness of NAFLD/NASH in primary care [ 20 , 21 ].
Hence, our data, by definition, can only represent the visible part of the clinical iceberg. Despite this, we find that patients with coded NAFLD/NASH acquire diagnoses of life-threatening liver disease within a relatively short follow-up period (median 3.3 years). It is not plausible that the short time intervals between coded diagnosis of NAFLD/NASH and advanced liver disease reflect true rates of disease progression, estimated to be one fibrosis stage per 7 years [ 22 ]. The acquisition of a new code in the healthcare record does not necessarily mean that pathological progression has occurred at that time, nor that the stage did not exist at baseline. Our interpretation of these data is that patients in Europe are being diagnosed at the later stages of disease, which are associated with greater risk of liver-related mortality [ 23 , 24 , 25 ]. Fewer than 50% of patients had sufficient data to calculate Fib-4, the components of which are also needed to calculate many other non-invasive fibrosis scores [ 26 ]. There was marked national variation in fibrosis assessment; 73.1% of patients in whom we could calculate Fib-4 were from the Spanish database. We have no way of determining whether these scores were actually calculated by clinicians and whether they influenced decision-making. This is despite the fact that such risk stratification is central to most guidelines [ 27 , 28 , 29 ], used to determine clinical management, select patients for clinical trials and probably triage patients for future therapy. In the databases where NAFLD/NASH codes could not be distinguished (HSD and IPCI), even those with low-risk Fib-4 scores were at increased risk of cirrhosis and HCC compared to controls. This further suggests that primary care records underestimate disease severity and that some patients with NAFLD/NASH diagnoses actually have advanced fibrosis or cirrhosis already.
Apart from a diagnosis of NAFLD/NASH, diabetes was the strongest independent risk factor for acquiring a diagnosis of cirrhosis or HCC. In the matched control population, the HR for diabetes was even higher than in the coded NAFLD/NASH cohort, which may reflect a significant number of individuals with undiagnosed NAFLD/NASH among the controls. The importance of diabetes is consistent with a review of patients who had undergone more than one biopsy in the course of their routine clinical care in the UK, which showed that diabetes was a risk factor for progression of fibrosis [ 9 ]. Obesity is an important risk factor for many cancers including HCC [ 30 ], but we did not observe this association in our study. If patients are diagnosed late in the disease spectrum, it is unlikely that they will have undergone surveillance, and HCC may be diagnosed at late stages when symptoms including weight loss are manifest. Taken together, these findings emphasise the need to recognise risk factors for progressive disease and to detect disease at early stages when interventions can be more effective. This study is subject to limitations. The nature of real-world data is such that we cannot ascertain the origin of codes nor the motivation for adding diagnoses to the patient record. Although the study is based in primary care, it is likely that a large proportion of diagnoses will have been made with some involvement of secondary care. It would be inaccurate to assume that all patients who carry the code ‘NASH’ have had a liver biopsy and histological assessment, and it might be that the diagnosis was assumed and recorded based on, for example, ultrasound evidence of fatty liver and elevated serum transaminases or increased stiffness on transient elastography. Similarly, it was not possible to confirm that the matched controls did not have NAFLD/NASH.
However, the clinical features of patients with coded NAFLD/NASH are consistent with the diagnostic codes, although if patients with NAFLD/NASH do exist in the control group then the effect sizes reported here are underestimates of the real risk. In turn, this means that there are individuals living with diabetes in primary care who have not been diagnosed with NAFLD/NASH but are at significantly increased risk of developing liver cirrhosis and cancer. The estimated size of the NAFLD problem has raised fears of large, unmanageable numbers of patients who are not at immediate threat of disease. Notwithstanding our expectation that many cases have not been identified in this study, we have shown that 0.6% of patients with an existing coded diagnosis of NAFLD/NASH acquire a diagnosis of cirrhosis and/or HCC within a 3-year follow-up period. This gives us insight into the rate at which advanced disease is discovered, even if this is not the natural history in the general population. The clinical impact of our data is that they highlight the large gaps in diagnosis and risk assessment of NAFLD and NASH, with variable rates of risk stratification, staging of disease and seemingly late diagnosis. Conclusions Our knowledge of NAFLD/NASH has largely been based on small, highly selected cohort studies. These have been accurate in telling us the potential scale of the prevalence and progression of disease, but the reality for many in the general population is some way from that. In order to affect population health and make an impact on the overall health burden of advanced liver disease, we cannot simply rely on introducing effective therapies to the small number of people with established diagnoses. The current approach of opportunistically investigating those in whom abnormalities in liver tests arise is clearly not working.
While better biomarkers are needed that identify those at risk more precisely, the current tools are not being used, leaving many patients unclear as to the stage of their disease and its significance to their health. Therefore, making an impact on advanced liver disease will need co-ordinated efforts to identify those with NAFLD, to stage their disease and target those at risk of progression. Abbreviations ALT: Alanine transaminase AST: Aspartate transaminase BMI: Body mass index CI: Confidence interval EHR: Electronic health record EMIF: European Medical Information Framework GP: General practitioner HCC: Hepatocellular carcinoma HSD: Health Search Database IPCI: Integrated Primary Care Information LFT: Liver function tests NAFLD: Non-alcoholic fatty liver disease NASH: Non-alcoholic steatohepatitis SIDIAP: Information System for the Development of Research in Primary Care THIN: The Health Improvement Network UK: United Kingdom US: United States
Many patients with potentially deadly liver cirrhosis and liver cancer are being diagnosed at late, advanced stages of disease, according to a study led by Queen Mary University of London and the University of Glasgow. The study of 18 million people across Europe also suggests that people living with type 2 diabetes are at particular risk of this 'silent disease' and should be monitored closely to prevent life-threatening disease progression. Non-alcoholic fatty liver disease (NAFLD) affects up to a quarter of people in the West and is the most common cause of liver disease around the world. It is closely associated with obesity and type 2 diabetes, and its rise mirrors the social problems of poor diets and sedentary lifestyles. GPs are often unaware of the condition and patients often go undiagnosed. For the majority, NAFLD is a benign condition, but one in six people will go on to develop the aggressive form of disease, called non-alcoholic steatohepatitis (NASH), leading to liver injury and scarring and, in some, eventually to cirrhosis, liver failure and even liver cancer. By identifying which patients might go on to develop the more aggressive disease, interventions and treatments could be targeted to those at greatest need. In the largest study of its kind, published in the journal BMC Medicine, the team combined the healthcare records of 18 million European adults from the UK, Netherlands, Italy and Spain. They matched each NAFLD patient to up to 100 patients who did not have a recorded diagnosis, and looked to see who developed liver cirrhosis and liver cancer over time. Lead researcher Dr. William Alazawi from Queen Mary University of London said: "We were surprised that the number of patients with recorded diagnoses of non-alcoholic fatty liver was much less than expected, meaning that many patients are actually undiagnosed in primary care.
Even over the short time frame of the study, some patients progressed to more advanced, life-threatening stages of disease, suggesting that they are being diagnosed very late. "The public, doctors and policy makers need to be aware of this silent disease and strategies need to be put in place to tackle the root causes and avoid progression to life-threatening stages. "People living with diabetes are at increased risk of more advanced, life-threatening stages of disease, suggesting that we should be focusing our efforts in educating and preventing liver disease in diabetes patients." Naveed Sattar from the University of Glasgow added: "Doctors treating patients with diabetes already have a lot to check on—eyes, kidneys, heart risks—but these results remind us that we should not neglect the liver, nor forget to consider the possibility of NASH. They also remind us that perhaps more efforts are needed to help our patients with diabetes lose weight and cut alcohol." More than 136,000 patients were identified with NAFLD/NASH and were more likely to have type 2 diabetes, hypertension and obesity than matched controls. The strongest association was observed in NAFLD/NASH patients who had a diagnosis of type 2 diabetes—they were more than twice as likely to develop aggressive liver disease. This suggests that diabetes could be a good predictor of liver disease progression. Looking at particular types of advanced liver disease, NAFLD/NASH patients were almost five times more likely to be diagnosed with cirrhosis and more than three and a half times more likely to be diagnosed with liver cancer. The study also found that NAFLD/NASH patients acquired diagnoses of life-threatening liver disease within a relatively short time (around 3.3 years). The researchers say that it is not plausible that this reflects true rates of disease progression.
The acquisition of a new diagnosis in the healthcare record does not necessarily mean that disease progression has occurred at that time, nor that the advanced disease did not exist at the time of the initial diagnosis. This suggests that patients in Europe are being diagnosed at the later stages of disease, which are associated with greater risk of liver-related mortality. The results also suggest that primary care records underestimate disease severity and that some patients with NAFLD diagnoses actually have advanced cirrhosis already. The research was funded by the European Union's Innovative Medicines Initiative and Dr. William Alazawi was funded by the Medical Research Council.
10.1186/s12916-019-1321-x
Medicine
One size does not fit all when it comes to marrow fat, scientists say
Nature Communications 6, Article number: 7808 DOI: 10.1038/ncomms8808 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms8808
https://medicalxpress.com/news/2015-08-size-marrow-fat-scientists.html
Abstract Marrow adipose tissue (MAT) accumulates in diverse clinical conditions but remains poorly understood. Here we show region-specific variation in MAT adipocyte development, regulation, size, lipid composition, gene expression and genetic determinants. Early MAT formation in mice is conserved, whereas later development is strain dependent. Proximal, but not distal tibial, MAT is lost with 21-day cold exposure. Rat MAT adipocytes from distal sites have an increased proportion of monounsaturated fatty acids and expression of Scd1/Scd2 , Cebpa and Cebpb . Humans also have increased distal marrow fat unsaturation. We define proximal ‘regulated’ MAT (rMAT) as single adipocytes interspersed with active haematopoiesis, whereas distal ‘constitutive’ MAT (cMAT) has low haematopoiesis, contains larger adipocytes, develops earlier and remains preserved upon systemic challenges. Loss of rMAT occurs in mice with congenital generalized lipodystrophy type 4, whereas both rMAT and cMAT are preserved in mice with congenital generalized lipodystrophy type 3. Consideration of these MAT subpopulations may be important for future studies linking MAT to bone biology, haematopoiesis and whole-body metabolism. Introduction Marrow adipose tissue (MAT) is a functionally distinct adipose depot, located within the skeleton, with the potential to contribute to both local and systemic metabolism 1 , 2 . Further accumulation of MAT occurs in a diverse range of clinical conditions including osteoporosis, ageing, gonadal dysfunction, type 1 diabetes and anorexia 2 , 3 . MAT formation is also induced with therapeutic interventions including radiation, chemotherapy, glucocorticoids and thiazolidinediones 1 , 3 . Despite these clinical findings, the regulation and function of MAT remains largely unclear. In many cases, MAT accumulation has been correlated with low bone mineral density, decreased bone formation and bone loss (reviewed in ref. 2 ). 
However, the presence of a direct relationship between MAT and bone remains controversial. For example, despite a clear correlation, increased MAT is not necessary for bone loss at the proximal tibia in rodent models of type 1 diabetes or ovariectomy-induced osteopenia 4 , 5 , 6 . In addition, histomorphometric studies in rats demonstrate that sites of high MAT have decreased ovariectomy-induced trabecular bone loss, with trabecular width in rat tibial metaphyses being greater at sites of high MAT (distal tibia) than at sites of low MAT (proximal tibia) 7 , 8 , 9 . The hypothesis that MAT is necessary for skeletal equilibrium is also supported by phenotypes of patients with congenital generalized lipodystrophy (CGL). A high proportion of patients with CGL1 or CGL2 (who lack MAT) develop pathological osteosclerosis and skeletal cysts between ages 10 and 20 years—the time in humans when MAT generally undergoes robust formation in a developmentally defined pattern in the affected skeletal regions 2 . In contrast, those with CGL3 or CGL4 (who retain MAT) fail to develop this pathology. These apparent contradictions emphasize the complex, context-specific relationship between MAT and bone, and likely the relationship between MAT and peripheral metabolism 1 , 10 . Although it is generally assumed that all marrow adipocytes are equivalent, a study by Tavassoli 11 in 1976 suggested that characteristics of red marrow adipocytes may differ to those of adipocytes within yellow marrow. In humans, formation of adipocytes within the yellow marrow occurs at or slightly before birth, regardless of prematurity, and accelerates between 4 and 8 weeks of age 2 , 12 . Early MAT formation occurs in distal skeletal regions including the hands, feet, distal tibia and tail (in rodents). Histologically, once this early MAT matures, the densely packed adipocytes resemble peripheral white adipose tissue (WAT) and are relatively devoid of active haematopoiesis. 
For the purposes of discussion in this paper, we define these areas as constitutive MAT (cMAT). After the initial peak, MAT accumulation continues in areas of red, haematopoietic marrow throughout life 13 . We refer to this population as regulated MAT (rMAT) and define it histologically as single adipocytes interspersed with sites of active haematopoiesis. It is important to note that, especially in larger species, both histological patterns may exist side by side. In rats and mice, however, these regions appear to be more spatially distinct. We hypothesized that the later-forming rMAT adipocytes would have characteristics distinct from the cMAT adipocytes that arise early in development. Herein, we address this hypothesis using mouse models to examine MAT formation and regulation during development and with cold exposure; lipidomics and proton magnetic resonance (MR) spectroscopy ( 1 H-MRS) to measure MAT lipid composition in rats and humans; MAT isolated from rats to quantify molecular differences in gene expression; and CGL3 and CGL4 mouse models that reveal a genetic basis for development of distinct rMAT and cMAT subpopulations. In sum, this evidence distinguishes rMAT from cMAT—a fundamental finding that may help to explain previous inconsistencies in the literature and inform future research on the relationship between MAT, bone, haematopoiesis and whole-body metabolism. Results Strain-specific MAT development in mice The postnatal development of MAT remains poorly characterized on a spatiotemporal level. We used osmium tetroxide staining to visualize and quantify MAT in the whole tibia of male C57BL/6J (B6) and C3H/HeJ (C3H) mice at 1, 4, 12 and 56 weeks of age ( Fig. 1 ). At 1 and 4 weeks, the initial phase of MAT development was similar in both strains. In the distal tibia, MAT formation and maturation accelerated rapidly after birth until the marrow space filled with adipocytes at 4 weeks of age. 
The amount of MAT distal to the junction of the tibia and fibula was similar between B6 and C3H strains through 12 weeks and remained relatively stable until 56 weeks in C3H animals ( Fig. 1a,b ). A parallel pattern of development occurred in the caudal vertebrae of the tail, with mature MAT filling the marrow space by 4 weeks of age ( Fig. 1c ). At this time, MAT in the tail vertebrae matched the histological appearance of cMAT as defined above. Figure 1: Quantification of MAT development in C57BL/6J and C3H/HeJ mice from 1 to 56 weeks of age. ( a ) Osmium-stained tibiae were scanned by μCT and were reconstructed with the decalcified bone overlaid. Representative images of the data are presented in ( b ). Marrow fat is dark grey and bone is light grey. ( b ) Region-specific quantification of MAT volume (biological replicate N =5 (1 week old), 7 (4 weeks old), 9 (C3H 12 weeks old) and 11 (B6 12- and 56 weeks old)). Regions as defined in a include the proximal epiphysis (Prox Epi), the growth plate to the tibia/fibula (Tib/Fib) junction (GP to T/F J) and the tibia/fibula junction to the end of the bone (T/F J to end). a Two-tailed t -test, P <0.05 for total tibial MAT volume compared between strains at a given age. ( c ) Representative histology of caudal vertebrae (biological replicate N = 5), × 10 objective (scale bars, 200 μm). All graphs represent mean ± s.d. In contrast to the distal tibia and tail, rMAT within the middle and proximal tibia was highly variable in both volume and rate of development ( Fig. 1a,b ). By 12 weeks, MAT development diverged, with robust expansion in the proximal tibial marrow of C3H, but not B6, mice. Thus, C3H mice had nearly twice as much total MAT as B6 at this age. Surprisingly, by 56 weeks these differences in total MAT disappeared ( Fig. 1b ); however, the distribution of the cells within the tibia remained divergent, with C3H mice having increased MAT volume in the proximal regions of the tibia ( Fig. 1b ).
These distinct developmental characteristics suggest discrete MAT populations, designated cMAT (distal tibia and tail vertebrae) and rMAT (mid- to proximal tibia). To examine the developmental relationship between MAT and bone, we also analysed the tibiae of 12- and 56-week-old animals both before decalcification and again after osmium staining. Osmium-based localization of MAT in three dimensions demonstrated its asymmetric distribution within the tibial marrow cavity ( Fig. 2a,b ). In both B6 and C3H mouse strains, MAT accumulation with age in the proximal metaphysis occurred most robustly in the medial marrow space. In the mid-diaphysis, B6 MAT continued to approximate the medial endocortical surface, whereas C3H MAT closely followed the posterior cortex ( Fig. 2d,e ). Development of trabecular and cortical bone was similar to what has been reported previously ( Fig. 2c,f ; ref. 14 ). In addition to increases in MAT with age in both strains, trabecular number decreased and thickness increased. Thus, in the proximal metaphysis across the 12- and 56-week-old groups, MAT volume correlated negatively with trabecular number (linear regression B6, P =0.007; C3H, P =0.005) but positively with trabecular thickness (linear regression B6, P =0.010; C3H, P <0.001). Figure 2: Trabecular and cortical development in C57BL/6J and C3H/HeJ mice at 12 and 56 weeks of age. ( a , b ) Representative images of the proximal tibial metaphysis both before decalcification and after osmium staining of the data presented in c . Marrow fat is in white. ( c ) Quantification of trabecular parameters and MAT volume in the proximal tibial metaphysis (biological replicate N =9 (C3H 12 weeks old) and 11 (B6 12- and 56 weeks old)). ( d , e ) Representative images of the mid-tibial diaphysis both before decalcification and after osmium staining of the data presented in panel f . Marrow fat is in white. 
( f ) Quantification of cortical parameters (biological replicate N =9 (C3H 12 weeks old) and 11 (B6 12- and 56 weeks old)). Scale bars=500 μm. *Two-tailed t -test, P <0.05. All values represent mean±s.d.

Differential loss of MAT with cold exposure

Cold exposure in rodents elevates sympathetic tone and results in extensive remodelling of WAT, which enhances thermogenesis and allows maintenance of body temperature and survival at 4 °C (ref. 15 ). However, the response of MAT to cold temperatures is unknown. To quantify changes in MAT after 21-day cold exposure (4 °C), we analysed MAT in the whole tibia of male C3H mice at 12 and 56 weeks of age. The C3H strain was used based on the robust proportion of MAT in both proximal and distal regions of the tibia ( Fig. 1 ), allowing for simultaneous analysis of rMAT and cMAT populations within the same bone. After cold exposure in 12-week-old mice, the amount of rMAT decreased by 76% in the tibial epiphysis and 71% in the proximal tibia, between the growth plate and the tibia/fibula junction ( Fig. 3a,b ). In 56-week-old mice, rMAT decreased by 56 and 71%, respectively ( Fig. 3c ). In contrast, cMAT in the distal tibia, below the fibular attachment, remained unchanged ( Fig. 3a–c ). MAT loss at the proximal tibial metaphysis was most prominent in the centre of the marrow space with a relative preservation of the adipocytes that were directly adjacent to the endocortical surface ( Supplementary Fig. 1A,B ). This is the reverse of the developmental pattern of proximal MAT accumulation ( Fig. 2a,b ). Despite the robust loss of rMAT, trabecular and cortical parameters in the tibia remained largely unchanged ( Supplementary Fig. 1C,D ); indeed, the only significant finding was a slight decrease in the relative cortical bone volume in the 12-week-old C3H mice. Figure 3: Differential loss of MAT with cold exposure. ( a ) Representative osmium-stained tibiae scanned with μCT of the data presented in b and c .
Marrow fat in dark grey, decalcified bone overlaid in light grey. ( b ) Region-specific quantification of tibial MAT (as defined in Fig. 1a ) from 12-week-old mice normalized to marrow volume (biological replicate N =11 (4 °C) and 11 (22 °C)). ( c ) Region-specific quantification of tibial MAT from 56-week-old mice normalized to marrow volume (biological replicate N =9 (4 °C) and 11 (22 °C)). ( d ) Adipocyte size distribution from the proximal tibial metaphysis (proximal) or ( e ) distal diaphysis below the tibia/fibula junction (distal) as measured by nanoCT in the 12-week-old mice (biological replicate N = 5). Histogram bin size 250. ( f ) Representative histology, based on quantification in d , of osmium-stained samples at the proximal tibia shows a decrease in adipocyte size. Scale bars, 50 μm. ( g ) Estimation of region-specific adipocyte number was performed by dividing the total adipocyte volume (from μCT) by the average adipocyte volume (nanoCT) in the proximal tibia (growth plate to tibia/fibula junction) and the distal tibia (tibia/fibula junction to the distal end). *( b , c , g ) Two-tailed t -test, ( d , e ) two-way analysis of variance with Sidak’s multiple comparisons test, P <0.05. RT, room temperature. All graphs represent mean±s.d.

For osmium-based MAT analysis, we use a micro-computed tomography (μCT) voxel size of 12 μm, which allows rough outlines of the marrow adipocytes to be observed (the average MAT cell diameter is 30–40 μm). However, at this resolution, μCT might be unable to detect more subtle changes in regions of densely packed adipocytes, such as those in the distal tibia. To test this, we re-scanned the bones from the 12-week-old mice at a voxel size of 2 μm using nano-computed tomography (nanoCT). The resolution of these scans was sufficient to clearly identify individual adipocytes ( Supplementary Fig. 2 ).
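The adipocyte-number estimate described in panel g above (total MAT volume from μCT divided by the average single-cell volume from nanoCT) can be sketched as follows. Treating each adipocyte as a sphere is a simplification for illustration, and the example values are hypothetical rather than the authors' measurements.

```python
import math

def sphere_volume(diameter_um):
    """Volume (um^3) of a sphere of the given diameter; each adipocyte
    is approximated as a sphere for this back-of-envelope estimate."""
    r = diameter_um / 2.0
    return (4.0 / 3.0) * math.pi * r ** 3

def estimate_adipocyte_number(total_mat_volume_um3, mean_diameter_um):
    """Region-specific cell count: total MAT volume (from uCT) divided
    by the average single-adipocyte volume (from nanoCT sizing)."""
    return total_mat_volume_um3 / sphere_volume(mean_diameter_um)
```

For a mean diameter of ~35 μm (within the 30–40 μm range quoted above), a 1 mm³ MAT region would contain on the order of tens of thousands of adipocytes.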
Using a digital histology approach, we quantified adipocyte sizes in two-dimensional nanoCT DICOM slices ( Supplementary Fig. 2 ; ref. 16 ). To determine the adipocyte size distribution, we measured the two-dimensional area of 300–400 individual adipocytes in the proximal tibial metaphysis and at the midpoint of the distal tibia ( Supplementary Fig. 2 ). Consistent with our μCT results for total MAT volume, adipocytes in the proximal tibia decreased in size, whereas those in the distal tibia remained unchanged ( Fig. 3d–f ). This confirmed our μCT interpretation and the validity of the osmium/μCT method for total MAT volume quantification, even in adipocyte-dense regions such as the distal tibia 17 . Together, the μCT and nanoCT data revealed that in response to cold exposure, proximal rMAT adipocytes decrease in both size and number, whereas the adipocytes in the distal tibia are unchanged ( Fig. 3g ).

Average adipocyte size of rMAT and cMAT adipocytes

Adipocyte size is a parameter that has historically been used to track metabolic responsiveness of individual cells. Analysis of the 12-week-old C3H animals at room temperature revealed that cMAT adipocytes are significantly larger than rMAT adipocytes ( Fig. 4a ), with average diameters of 37.8±1.2 and 32.5±2.4 μm, respectively (two-tailed t -test, P =0.002). This 16% increase in cMAT adipocyte diameter extrapolates to an estimated 54.6% increase in cMAT adipocyte volume. cMAT adipocytes were also larger in rats ( Fig. 4b,c ), with cMAT adipocytes in tail vertebrae being 24 or 17% larger in diameter than tibial rMAT adipocytes in males or females, respectively (male cMAT versus rMAT, 38.9±1.9 versus 31.4±1.6 μm, two-tailed t -test, P <0.001; female cMAT versus rMAT, 38.9±1.6 versus 33.1±3.2 μm, two-tailed t -test, P =0.003). The cell size distributions for each group are presented as histograms in Fig. 4 . Figure 4: Quantification of rMAT and cMAT adipocyte size.
Adipocyte size quantification from ( a ) rMAT in the proximal tibia and cMAT in the distal tibia of 12-week-old C3H mice (biological replicate N =5), ( b ) rMAT in the proximal tibia and cMAT in the caudal vertebrae of 16-week-old male Sprague–Dawley rats (biological replicate N =12), and ( c ) rMAT in the mid-tibia and cMAT in the caudal vertebrae of 19-week-old female Sprague–Dawley rats (biological replicate N =5 rMAT and 6 cMAT). *Two-way analysis of variance with Sidak’s multiple comparisons test, P <0.05. All graphs represent mean±s.d.

Region-specific fatty-acid content of MAT

The mechanisms underlying site-specific regulation of marrow adipocytes with cold exposure could be related to differences in the local microenvironment and/or between adipocyte subpopulations. With the exception of Tavassoli 11 , previous work on marrow fat has assumed that all MAT adipocytes are equivalent. To begin testing the validity of this assumption and thus determine whether the microenvironment is the sole mediator, we characterized the lipidomic profile of the proximal rMAT and distal cMAT adipocytes. We started with lipidomics because the work of Tavassoli 11 suggests that marrow adipocytes may have a region-specific lipid composition. In addition, our ‘marrow fat consortium’ group previously developed techniques to estimate the lipid unsaturation of MAT in the human skeleton using 1 H-MRS 18 ( Supplementary Fig. 3 ). We applied this method to measure marrow lipid unsaturation in four regions of the human appendicular skeleton, including the femur (proximal metaphysis and mid-diaphysis) and the tibia (mid-diaphysis and distal metaphysis). We found that, in humans, the distal tibia had an increased unsaturation index relative to the proximal femur ( Fig. 5a ), implying that distal marrow adipocytes contain more unsaturated lipids than those in proximal/central skeletal regions. Figure 5: Region-specific lipid saturation of human and rat MAT.
( a ) Marrow unsaturation at four sites in the human leg was compared using 1 H-MRS. Marrow at the metaphysis of the distal tibia (T) had a higher unsaturation index than marrow in the proximal femur (F) (biological replicate N =5). Mean±s.e. ( b ) Adipocytes were isolated from four regions of the rat skeleton. Red outlines indicate rMAT sites including femur/proximal (Prox) tibia and lumbar vertebrae. Grey and black outlines indicate cMAT sites including distal tibia and caudal vertebrae. Histology of intact rat rMAT and cMAT before adipocyte isolation representative of the animals from Fig. 4b (biological replicate N =12). Objective × 40, scale bars, 50 μm. ( c ) Principal components analysis of normalized fatty acids from three independent experiments (23 fatty acids and 44 unique biological samples). Raw data presented in Supplementary Data 1 . Visceral WAT (vWAT) includes 5 gonadal and 3 perirenal; subcutaneous WAT (scWAT) includes 12 inguinal; rMAT includes 3 lumbar vertebrae and 6 femur/proximal tibia; cMAT includes 3 distal tibia and 12 caudal vertebrae (samples were derived from 13 unique animals). Dashed line = 95% confidence interval. ( d ) Proportion of fatty acids with one or more double bonds relative to total lipid in adipocytes from perirenal visceral WAT (vWAT), inguinal scWAT, rMAT from femur/proximal tibia (rMAT T/F), rMAT from lumbar vertebrae (rMAT Vert), cMAT from distal tibia (cMAT tibia) and cMAT from tail vertebrae (cMAT Vert). Representative data presented as mean±s.d. (as presented, biological replicate N =3). Experiment repeated with similar results in three animal cohorts with a total of 44 samples from 13 rats as outlined in Supplementary Data 1 . *One-way analysis of variance with Tukey’s multiple comparisons test, P <0.05. 
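The site-by-site 1H-MRS comparison above reduces to computing an unsaturation index per skeletal site and ranking sites. As a minimal sketch, assuming the index is simply olefinic (unsaturated-proton) peak area divided by total lipid signal (the published method 18 defines the spectral fitting precisely, so this is an illustrative simplification):

```python
def unsaturation_index(olefinic_area, total_lipid_area):
    """Fraction of the lipid signal arising from olefinic (~5.3 ppm)
    protons; the exact peak assignment here is an assumption."""
    return olefinic_area / total_lipid_area

def rank_sites(site_peaks):
    """Order skeletal sites from most to least unsaturated marrow lipid.
    site_peaks maps site name -> (olefinic_area, total_lipid_area)."""
    idx = {s: unsaturation_index(o, t) for s, (o, t) in site_peaks.items()}
    return sorted(idx, key=idx.get, reverse=True)
```

With the pattern reported above, the distal tibia would rank ahead of the proximal femur.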
Since the human model relies on indirect analysis of intact marrow, we developed a modified collagenase digestion protocol to purify adipocytes from the rat bone marrow for direct lipidomic analyses 19 . Adipocytes from WAT (perirenal, gonadal and inguinal) were used as a control. The rMAT regions included the femur/proximal tibia and lumbar vertebrae, whereas the cMAT regions included the distal tibia and caudal vertebrae ( Fig. 5b ). Adipocytes were isolated from a diverse population of rats including (experiment no. 1) 1-year-old female high-capacity runner rats 20 , (experiment no. 2) 16-week-old male Sprague–Dawley rats and (experiment no. 3) 8-month-old female Sprague–Dawley rats. After isolating adipocytes, we extracted total lipid with methanol–chloroform and then used gas chromatography (GC) for lipidomic analysis of esterified fatty acids. In the adipocyte, the vast majority of fatty acids are derived from triacylglycerols with minor contributions from species such as phospholipids. Palmitate, stearate and their unsaturated derivatives were the most common, accounting for >90% of the total lipid. To standardize between experiments, we expressed each fatty-acid subtype as a per cent of the total lipid. The raw data for all experiments are presented in this format as Supplementary Data 1 . This standardized data set was used to perform principal component analysis of the 23 fatty-acid subtypes across three independent experiments. In total, 44 unique lipidomic profiles of purified adipocytes from MAT (8 rMAT and 15 cMAT), visceral WAT (5 gonadal and 3 perirenal) and subcutaneous WAT (scWAT; 12 inguinal) were compared ( Supplementary Data 1 ). Despite the diversity in the animal cohorts, all forms of WAT were tightly clustered while there was a clear separation of cMAT from rMAT and WAT ( Fig. 5c ).
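The standardization and clustering just described can be sketched with NumPy: each profile is expressed as a per cent of total lipid, then mean-centred profiles are projected onto their leading principal axes via a singular value decomposition. This is a minimal stand-in for the published analysis, not the authors' exact pipeline (which also drew confidence intervals and used 23 named fatty acids).

```python
import numpy as np

def normalize_to_percent(fatty_acid_matrix):
    """Express each fatty acid as a per cent of each sample's total lipid.
    Rows are samples; columns are fatty-acid subtypes."""
    totals = fatty_acid_matrix.sum(axis=1, keepdims=True)
    return 100.0 * fatty_acid_matrix / totals

def pca_scores(X, n_components=2):
    """Scores on the first principal components of the mean-centred
    profiles, via SVD (a minimal sketch of the PCA behind Fig. 5c)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Plotting the first two score columns, coloured by depot (rMAT, cMAT, visceral WAT, scWAT), would reproduce the kind of separation shown in Fig. 5c.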
Consistent with the human data, the per cent of unsaturated fatty acids relative to total lipid was highest in the cMAT adipocytes purified from the distal tibia and the tail vertebrae ( Fig. 5d ). The increased proportion of unsaturated fatty acids in the rat cMAT adipocytes and separation from rMAT/WAT adipocytes on the principal component plot was primarily driven by decreases in palmitate and stearate, and corresponding increases in their monounsaturated derivatives palmitoleate and oleate ( Supplementary Data 1 ). This resulted in a robust increase in the monounsaturated-to-saturated ratio for these fatty acids ( Fig. 6a,b ). This change was greater in cMAT adipocytes from the tail vertebrae when compared with the cMAT from the distal tibia, indicating that the distal tibia may be a region of mixed MAT. Consistent with the increased proportion of the unsaturated fatty acids palmitoleate and oleate, expression of stearoyl-CoA desaturase-1 ( Scd1 ) was elevated in both male and female cMAT adipocytes relative to adipocytes isolated from scWAT ( Fig. 6c,d ). Elevated expression of desaturases including Fads1 and Fads2 was also noted in both males and females, with inconsistent elevations in Scd2 (males only) and Fads3 (females only; Fig. 6c,d ). Expression of mitochondrial glycerol-3-phosphate acyltransferase ( Gpam ), an enzyme that preferentially incorporates saturated fatty acids during synthesis of glycerolipids, was similar between scWAT and cMAT in both cohorts. Figure 6: Gene expression of desaturases in isolated adipocytes. ( a ) Proportion of C16:1n7-palmitoleate relative to C16:0-palmitate. ( b ) The proportion of C18:1n9-oleate relative to C18:0-stearate. Representative data presented as mean±s.d. (as presented, biological replicate N =3). Repeated with similar results in three animal cohorts with samples from 13 total rats as detailed in Supplementary Data 1 . 
Transcript expression in isolated constitutive MAT (cMAT) and subcutaneous WAT (scWAT) adipocytes normalized to scWAT from ( c ) 16-week-old male Sprague–Dawley rats (biological replicate N =6 cMAT (two animals pooled per sample) and N =12 scWAT) and ( d ) 8-month-old female Sprague–Dawley rats (biological replicate N =3 cMAT (two animals pooled per sample) and N =5 scWAT). Presented as mean±s.d. *( a , b ) One-way analysis of variance with Tukey’s multiple comparisons test, ( c , d ) two-tailed t -test, P <0.05.

Region-specific transcription factor expression

Differentiation of adipocytes from precursor cells is tightly regulated by a defined transcriptional cascade (see ref. 22 for review). The transcription factors CCAAT/enhancer-binding protein (C/EBP)-β and -δ are induced during early adipogenesis. These factors then activate expression of the essential adipogenic transcription factors peroxisome proliferator-activated receptor-γ and C/EBPα (ref. 23 ). Sterol regulatory element-binding protein-1 (encoded by Srebf1 ) serves as a transcriptional activator that is required for lipid homeostasis in mature adipocytes. Unexpectedly, in cMAT adipocytes, expression of both Cebpa and Cebpb was elevated relative to rMAT and/or scWAT adipocytes from male and female rats ( Fig. 7a–c ). Expression of Srebf1 was elevated in cMAT and rMAT adipocytes of males, but not females. In contrast, Pparg was similar between cMAT/rMAT/WAT in males but increased in cMAT relative to scWAT in females. The similar or elevated expression of Pparg in MAT reinforces the notion that these cells are of the adipocyte lineage, but the selective elevation of Cebpa and Cebpb in cMAT adipocytes suggests potential for alternative transcriptional regulation and function in this unique adipocyte population. Figure 7: Gene expression of transcription factors in isolated adipocytes.
Transcript expression in isolated adipocytes from subcutaneous WAT (scWAT), constitutive MAT (cMAT) and/or regulated MAT (rMAT) normalized to scWAT from ( a ) 16-week-old male Sprague–Dawley rats (biological replicate N =6 cMAT (two animals pooled per sample) and 12 scWAT), ( b ) 8-month-old female Sprague–Dawley rats (biological replicate N =3 cMAT (two animals pooled per sample) and 5 scWAT), and ( c ) 16-week-old male Sprague–Dawley rats (biological replicate N =5, four animals pooled per sample). Presented as mean±s.d. *( a , b ) Two-tailed t -test, ( c ) one-way analysis of variance, P <0.05.

Knockout of PTRF inhibits formation of rMAT adipocytes

Patients with CGL lose a majority of their peripheral WAT; however, magnetic resonance imaging (MRI) scans indicate that MAT is preserved in those with mutations in CAV1 (CGL3) and PTRF (CGL4) (reviewed in ref. 2 ). Caveolin-1 (encoded by CAV1 ) is a key structural component of caveolae, 50–100 nm invaginations of the plasma membrane that account for up to 50% of the surface of peripheral white adipocytes 24 . PTRF encodes cavin-1, a protein required for stabilization of caveolins and formation of caveolae 25 , 26 , 27 , 28 . Caveolae and their associated proteins coordinate many diverse signalling pathways and have been identified as key regulators of insulin sensitivity, lipid trafficking and adipocyte precursor differentiation 29 , 30 . To explore the preservation of MAT in CGL3 and CGL4, we quantified region-specific changes in MAT of adult male and female Cav1 and Ptrf knockout mice at 16–17 weeks of age. The metabolic and peripheral adipose tissue phenotypes of these animals have been reported previously 26 , 31 , 32 , 33 . Consistent with the CGL3 human phenotype 34 , Cav1 knockout mice did not lose MAT ( Fig. 8a–c and Supplementary Fig. 4 ), despite a significant decrease in the amount of peripheral WAT ( Supplementary Fig. 5A,B ).
As with MAT, trabecular bone at the proximal tibial metaphysis and cortical bone at the mid-diaphysis remained unchanged in the Cav1 knockout animals ( Supplementary Fig. 5C,D ). Figure 8: Differential loss of MAT in mice with knockout of Cav1 or Ptrf . ( a ) Representative osmium-stained tibiae scanned with μCT based on data in b and c . Marrow fat in dark grey, decalcified bone overlaid in light grey. ( b , c ) Region-specific quantification of tibial MAT volume by μCT (biological replicate N =6–9 as indicated on the graph). Box plot centre line represents median, box extends from the 25th to 75th percentile, whiskers indicate range. ( d ) Representative histology of caudal vertebrae based on data in e , × 10 objective (scale bars, 200 μm). ( e ) Adipocyte size distribution of the caudal marrow adipocytes as measured by histology (biological replicate N =5). Histogram bin size 250. Presented as mean±s.d. *( b ) Non-parametric Mann–Whitney test, ( c ) two-tailed t -test, ( e ) two-way analysis of variance with Sidak’s multiple comparisons test, P <0.05. KO, knockout; WT, wild type.

In addition to loss of WAT ( Supplementary Fig. 6 ), in male mice knockout of Ptrf caused nearly complete loss of proximal tibial rMAT adipocytes with a relative preservation of cMAT in the distal tibia ( Fig. 8a–c ). Based on the three-dimensional reconstructions of the tibiae from Ptrf knockout animals, only the most distal portion of the MAT in the tibia was maintained while there was mixed preservation moving towards the tibia/fibula junction ( Fig. 8a ). This finding, similar to the lipidomic data in the rats, suggests a possible mixture of rMAT and cMAT adipocytes in the distal tibia. In contrast, the tail vertebrae of male Ptrf knockout mice remained completely filled with MAT ( Fig. 8d ), and these vertebral cMAT adipocytes were of the same size as those of wild-type animals ( Fig. 8e ).
Except for a 4.6% increase in cortical bone mineral content, trabecular and cortical parameters did not differ between the Ptrf knockout males and their wild-type counterparts ( Fig. 9c and Supplementary Fig. 6 ). Figure 9: Trabecular morphology versus MAT in the tibia and L4 vertebrae of Ptrf KO mice. ( a , b ) Representative images of the proximal tibial metaphysis both before decalcification and after osmium staining based on data in c . Marrow fat is in white. Scale bars, 500 μm. ( c ) Quantification of trabecular parameters and MAT volume in the proximal tibial metaphysis (biological replicate N =5 (female KO), 6 (female WT), 7 (male KO) and 9 (male WT)). ( d ) Quantification of trabecular parameters in the vertebral body of L4 (biological replicate N =5 (WT) and 4 (KO)). KO, knockout; WT, wild type. All values represent mean±s.d. *Two-tailed t -test, P <0.05. a Non-parametric Mann–Whitney test, P <0.05.

Similar decreases in rMAT and WAT were observed in the female Ptrf knockout mice ( Fig. 9a and Supplementary Figs 4 and 6 ). Unlike males, the female Ptrf knockout mice had a significant 14.3% increase in trabecular number and corresponding 8.5% decrease in trabecular spacing ( Fig. 9c ). In addition, relative to males, both control and Ptrf knockout females had increased MAT volume in the proximal tibia ( Fig. 9 ) that was inversely correlated with trabecular number at the proximal tibial metaphysis ( P =0.021). Interestingly, the trabecular bone phenotype of the females was even more striking in the L4 vertebral body, with a 21.1% increase in trabecular number, 21.5% decrease in spacing, 22.9% increase in bone volume fraction and 21.0% increase in bone mineral content ( Fig. 9d ). Consistent with previous reports 10 , we did not observe MAT adipocytes in the lumbar vertebrae in either the wild-type or knockout females. As with the tibia, the trabecular phenotype of the vertebrae was unaffected by Ptrf knockout in males.
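The correlation statistics quoted in this section (for example, MAT volume versus trabecular number) come down to a Pearson correlation and its associated t test. A minimal pure-Python sketch, assuming a simple linear-regression-style test of r against zero (the actual analyses were run in standard statistics software):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples,
    the quantity behind the regression P values quoted above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for testing r != 0 with n samples (df = n - 2);
    a P value follows from the t distribution with n - 2 df."""
    return r * math.sqrt((n - 2) / (1 - r * r))
```

A negative r between MAT volume and trabecular number with a large |t| would correspond to the inverse correlations reported above.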
Discussion

Our results demonstrate that there are region-specific differences in development, regulation, adipocyte size, lipid composition, gene expression and genetic determinants of marrow adipocytes that have implications for understanding the marrow niche and its relationship to skeletal and whole-body metabolism. Localization of osmium-stained adipocytes in three dimensions demonstrated that MAT in mice develops asymmetrically from distal to proximal. A similar pattern of early development occurs in vertebrate species including rats, rabbits and humans. However, the absolute rate of formation decreases with increasing lifespan/size of the animal. For example, the ‘adult’ distribution of MAT in humans occurs around age 25 years, in rabbits by 6 months and in mice as early as 8 weeks—likely with some relationship to sexual maturation 12 , 13 , 35 , 36 . The amount of MAT that forms during this phase also varies between species; larger animals have more MAT that extends farther into the skeleton than smaller animals (humans > rabbits > rats > mice). In addition, we found that MAT forms in two distinct temporal waves that are spatially separated in mice and correspond histologically to rMAT in red marrow and cMAT in yellow marrow ( Fig. 10 ). Conversely, MAT loss with cold exposure is the opposite of development—the last to form is the first to go. The cMAT in the distal tibia and tail, in particular, is highly resistant to dissolution. While the microenvironment likely plays a major role in these site-specific responses, our lipidomic and gene expression data identify cell autonomous differences between the rMAT and cMAT adipocytes that might also contribute to their distinct behaviours. Figure 10: rMAT versus cMAT summary. In the human and mouse tibia, cMAT is present in the distal portion of the bone.
The marrow shifts to red towards the proximal tibia, occurring near the tibia/fibula junction in the rodent and in the proximal tibial metaphysis or femur in the human. The red marrow contains rMAT adipocytes. In some cases, especially in larger species such as humans, the histological patterns that correspond to rMAT and cMAT adipocytes may be present in the same region. The bones have been shaded in orange to indicate this possibility. cMAT is the first to develop and histologically appears as sheets of confluent adipocytes that are relatively devoid of haematopoiesis. Isolated cMAT adipocytes form shortly after birth, have an increased proportion of unsaturated fatty acids and are larger in size. rMAT develops throughout life and is histologically defined as single cells interspersed with areas of active haematopoiesis. Isolated rMAT adipocytes have a lipid saturation profile that is similar to WAT adipocytes and are more saturated, and smaller in size, than cMAT. These cells are negatively regulated by 21-day cold exposure. They also fail to form in mice with genetic knockout of Ptrf , but not Cav1 . NC, no change. Scale bar, 50 μm.

Tavassoli 11 , in 1976, demonstrated the presence of two different types of adipocytes in rabbit bone marrow—those that stain with performic acid Schiff (PFAS) and those that do not. The stain reaction is thought to rely on oxidation of the ethylenic linkages in unsaturated fats to aldehyde and processing with Schiff’s reagent to generate a red/purple colour, although this mechanism is controversial 37 . Inspired by Tavassoli’s 11 PFAS stain, we found that rMAT and cMAT have distinct lipidomic profiles ( Fig. 5 and Supplementary Data 1 ). In addition, despite the histological similarities between WAT and cMAT, the lipid composition of WAT more closely mirrors that of rMAT, suggesting that lipid metabolism in WAT and rMAT adipocytes may be similar.
Coordinate regulation by cold exposure and similarities in Pparg , Cebpa and Cebpb gene expression between rMAT and WAT further support this hypothesis. Of note, the increase in cMAT unsaturation is actually the opposite of what we expected based on the proposed mechanism of the PFAS stain. This is likely due to the historic debate surrounding the stain, which in one paper from 1970 was characterized as ‘useless in lipid histochemistry’ 37 . Regardless, it led us to uncover a highly conserved difference between rMAT and cMAT in rats that, based on indirect evaluation with 1 H-MRS, appears to extend to human MAT. These findings have implications for diseases including osteoporosis, which has been associated with a decrease in MAT unsaturation 38 . A shift in marrow fat composition to higher levels of saturated lipid has also been correlated with fragility fractures in postmenopausal women 39 . For example, palmitate is lipotoxic to osteoblasts and impairs mineralization 40 . As a proportion of total lipid, palmitate is enriched in rMAT relative to cMAT. In contrast, palmitoleate, which is enriched in cMAT, has been identified as a secreted adipose tissue-derived lipid hormone with the capacity to stimulate muscle insulin action and suppress hepatosteatosis 41 . Our current analysis does not discriminate between lipid types (for example, triacylglycerols versus phospholipids); hence, future work is needed to examine subcellular localization and secretion of fatty acids, in addition to other mediators, by rMAT and cMAT, and to quantify their impact on local and distant tissues. In the introduction, we highlighted the unresolved controversy that exists regarding the relationship between MAT and bone. It is notable that the reports that correlate MAT accumulation with low bone mineral density, decreased bone formation and bone loss generally analyse rMAT-enriched sites, including the proximal femur, hip and lumbar spine (reviewed in ref. 2 ). 
In contrast, studies demonstrating resistance to bone loss at sites of high MAT are all based on cMAT-enriched areas including the distal tibia and tail vertebrae 7 , 8 , 9 . In this manuscript, we explored changes in trabecular and cortical architecture and compared our findings with MAT volume and its three-dimensional distribution. During development in B6 and C3H mice, we observed polarization of rMAT towards the medial marrow space in the proximal tibia and to the medial/posterior endocortical surface at the mid-tibia ( Fig. 2 ). MAT accumulation from 12 to 56 weeks of age correlated negatively with trabecular number and positively with trabecular thickness in both strains. Conversely, extensive loss of MAT in the proximal tibia in mice undergoing 21-day cold exposure failed to uniformly impact trabecular or cortical parameters ( Supplementary Fig. 1 ). In our genetic models, knockout of Cav1 left both MAT and bone unchanged. In contrast, developmental inhibition of rMAT accumulation in the female Ptrf knockout mice was correlated with increased trabecular number in the proximal tibia. With this phenotype, it is tempting to conclude that rMAT loss is necessary for trabecular gain; however, in this same model, the increase in trabecular number was actually more pronounced in the lumbar vertebrae—a skeletal site in the mouse that has little to no MAT. What then can we conclude about the relationship between MAT and bone? It is certainly of note that developmental polarization of MAT along the medial and posterior surfaces of the cortical bone implies that rMAT may be related to cortical drift patterns during development. Similarly, logic dictates that accumulation of MAT in the proximal tibia must occur at the expense of either haematopoiesis or bone, since the size of the space within the skeleton is finite. It would not be unreasonable to subsequently assume that these components have an inherent ability to regulate one another. 
What we truly need, however, are animal models in which we can specifically regulate rMAT and cMAT in vivo . Identification of Ptrf knockout as a selective mediator of rMAT loss ( Fig. 8 ) is one step towards generation of a genetic model of rMAT ablation. Future quantification of MAT, expanded beyond conventional methods to include both rMAT and cMAT, in currently available genetic models will undoubtedly reveal additional targets. Although depletion of MAT has been proposed as a strategy to combat osteoporosis 42 , the function of rMAT and cMAT must be clarified before removal of MAT populations is attempted. The highly defined accumulation of cMAT early in vertebrate development and its robust resistance to dissolution implies an important function for this adipose depot—one that may go beyond the skeleton 1 . In contrast, rMAT adipocytes are more closely situated in areas of high bone turnover and are better positioned to actively influence haematopoiesis and/or skeletal remodelling. Although our data provide a working definition of rMAT and cMAT, and highlight the need to explore these cells in more detail ( Fig. 10 ), there are many questions that remain. Our ability to definitively address fundamental differences in marrow adipocytes and their role locally in the skeletal microenvironment, or systemically, as a component of whole-body metabolism, depends on future development of targeted animal models and continued clinical investigation.

Methods

Rodents

Where they were utilized, animal procedures were approved by the animal use and care committees at the University of Michigan, Maine Medical Center Research Institute and/or Boston University. Animals were housed at 22 °C on a 12-h light/dark cycle unless otherwise indicated.

Development. Male C57BL/6J (Jackson Labs, stock: 000664) and C3H/HeJ (Jackson Labs, stock: 000659) mice were euthanized at 1, 4, 12 or 56 weeks of age and tissues were collected for analysis.
The 12- and 56-week-old C3H animals are the same as the control groups for the C3H cold exposure experiment outlined below.

Cold exposure. At 8 or 52 weeks of age, 10 male C3H/HeJ (Jackson Labs, stock: 000659) mice were placed individually into pre-cooled cages with bedding, food and water in a room held at 18 °C. Littermate control mice ( n =10 per strain) were held in identical conditions at room temperature ( ∼ 22 °C). After 1 week at 18 °C, the cold room was adjusted to 4 °C and maintained at this temperature for an additional 3 weeks. Control mice were held at 22 °C for a total of 4 weeks (concurrently). Rectal core body temperature of control and cold-exposed mice was monitored daily using a Type T thermocouple rectal probe (RET-3, Physitemp Instruments, Inc., Clifton, NJ, USA) with a MicroTherma 2T hand-held thermometer (ThermoWorks, Inc., Lindon, UT; cat. no. THS-227-193). After 4 weeks, all mice were killed and tissues were collected for analysis. Given the length of the intervention (21 days), we pre-scanned the non-decalcified tibia bones to calculate the marrow volume. After decalcification and osmium stain, the bones were re-scanned and the MAT volume was normalized to marrow volume in each region of interest to correct for any changes in the size of the tibiae between groups.

Cav1 and Ptrf knockout mice. Cav1 and Ptrf knockout mice were generated previously 26 , 31 . Homozygous Cav tm1Mls/J mice with knockout of Cav1 on a mixed background (Jackson Labs, stock: 004585) were crossed with B6129SF2/J controls (Jackson Labs, stock: 101045). The resulting Cav1 +/− heterozygotes were crossed and the male homozygous offspring were euthanized for analysis at 16 weeks of age. The homozygous male and female Ptrf −/− and their wild-type control littermates were generated from breeding Ptrf +/− heterozygotes and used for the present study at 16–17 weeks of age. The Ptrf −/− mice had previously been backcrossed to C57BL/6J mice for at least nine generations.
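The normalization step described above can be made concrete: MAT volume in each region is divided by the pre-decalcification marrow volume of the same region, and group effects are then reported as percent change versus controls. A sketch with hypothetical helper names:

```python
def mat_fraction(mat_volume, marrow_volume):
    """MAT volume normalized to the pre-decalcification marrow volume,
    correcting for differences in tibia size between groups."""
    return mat_volume / marrow_volume

def percent_change(treated_mean, control_mean):
    """Percent change of a treated group (e.g., 4 deg C cold exposure)
    relative to its room-temperature controls."""
    return 100.0 * (treated_mean - control_mean) / control_mean
```

For example, a cold-exposed group whose mean MAT fraction fell from 1.0 (arbitrary units) to 0.24 would be reported as a 76% decrease, matching how the regional losses are expressed in the Results.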
Marrow fat quantification by osmium staining and CT Mouse bones were stained with osmium tetroxide for analysis of marrow fat, with slight modification from ref. 17 , as follows. Bones were fixed in 1.5 or 2.0 ml microtubes for 24–48 h in 10% neutral-buffered formalin (VWR, Radnor, PA; cat. no. 16004-128), washed with water and decalcified in 14% EDTA, pH 7.4, for 14 days. After washing again with water, 600 μl Sorensen’s phosphate buffer (pH 7.4) was added to one bone (femur or tibia) in a 1.5-ml microtube. (Note: all subsequent steps must be performed in the fume hood.) Osmium tetroxide solution (4%; 200 μl; Electron Microscopy Services, Hatfield, PA; cat. no. 19170) was added to each tube to make a 1% solution. Bones were stained in the fume hood for 48 h at room temperature. The osmium solution was carefully removed to a small liquid waste container that had been filled with corn oil to ∼ 25% of its volume. Used pipet tips were ‘rinsed’ of active osmium tetroxide by pipetting corn oil. All tips and tubes were discarded as osmium solid waste. Bones were washed, in the same tube, by incubating in 1 ml of Sorensen’s buffer for 3 h at room temperature. This was repeated twice and the last wash was left in the hood overnight. This waste was disposed of as indicated above. Stained bones were then moved to a fresh set of 1.5 ml microtubes containing 1 ml Sorensen’s buffer each. The used tubes were discarded as solid osmium waste. At this point, the bones and tubes were removed from the fume hood and used for CT. MicroCT . Specimens were embedded in 1% agarose and placed in a 19-mm diameter tube. The length of the bone was scanned using a μCT system (μCT100, Scanco Medical, Bassersdorf, Switzerland). Scan settings were as follows: voxel size 12 μm (all except Fig. 1a , 1-week bones at 10 μm), medium resolution, 70 kVp, 114 μA, 0.5 mm Al filter and integration time 500 ms. Density measurements were calibrated to the manufacturer’s hydroxyapatite phantom.
Analysis was performed using the manufacturer’s evaluation software and a threshold of 400 for MAT. NanoCT . Samples were scanned at 2 μm voxel size, 90 kV, 90 μA and 1,500 ms exposure time, with a total scan time of 73 min, on a nanotom-s (phoenix|X-ray, GE Measurement & Control; Wunstorf, Germany). DICOM image files were opened in ImageJ 43 for size analysis of individual adipocytes in two dimensions using a ‘virtual histology’ approach ( Supplementary Fig. 1 ). The area of 300–500 adipocytes was measured per sample 44 . A bin size of 250 was used to generate a size-distribution histogram for each adipocyte type. The average adipocyte volume was estimated from the average adipocyte area and compared with the total MAT volume (as determined by μCT) to determine the number of adipocytes in a region of interest. Histology Samples were fixed in 10% neutral-buffered formalin and decalcified in 14% EDTA, pH 7.4, before paraffin embedding and haematoxylin and eosin staining. Where indicated, osmium-stained bones (prepared as detailed above) were submitted and processed in the same way. Human marrow unsaturation This study was approved by the Partners Healthcare Institutional Review Board and complied with Health Insurance Portability and Accountability Act guidelines. Written informed consent was obtained from all subjects after the nature of the procedure had been fully explained. We studied five women (mean age: 33±10 years) with a mean body mass index (BMI) of 24.8±10 kg m−2. All subjects underwent proton MRS (1H-MRS) of the proximal femoral metaphysis, the mid-femoral and tibial diaphyses, and the distal tibial metaphysis to determine MAT content and composition using a 3.0-T MR imaging system (Siemens Trio, Siemens Medical Systems, Erlangen, Germany). Single-voxel 1H-MRS data were acquired using a point-resolved spatially localized spectroscopy pulse sequence without water suppression, as previously described 18 .
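The adipocyte-count estimate in the nanoCT analysis above (mean cell volume from mean 2-D area, then count = total MAT volume / mean cell volume) can be sketched as follows. This is a simplification assuming roughly spherical adipocytes whose measured section area approximates an equatorial circle; the function name and values are hypothetical, not the authors' exact procedure:

```python
import math

def adipocyte_stats(areas_um2, mat_volume_um3):
    """Estimate mean adipocyte volume and adipocyte number from 2-D areas.

    `areas_um2` is the list of per-cell section areas measured in ImageJ;
    `mat_volume_um3` is the total MAT volume of the region of interest
    from microCT. Assumes spherical adipocytes (an assumption made here
    for illustration).
    """
    # Equivalent circular diameter for each measured section area
    diameters = [2.0 * math.sqrt(a / math.pi) for a in areas_um2]
    # Sphere volume from each equivalent diameter
    volumes = [(math.pi / 6.0) * d ** 3 for d in diameters]
    mean_volume = sum(volumes) / len(volumes)
    # Estimated number of adipocytes in the region of interest
    n_cells = mat_volume_um3 / mean_volume
    return mean_volume, n_cells
```

In practice the area histogram (bin size 250) would be built from the same `areas_um2` list before the volume conversion.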
The coefficient of variation for bone marrow fat quantification was 5%. Fitting of all 1H-MRS data was performed using LCModel (version 6.3-0K) as previously described 18 . A customized fitting algorithm for bone marrow analysis provided estimates of total marrow lipid content (lipid peaks at 0.9, 1.3, 1.6, 2.0 and 5.3 p.p.m. combined). The unsaturation index was determined as the ratio between the olefinic resonance at 5.3 p.p.m., an estimate of fatty-acid unsaturation bonds, and total lipid content. Adipocyte isolation for lipidomics Adipocytes were isolated from rat WAT and MAT using a modified collagenase digestion protocol as described below 19 . Older female rats were obtained from the University of Michigan rat recycling programme, comprising 1-year-old female high-capacity runner rats ( N =3; ref. 20 ) and ∼ 8-month-old female Sprague–Dawley rats ( N =5). Sixteen-week-old male Sprague–Dawley rats were obtained from Charles River Laboratories (strain code: 400). Rats were anaesthetized with isoflurane in a drop jar and euthanized by decapitation. The processing for each sample type is outlined in detail below. All protocols were performed simultaneously and the adipocytes from each tissue underwent methanol–chloroform extraction for total lipid at the same time (±5 min). White adipose tissue . Adipose tissues were removed and placed in warm Krebs-Ringer HEPES (KRH) buffer, pH 7.4 (10 mM HEPES, 120 mM NaCl, 1.2 mM KH2PO4, 1.2 mM MgSO4, 2.5 mM CaCl2, 15 mM NaHCO3, 4.8 mM KCl, 1.0 g l−1 D-glucose and 500 nmol adenosine), that had been pre-equilibrated overnight in an incubator at 37 °C, 5% CO2 and readjusted to pH 7.4. Washed adipose tissue pieces totalling ∼ 1 g were minced in 10 ml KRH containing 1 mg ml−1 collagenase type I (Worthington Biochemical Corp., Lakewood, NJ; cat. no. 4197) and 3% fatty-acid-free BSA (Calbiochem; cat. no. 126575) in a 50-ml conical tube and placed in a shaking water bath at 100 r.p.m., 37 °C for 45–60 min.
Digested tissue was pulled gently through a 10-ml polypropylene Luer-lock syringe (no needle) three times to complete disruption and then filtered through a 100-μm cell strainer (Fisherbrand, Pittsburgh, PA; cat. no. 22363549) into a fresh 50-ml polypropylene conical tube. Tibial cMAT . Both tibiae were removed and cleaned of muscle and tendon using gauze. A rotary power tool (Dremel, Robert Bosch Tool Co, Addison, IL) with a Dremel 545 Diamond cutting wheel was used to horizontally bisect the tibia at the base of the tibia/fibula junction. The distal portion was inverted into a 1.5-ml polypropylene microtube containing a hollow spacer and centrifuged at 3,000 g to extrude the marrow. The bone was removed and discarded, and the distal tibial marrow was then processed in the same manner as the WAT, described above. Femur/tibia rMAT . Both femurs were isolated and cleaned, and the ends were removed with the rotary tool to expose the marrow cavity. The femurs and the proximal tibiae were inverted into 1.5 ml microtubes and centrifuged at 3,000 g to separate the marrow. The bones were discarded. Gentle pipetting was used to combine and resuspend the proximal marrow in KRH containing 1 mg ml -1 collagenase and 3% BSA in a 50-ml conical tube. The suspension was then incubated in a shaking water bath at 100 r.p.m., 37 °C for 15–20 min to liberate the rMAT adipocytes. Vertebral cMAT . The most proximal 10 tail vertebrae were separated and some of the surrounding muscle and tendon were removed with gauze. The vertebrae were added to a 50-ml conical tube with 2 × the volume of KRH+1 mg ml -1 collagenase and 3% BSA. The tube was then incubated in a shaking water bath at 100 r.p.m., 37 °C for 20 min, with vigorous shaking by hand every 5 min to help dislodge remaining tissue on the outside of the vertebrae. After 20 min, the vertebrae solution was poured into a 10-cm dish. The vertebrae were quickly cleaned with gauze to remove any remaining soft tissue. 
Each vertebra was then bisected longitudinally with a diagonal cutter and put into a fresh 50-ml conical tube containing 2 × the volume of KRH/collagenase/BSA solution. The bisected vertebrae were incubated in a shaking water bath at 100 r.p.m., 37 °C for an additional 20–30 min to liberate the cMAT adipocytes. Vertebral rMAT . Eight lumbar vertebrae were isolated and cleaned with gauze. The processing then continued as described for the vertebral cMAT above. Final processing for all adipocyte types. After filtration, the conical tubes were centrifuged at 400 g for 1 min to pellet the stromal vascular fraction and float the adipocytes. The pellet and the majority of the infranatant were carefully removed with a glass pipet and suction bulb. A plastic 1,000 μl pipet tip was used to resuspend the adipocytes and transfer 300 μl of liquid containing 0.1–1.0 mg of cells to a 24-well-plate-size transwell insert with 8 μm pores (Corning Inc., Corning, NY; cat. no. 3422). Approximately 90% of the liquid was removed by pressing the transwell membrane on a piece of dry paper towel. The cells in the insert were then washed twice in this manner with fresh KRH (no collagenase, no BSA). After the final wash and liquid depletion, the cells in the insert were collected in 300 μl of water and transferred immediately to a borosilicate glass tube for lipid extraction as described below. Lipidomic analyses of rat adipocytes Lipid extraction . Lipids from the adipocyte samples were extracted following essentially the Bligh and Dyer 45 method of solvent partition. A typical extraction consists of suspending the cells in a borosilicate glass tube in 0.3 ml of water, followed by adding 1.125 ml of a mixture of chloroform–methanol (1:2). The mixture was then vortexed to disrupt the cells. The samples were further treated with 0.375 ml each of chloroform and NaCl (0.9%) solution, followed by vortexing and centrifugation at 4 °C, 6,500 g , for 7 min.
The lower organic (chloroform) layer containing the total lipids was separated out and saved at −20 °C for further use. Preparation of methyl esters with boron trifluoride–methanol and purification . The fatty-acid components of the lipids were derivatized into their methyl esters via trans-esterification with boron trifluoride–methanol 46 , with slight modification, as follows. The solvents of the above lipid extract were removed under nitrogen. To the dry residue, 2 ml of boron trifluoride–methanol (14% solution) and 10 μl of 4 mM heptadecanoic acid (C17) as an internal standard were added, and the tubes containing the mixture were closed under nitrogen and incubated at 68 °C for 3–3.5 h. The methyl esters were extracted by adding 2 ml of hexane and 1 ml of water, mixing, followed by centrifugation. The hexane layer containing the methyl esters was transferred into a separate tube. The solvent was then removed under nitrogen, the methyl esters were re-dissolved into a small volume of hexane and purified by thin-layer chromatography (TLC) using n-hexane–diethyl ether–acetic acid (50:50:2, v/v) as the developing solvent 47 , applying an authentic standard fatty-acid methyl ester (FAME) side by side on the TLC plate. After development, the plates were dried and sprayed with Premulin 48 . The products were identified with respect to the retention factor of the standard (Rf=0.67). The methyl esters were extracted from the TLC powder with diethyl ether, concentrated under nitrogen, re-dissolved in 100 μl of hexane, and the fatty-acid compositions of the lipids were analysed by GC as follows. GC of FAME . Analysis of FAMEs was performed with 1 μl of sample injection by GC on an Agilent model 6890N machine equipped with a flame ionization detector, an autosampler and ChemStation software for data analysis. The GC column used was an Agilent HP 88, 30 m, 0.25 mm I.D. and film thickness 0.20 μm.
Hydrogen was used as the carrier gas and for the flame ionization detector, and nitrogen was used as the makeup gas. Analyses were carried out with temperature programming from 125 to 220 °C. The fatty-acid components in unknown samples were identified with respect to the retention times of standard methyl ester mixtures run side by side. The fatty-acid components were quantified with respect to the known amount of C17 internal standard added and the calibration ratio derived from each fatty acid of a standard methyl ester mixture and the methyl heptadecanoate (C17) internal standard. Adipocyte isolation for quantitative PCR Adipocytes were isolated from two cohorts of animals, 16-week-old male Sprague–Dawley rats and ∼ 8-month-old female Sprague–Dawley rats, as described above. Adipocytes, including rMAT from the proximal tibia and femur, were then isolated from a third cohort of 16-week-old male Sprague–Dawley rats by adding 50 U ml−1 heparin to the collagenase solution. Adipocyte preparations were lysed and processed using Stat60 reagent (Amsbio, Cambridge, MA, USA) to isolate total RNA. Pelleted RNA was resuspended in water and 100 μg of total RNA was reverse-transcribed to cDNA using TaqMan RT reagents (Applied Biosystems, Carlsbad, CA, USA). Quantitative PCR was performed using qPCRBIO SyGreen 2 × mix, Hi-Rox, on an Applied Biosystems real-time PCR detection system (Applied Biosystems). Gene expression was calculated based on a cDNA standard curve within each plate and normalized to the expression of TATA-binding protein (TBP) messenger RNA. Primer sequences are presented in Supplementary Table 1 . Statistics GraphPad Prism software was used to perform statistical tests. Tests including a two-tailed, homoscedastic t -test, a non-parametric Mann–Whitney test, two-way analysis of variance with Sidak’s multiple comparisons test and one-way analysis of variance with Tukey’s multiple comparisons test were applied as indicated in the figure legends.
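The internal-standard quantification of FAMEs described above reduces to a ratio calculation against the C17 peak; a minimal sketch, in which the function name, peak areas and calibration ratios are all hypothetical:

```python
def quantify_fame(peak_areas, c17_area, c17_nmol, calibration):
    """Quantify each FAME against the C17 (methyl heptadecanoate) standard.

    `peak_areas` maps FAME name to its GC peak area; `c17_area` and
    `c17_nmol` are the peak area and known amount of the internal
    standard; `calibration` maps FAME name to its response ratio
    relative to C17, derived from the standard methyl ester mixture.
    Returns the estimated amount of each FAME in the same units as
    `c17_nmol`.
    """
    return {fame: (area / c17_area) * calibration[fame] * c17_nmol
            for fame, area in peak_areas.items()}
```

For example, a palmitate peak twice the area of the C17 peak, with a response ratio of 1.1 and 40 nmol of standard added, would be reported as 88 nmol.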
Principal components analysis was performed using MetaboAnalyst 21 . When possible, sample size was determined based on the effect size of preliminary data, and data analysis was performed by an investigator who was blinded to the sample groups. Additional information How to cite this article: Scheller, E. L. et al . Region-specific variation in the properties of skeletal adipocytes reveals regulated and constitutive marrow adipose tissues. Nat. Commun . 6:7808 doi: 10.1038/ncomms8808 (2015). Change history 08 December 2016 A correction has been published and is appended to both the HTML and PDF versions of this paper. The error has not been fixed in the paper.
While most of us worry about the fat cells building up on the fleshy parts of our bodies, scientists have started to pay serious attention to another kind of fat cell deep inside our bones, in what's called the marrow. Today, they've published important new clues about this little-understood kind of fatty tissue, including the discovery that there are two different types. Their results pave the way for more research on how marrow fat influences the rest of the body, and its role in a range of diseases including osteoporosis. In a paper published in Nature Communications, the team from the University of Michigan and other institutions describes research in rodents, and a small group of women, that led them to conclude that there are two kinds of fat cells in what scientists call marrow adipose tissue, or MAT. The findings deepen understanding of MAT, which makes up about 70 percent of the marrow in the adult human skeleton. They also make it clear that researchers need to take different MAT types into account when studying its role in disease. Why MAT matters Scientists have come to realize MAT plays a key role in our body's metabolism. MAT levels rise in many different diseases, from anorexia to type 1 diabetes, and also go up as we age and as bones get brittle and break down in osteoporosis. "Reducing marrow fat has been mentioned as a target for osteoporosis therapy, but before such approaches go further we need to get a more targeted understanding of MAT and the effects of potential intervention," says Erica Scheller, Ph.D., DDS, who is transitioning from a postdoctoral fellowship at the U-M Medical School to a faculty position at Washington University in St. Louis. Scheller worked with senior author and U-M physiology professor Ormond MacDougald, Ph.D., and others to determine that MAT actually exists in two forms: regulated and constitutive.
Their detailed analysis shows that the two kinds of cells store different types of fat molecules, that their genetic profiles differ in very specific ways, and that they develop at different times in the life cycle and interact in different ways with the blood cell formation process that also happens in the marrow. Though the researchers can't yet see whether what they saw in mice holds completely true for humans, their study includes data from five women who agreed to let the researchers study the fat composition of their leg bone marrow using special scanners. Just as in the mice, the further down the leg bone, the more unsaturated fat there was inside the marrow. This is the first evidence in humans that two types of MAT exist, and the team will continue to study human bones. "We're definitely finding that MAT is more complex than anyone originally thought, and that we have a long way to go in understanding it," says MacDougald, who is the John A. Faulkner Collegiate Professor of Physiology in the Department of Molecular and Integrative Physiology, and a professor of Internal Medicine in the Metabolism, Endocrinology & Diabetes division. "We have a lot of it, and we need to do more to understand why it's there and what it's doing, and how it changes in different diseases." From here to tomorrow MacDougald, Scheller and their colleagues will continue to study the two forms of MAT in further studies in mice, and in bones removed from patients having hip replacement surgery and limb amputations. Getting healthy bone samples is harder, but over time they hope to flesh out the full picture of how the two forms of MAT form and act. The techniques they developed in their lab, which enable scientists to detect the characteristics of MAT, should be useful to scientists around the world studying bone marrow. And the findings they've made should make MAT composition a key marker for scientists who study blood cell formation, bone biology and metabolism.
10.1038/ncomms8808
Medicine
Micro-map of hippocampus lends big hand to brain research
Jessie Kulaga-Yoskovitz et al. Multi-contrast submillimetric 3 Tesla hippocampal subfield segmentation protocol and dataset, Scientific Data (2015). DOI: 10.1038/sdata.2015.59
http://dx.doi.org/10.1038/sdata.2015.59
https://medicalxpress.com/news/2015-12-micro-map-hippocampus-big-brain.html
Abstract The hippocampus is composed of distinct anatomical subregions that participate in multiple cognitive processes and are differentially affected in prevalent neurological and psychiatric conditions. Advances in high-field MRI allow for the non-invasive identification of hippocampal substructure. These approaches, however, demand time-consuming manual segmentation that relies heavily on anatomical expertise. Here, we share manual labels and associated high-resolution MRI data (MNI-HISUB25; submillimetric T1- and T2-weighted images, detailed sequence information, and stereotaxic probabilistic anatomical maps) based on 25 healthy subjects. Data were acquired on a widely available 3 Tesla MRI system using a 32-channel phased-array head coil. The protocol divided the hippocampal formation into three subregions: subicular complex, merged Cornu Ammonis 1, 2 and 3 (CA1-3) subfields, and CA4-dentate gyrus (CA4-DG). Segmentation was guided by consistent intensity and morphology characteristics of the densely myelinated molecular layer together with few geometry-based boundaries flexible to overall mesiotemporal anatomy, and achieved excellent intra-/inter-rater reliability (Dice index ≥90/87%). The dataset can inform neuroimaging assessments of the mesiotemporal lobe and help to develop segmentation algorithms relevant for basic and clinical neurosciences. Design Type(s): repeated measure design • digital curation. Measurement Type(s): nuclear magnetic resonance assay. Technology Type(s): MRI Scanner. Sample Characteristic(s): Homo sapiens • hippocampal formation. A machine-accessible metadata file describing the reported data is available (ISA-Tab format). Background & Summary The hippocampus has been a focus of neuroscience research for decades. Highly segregated connectional properties have promoted its use as a model system.
The hippocampus plays an important role in multiple cognitive processes, particularly declarative memory 1 , 2 ; its structural compromise is a hallmark of prevalent neurological and psychiatric disorders, such as temporal lobe epilepsy 3 , Alzheimer’s disease 4 , 5 , depression 6 , and schizophrenia 7 . Prior to the advent of sophisticated histological staining techniques 8 , the hippocampal formation was described as a single entity despite its complex histo-morphology. Since the description by Ramon y Cajal 9 , several histological subdivisions have been proposed 10 – 12 . Similarly, neuroimaging studies have generally considered the hippocampus as a single structure, constrained by limited spatial resolution 13 . Developments in high-field MRI at 3 Tesla and beyond, together with the use of phased-array head coils, offer new opportunities to appraise its internal structure by unveiling strata rich in white matter, and improved identification of the hippocampal sulcus, which separates Cornu Ammonis (CA) and subiculum from the dentate gyrus (DG). Paralleling advances in hardware, a number of studies have provided MRI-based guidelines to manually segment hippocampal subfields 14 – 23 . While substantial progress has been made, challenges remain, particularly when attempting to separate individual CA subfields from one another, which compromises reliability within and across analysts. From a practical perspective, manual segmentations require anatomical expertise and are often prohibitively time-consuming. Here, we share a dataset containing manual segmentations of hippocampal subfields together with submillimetric multi-spectral images in 25 healthy individuals. To facilitate local implementation and independent verification, we share detailed MR sequence information as well; importantly, all data were acquired in a clinically-feasible scan time on a widely available 3 Tesla MRI system. 
Opting for high reliability, segmentations were based on a protocol that divided the hippocampal formation into consistently identifiable subregions, guided by intensity and morphology of the densely myelinated molecular layer, together with few geometry-based boundaries flexible to overall mesiotemporal anatomy. Specifically, we combined presubiculum, parasubiculum, and subiculum proper into a single label (subiculum), joined CA1, 2, and 3 (CA1-3), and merged CA4 with the DG (CA4-DG). While segmentation relied primarily on T1-weighted (T1w) data, T2-weighted (T2w) images offered additional guidance. We provide the full set of multispectral images in high-resolution native and stereotaxic (MNI152) space, the manual labels, together with a probabilistic atlas that can inform functional and structural imaging assessments of the hippocampal formation. Moreover, our datasets can be used to develop new protocols, validate existing ones and design automated algorithms relevant for basic as well as clinical neurosciences. Methods Participants We studied 25 healthy individuals (12 males; 21–53 years, mean±s.d. age=31.2±7.5 years; Table 1 ), recruited through advertisement. All participants had normal or corrected-to-normal vision; none of them suffered from neurological, psychiatric, or somatic diseases. The Ethics Committee of the Montreal Neurological Institute and Hospital approved the study and written informed consent was obtained from all participants in accordance with the standards of the Declaration of Helsinki. Participants gave their written informed consent prior to scanning and received a monetary compensation. Table 1 Samples, subjects and data outputs. Full size table Scan parameters MRI data were acquired on a 3 Tesla Siemens TimTrio scanner using a 32-channel head coil. 
We obtained two sets of T1w images: a 3D magnetization-prepared rapid-acquisition gradient echo (MPRAGE) with millimetric resolution (repetition time (TR)=2,300 ms; echo time (TE)=2.98 ms; inversion time (TI)=900 ms; flip angle=9°; matrix size=256×256; field-of-view (FOV)=256×256 mm 2 ; 176 sagittal slices with 1 mm slice thickness resulting in 1×1×1 mm 3 voxels; iPAT=2, acquisition time=5.30 min), and a submillimetric 3D MPRAGE (TR=3,000 ms; TE=4.32 ms; TI=1,500 ms; flip angle=7°; matrix size=336×384; FOV=201×229 mm 2 ; 240 axial slices with 0.6 mm slice thickness resulting in 0.6×0.6×0.6 mm 3 voxels; acquisition time=16.48 min; to increase the signal-to-noise ratio, two identical scans were acquired, motion corrected, and averaged into a single volume). T2w images were obtained using a 2D turbo spin-echo sequence (TR=10,810 ms; TE=81 ms; flip angle=119°; matrix size=512×512; FOV=203×203 mm 2 , 60 coronal slices angled perpendicular to the hippocampal long axis, slice thickness=2 mm, resulting in 0.4×0.4×2.0 mm 3 voxels; acquisition time=5.47 min). Pre-processing MRI data files were converted from DICOM to MINC (*.mnc) format using dcm2mnc with dicom header anonymization. Images underwent automated correction for intensity non-uniformity and intensity standardization 24 . Millimetric and submillimetric T1w MRI volumes were linearly registered to the high-resolution MNI-ICBM152 template 25 , 26 . T2w images were linearly registered to the millimetric T1w MRI in native space; the resulting transformation matrix was concatenated with the matrix that mapped the millimetric T1w image to the MNI space, thereby linearly mapping T2w images to this template. During the final registration of submillimetric T1w and T2w data to MNI space, images were resampled to a resolution of 0.4×0.4×0.4 mm 3 , yielding a voxel volume of 0.064 mm 3 . 
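The matrix concatenation described above, where the T2w-to-native-T1w transform is combined with the native-T1w-to-MNI transform so that T2w images map directly to MNI space, is a composition of affine transforms. A minimal sketch with row-major 4×4 matrices; the matrix values in the example are hypothetical:

```python
def concat_affine(a, b):
    """Compose two 4x4 affine transforms: apply `b` first, then `a`.

    With `b` as the T2w-to-T1w(native) matrix and `a` as the
    T1w(native)-to-MNI matrix, the result maps T2w voxel coordinates
    directly to MNI space. Matrices are row-major lists of lists.
    """
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Composing a pure scaling with a pure translation in this order scales both the axes and the translation offsets, which is the expected behaviour when chaining registrations.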
To reduce interpolation artifacts given the anisotropic resolution of the T2w data, we applied a non-local up-sampling method that recovers high-frequency information using a data-adaptive patch-based reconstruction together with a subsampling coherence constraint 27 . MNI-space structural scans were subsequently anonymized by zeroing out the voxels in the vicinity of the facial surface, teeth, and auricles following a previously described procedure 28 . For data sharing, images were converted to NIfTI (*.nii) format using mnc2nii. Please see Fig. 1 for a schematic overview of the preprocessing steps and data quality. Figure 1: Dataset: schematic illustration of image acquisition, native data, image processing and final processed data. Full size image Protocol description A single rater (JKY), blinded to case identities, carried out all segmentations using a 3D viewer ( ). Subfield segmentation took approximately 16 h per individual (8 h per hemisphere). Boundaries were based on anatomical descriptions of the hippocampus by Duvernoy 29 and Insausti 30 . As spatial relationships between subfields vary along the hippocampal long axis, landmarks are separately described for the hippocampal head ( Fig. 2a–e ), body ( Fig. 2f ), and tail ( Fig. 2g–j ). These segments were defined as in our previous protocol 31 . Figure 2: Anatomical boundaries of hippocampal subfields on T1- and T2-weighted MRI. Sections displaying critical landmarks are shown. ( a , j ) are the most rostral and caudal coronal sections. ( a ) The rostral-most tip of the hippocampus is composed of the subiculum; at this level, the alveus surrounds the subiculum, separating it from the overlying amygdala (AM). ( b ) When CA1 first becomes visible, it runs parallel to the subiculum; the structures are separated by the subicular molecular layer (arrow).
( c ) Vertical digitations of CA1-3 (arrowheads point to cavities within the hippocampal sulcus; x indicates the ambient cistern); the supero-lateral subicular interface with CA1 is drawn along a line following the hippocampal sulcus, directed towards the fundus of the collateral sulcus (asterisk). ( d ) The rostral-most portion of CA4-DG is set at the section where the medial portion of the DG, the margo denticulatus, becomes visible (arrowhead). ( e ) Junction between head and body, at the level of the uncal apex (asterisk). ( f ) Hippocampal body; the arrow points to the molecular layer of the subiculum. ( g ) Rostral portion of hippocampal tail: the crus fornix is fully visible and well demarcated from the thalamus (Th). ( h ) The caudal slice of the subiculum is set to the posterior-most section on which the thalamus can be identified. ( i ) Middle segment of the tail. The subiculum is replaced by CA1-3, at the level at which the crus fornix fuses with the splenium (Sp) of the corpus callosum. ( j ) Terminal segment of the tail. ( k) Sagittal hippocampal section displaying planes of the coronal cuts. ( l ) 3D surface rendering of hippocampal subfields with a coronal cut at the level of the body. On coronal sections, the orientation of the hippocampal body varies across individuals, modifying the spatial relationships between subiculum and CA1. In d , the hippocampus is oriented clockwise. In e , it is oriented counter-clockwise and in f it has a horizontal position. The red line follows the slope of the superior border of the subiculum, the solid white line represents the horizontal axis, and the dashed white line is placed at the boundary between subiculum and CA1. Full size image Segmentations were primarily performed on coronal T1w images, with cross-referencing to sagittal/axial views. T2w data eased the identification of the densely myelinated and thick molecular layer of the subiculum (forming its superior border). 
This layer is hyperintense on T1w and hypointense on T2w images ( Fig. 2b–i, k ); it is contiguous with, but distinct from the thinner molecular layer of CA1 ( ref. 30 ). The second landmark is the molecular layer of the DG and that of CA fused across the vestigial hippocampal sulcus; this ensemble is visible as a T1w-hyperintense/T2w-hypointense band ( Fig. 2c–i ). The molecular layers, along with residual vascular cavities that follow the sulcal route, consistently appear on T2w images and separate the DG from the subiculum (inferiorly and medially) and the CA (inferiorly, laterally, and superiorly). We included alveus and fimbria in the CA1-3 label. a) Hippocampal head The hippocampal head includes the subiculum, CA1-3, and small portions of the DG. Its rostral-most section is composed of the subiculum only 30 ( Fig. 2a ). Here, the alveus surrounds the subiculum, separating it from the overlying amygdala; cross-referencing to the sagittal view confirmed this boundary. The inferior subicular boundary is formed by parahippocampal white matter running along the entire rostro-caudal extent of the hippocampus. Perforant projections from the entorhinal cortex to the subiculum occasionally blurred this boundary; in this case, we identified the subiculum by cross-referencing to axial/sagittal views. As the exact boundary between subiculum and infero-medial entorhinal cortex cannot be visualized on MRI, it was defined by extending a straight line along the gray-white matter border at the crown of the parahippocampal gyrus until it reached the cerebro-spinal fluid in the ambient cistern 32 . When CA1 first becomes visible, it runs parallel to the subiculum; for a few slices, the molecular layer of the subiculum separates both structures, with CA1-3 on the top ( Fig. 2b ). 
More posteriorly, given the overlap (rather than sharp transition) between the pyramidal layers of CA1 and subiculum 30 , we drew a line along the hippocampal sulcus pointing towards the fundus of the collateral sulcus ( Fig. 2b,c ). This often-oblique line has been previously used to describe this boundary 19 . The hippocampal head exhibits 3–4 digitations before turning medially to form the posterior uncus. Each digitation encapsulates an extension of the DG. At the level of the head, however, the DG molecular layer that would have allowed for its identification cannot be visualized. For consistency, we merged CA and DG at this level ( Fig. 2c ). We could reliably segment CA4-DG at the junction of head and body, where the medial surface of the DG (known as margo denticulatus) becomes visible ( Fig. 2d ). b) Hippocampal body Head and body interface at the caudal end of the uncus 31 ( Fig. 2e,f ). Here, the margo denticulatus of the DG has a characteristic toothed appearance and is separated from the overhanging fimbria by the fimbriodentate sulcus. Coronally, the orientation of the hippocampal body varies along its rostro-caudal direction both across and within individuals. The term malrotation has been coined to describe this abnormal shifting/rotations of the long hippocampal axis relative to the horizontal plane 33 , 34 , which likely affects the relative boundary between subiculum and CA1. To determine this border, we adapted our guidelines based on the position of the hippocampus on coronal slices: (1) if the left hippocampus was oriented counter-clockwise (clockwise for the right hippocampus), the boundary was defined as the extension of the line corresponding to the superior subicular border ( Fig. 2e ); (2) if the hippocampus was horizontally positioned, the border was defined as a line drawn from the lateral-most point of the subicular molecular layer at a 45 degrees angle until it reached the underlying white matter ( Fig. 
2f ); (3) if the left hippocampus was oriented clockwise (counter-clockwise for the right), the border followed a line drawn from the lateral-most point of the subicular molecular layer towards the fundus of the collateral sulcus ( Fig. 2d ). Inferior and medial boundaries of the subiculum remained the same as in the head. CA and DG form two U-shaped interlocking laminae, one fitting into the other, and separated from each other by the hippocampal sulcus. For consistency, voxels corresponding to the fused molecular layers of the CA1-3 and DG were assigned to CA4-DG. As the CA3-CA4 boundary cannot be resolved on MRI, the superior border of CA4-DG was drawn as the horizontal continuation of the hippocampal sulcus, from its most medially visible point towards the fimbriodentate sulcus. c) Hippocampal tail The junction between body and tail was set as the rostral-most slice at which the crus fornix becomes fully visible ( Fig. 2g ) 31 . In the initial segment of the tail, the CA1-subiculum boundary was determined to be the infero-lateral extension of the superior subicular border ( Fig. 2g,h ). Inferior and medial borders of the subiculum were defined as in the body. In the initial portion of the tail, CA1 is deeply located, hidden by the subiculum; more posteriorly, it appears at the surface of the parahippocampal gyrus, progressively replacing the subiculum. The exact posterior subicular border is not visible on MRI: we consistently chose it to be the posterior-most coronal slice on which the thalamus could be seen ( Fig. 2h,i ), verified on sagittal view. We excluded the isthmus of the cingulate gyrus, which replaces the subiculum in the middle and terminal segments of the tail, by excluding grey matter inferior to the hippocampal sulcus, best visualized sagittally. The hippocampal sulcus separates the DG from the subiculum in the initial segment, and from CA1-3 in the initial and middle segments. 
Furthermore, the fused molecular layers of CA and DG allowed us to visualize the caudal border of the DG on the sagittal view. The posterior hippocampal end belongs to CA1-3 ( Fig. 2j ) and faces the cerebrospinal fluid of the lateral ventricle medially and of the atrium laterally. This boundary was best seen sagittally ( Fig. 2k ). While fimbria and alveus were included in CA1-3, we excluded the crus fornix ( Fig. 2g ). The latter joins the splenium of the corpus callosum. Code availability All MRI preprocessing employed standard routines (non-uniformity correction, intensity normalization, image registration). We used minc tools that are freely available on github ( ). Similar processing can also be achieved using tools provided by other freely available packages, such as FreeSurfer ( ) or FSL ( ). The patch-based up-sampling technique for T2w images is available on P. Coupé’s website ( ). Defacing was based on publicly available code ( ). Data Records The submillimetric 3 Tesla dataset is highly suitable for the development and cross-validation of future manual or automatic segmentation protocols. MRI data and subfield segmentations of all participants, detailed scan parameters, as well as stereotaxic probabilistic maps are available on Dryad ( Data Citation 1 ) and NITRC ( Data Citation 2 ). A README file with a detailed description of the content of all downloads is available there as well. MRI data files were converted from DICOM to MINC format (using dcm2mnc) before processing, and to NIfTI (using mnc2nii) after processing. For every subject, high-resolution T1w and T2w data are available in 0.4 mm isotropic MNI152 space as well as in their native spaces. For registration purposes, the 1×1×1 mm³ T1w data is also provided in native and stereotaxic space. Labels in NIfTI format of the subiculum, CA1-3 and CA4-DG are provided in the high-resolution MNI152 space.
We furthermore provide probabilistic anatomical maps of each subfield in 1×1×1 mm³ MNI152 space. To anonymize data, centre-specific study and participant codes have been removed using an automated procedure. MRI data have been de-faced. All participants were given sequential integer IDs with an ‘S’ prefix. Technical Validation Contrast-to-noise ratio To obtain a quantitative index of MRI data quality, we estimated Contrast-to-Noise ratio (CNR), similar to the approach carried out in a recently published study 21 . In short, an eroded mask of the CA1-3 was compared with an equivalently-sized mask of the temporal lobe white matter inferior to it. The CNR was estimated using the following formula: CNR = (mean(WM) − mean(GM)) / √(var(WM) + var(GM)), where mean(WM) and mean(GM) are the mean intensities in the WM and GM masks, and var(·) is the intensity variance. We calculated the CNR for each subject in native T1w and T2w space, as well as in the MNI space on which segmentations were performed. For native T1w and T2w data, mean±s.d. (range) CNR estimates across the sample were: 3.04±0.23 (2.73–3.48) and 4.42±0.64 (3.29–5.83). For supersampled and MNI space data, corresponding values were 4.74±0.86 (3.27–7.73) and 4.53±0.59 (3.63–5.71). Please see Table 2 for a subject-by-subject listing. Table 2 Contrast-to-noise estimates. Full size table Intra- and inter-rater reliability JKY segmented subfields of 10 hippocampi (5 left, 5 right) from 10 different subjects twice, 6 months apart. We assessed inter-rater reliability by comparing subfield delineations of 10 hippocampi segmented by JKY and another observer (KL), blinded to each other’s segmentation. Reliability was quantified using the Dice overlap index between two labels 35 , D = 2 × |M₁ ∩ M₂| / (|M₁| + |M₂|) × 100%, where M₁ is the first label and M₂ the second; M₁ ∩ M₂ is their intersection; |·| is the volume operator. We also calculated intra-class correlations (ICC).
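Both quality metrics are straightforward to compute from voxel data. The sketch below, a minimal illustration rather than the authors' pipeline, assumes the WM/GM intensities and the two label volumes are available as NumPy arrays (the synthetic data here is invented purely for demonstration):

```python
import numpy as np

def cnr(wm_intensities, gm_intensities):
    """Contrast-to-noise ratio, per the protocol's formula:
    CNR = (mean(WM) - mean(GM)) / sqrt(var(WM) + var(GM))."""
    wm = np.asarray(wm_intensities, dtype=float)
    gm = np.asarray(gm_intensities, dtype=float)
    return (wm.mean() - gm.mean()) / np.sqrt(wm.var() + gm.var())

def dice(m1, m2):
    """Dice overlap (in %) between two binary labels:
    D = 2 * |M1 & M2| / (|M1| + |M2|) * 100."""
    m1 = np.asarray(m1, dtype=bool)
    m2 = np.asarray(m2, dtype=bool)
    return 2.0 * np.logical_and(m1, m2).sum() / (m1.sum() + m2.sum()) * 100.0

# Synthetic example: brighter "WM" voxels vs. darker "GM" voxels
rng = np.random.default_rng(0)
wm = rng.normal(120, 10, 1000)
gm = rng.normal(80, 10, 1000)
print(round(cnr(wm, gm), 2))

# Two overlapping toy labels standing in for two raters' segmentations
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(round(dice(a, b), 1))
```

Note that identical labels give a Dice index of 100%, and the Dice index penalizes both over- and under-segmentation symmetrically, which is why it is a common choice for rater-agreement studies.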
The Dice index quantifies the overlap of two labels geometrically, whereas ICC calculates statistical similarity. To approximate the actual distribution of reliability values, we employed 1,000 bootstrap-based subsamplings and computed 95% confidence intervals. Table 3 displays mean±s.d. as well as bootstrap confidence intervals of Dice indices for individual subfields. Overall, indices were ≥90% and ≥87% for intra- and inter-rater reliability, respectively. The ICC ranged from 0.91 to 0.96 within and from 0.73 to 0.91 between raters. Table 3 Intra- and inter-rater reliability assessment. Full size table Probabilistic anatomical maps For each MNI152-space subfield label, we generated statistical anatomical maps that outline the probability of subfield location across participants ( Fig. 3 ). Figure 3: Statistical probabilistic atlas of hippocampal subfields overlaid on the MNI152 template. Full size image Usage Notes The procedures we employed in this study resulted in a high-resolution 3 Tesla dataset containing submillimetric MRI data in native and MNI152 space, together with manual labels of three hippocampal subfields in MNI152 space. Data are shared in documented standard formats, such as NIfTI or plain text files, to enable further processing in arbitrary analysis environments with no imposed dependencies on proprietary tools. Exam card printouts from the scanner are also available for local implementation of the image acquisition protocol. All processing performed on the released data was carried out with openly accessible software on standard computer workstations. Data are available on a curated open access repository ( Data Citation 1 ) and on NITRC ( Data Citation 2 ). Additional Information How to cite this article: Kulaga-Yoskovitz, J. et al. Multi-contrast submillimetric 3 Tesla hippocampal subfield segmentation protocol and dataset. Sci. Data 2:150059 doi: 10.1038/sdata.2015.59 (2015).
A new detailed map of the hippocampal region of the brain, compiled by researchers at the Montreal Neurological Institute and Hospital-The Neuro at McGill University, is helping the scientific community accelerate research and develop better treatments for patients suffering from epilepsy and other neurological and psychiatric disorders. The team of researchers, led by Dr. Neda Bernasconi, a neuroscientist specializing in the neuroimaging of epilepsy and co-founder of the Neuroimaging of Epilepsy Laboratory (NOEL) at The Neuro, set out to build and share a detailed model of the substructures making up one of the key centres of the brain involved in epilepsy: the hippocampus. The goal of their project, published on November 10 in Scientific Data, is to improve the tools available to researchers and clinicians working in the field around the globe. Epilepsy is a neurological disorder characterized by a sudden, brief change in the brain, expressed as a seizure. According to Epilepsy Canada, approximately one percent of Canadians suffer from the condition and more than 30% of patients with epilepsy do not respond to anti-epileptic drugs. For these individuals, the surgical removal of the brain tissue causing seizures is the only known effective treatment for controlling the condition and improving quality of life. In order to compile this hippocampal atlas, researchers used MRI imagery from a sample of 25 healthy individuals. They then used their expertise in brain anatomy to label all the substructures composing the region, providing a model of an average, healthy hippocampus. The end result is analogous to a Google street view of this particular part of the brain. With this tool, researchers will be better able to assess the pathology of their patients by comparing their data to the atlas and will more clearly be able to locate the areas in need of surgical intervention. A tool for brain diseases experts of all levels "Our primary purpose was epilepsy. 
We wanted to be able to detect and identify different substructures in the hippocampus to enable us to be a lot more precise in our diagnosis and to pinpoint the affected region to better target treatments", said Dr. Bernasconi. "With this new submillimetric dataset, made available through open science, we are not just sharing MRI images, we are also transferring anatomical knowledge and providing a statistical map that can be used by researchers and clinicians of different levels of expertise anywhere in the world." These tools hold promising therapeutic implications for epilepsy, but also for other neurological and psychiatric disorders such as Alzheimer's disease, schizophrenia and depression. Crucially, the atlas provides researchers with a non-invasive way to assess the impact of therapies targeting this region of the brain and to thus develop better treatments to improve the quality of life for their patients.
10.1038/sdata.2015.59
Biology
Some genes 'foreign' in origin and not from our ancestors
Alastair Crisp, Chiara Boschetti, Malcolm Perry, Alan Tunnacliffe and Gos Micklem, Expression of multiple horizontally acquired genes is a hallmark of both vertebrate and invertebrate genomes, Genome Biology 2015. DOI: 10.1186/s13059-015-0607-3 Journal information: Genome Biology
http://dx.doi.org/10.1186/s13059-015-0607-3
https://phys.org/news/2015-03-genes-foreign-ancestors.html
Abstract Background A fundamental concept in biology is that heritable material, DNA, is passed from parent to offspring, a process called vertical gene transfer. An alternative mechanism of gene acquisition is through horizontal gene transfer (HGT), which involves movement of genetic material between different species. HGT is well-known in single-celled organisms such as bacteria, but its existence in higher organisms, including animals, is less well established, and is controversial in humans. Results We have taken advantage of the recent availability of a sufficient number of high-quality genomes and associated transcriptomes to carry out a detailed examination of HGT in 26 animal species (10 primates, 12 flies and four nematodes) and a simplified analysis in a further 14 vertebrates. Genome-wide comparative and phylogenetic analyses show that HGT in animals typically gives rise to tens or hundreds of active ‘foreign’ genes, largely concerned with metabolism. Our analyses suggest that while fruit flies and nematodes have continued to acquire foreign genes throughout their evolution, humans and other primates have gained relatively few since their common ancestor. We also resolve the controversy surrounding previous evidence of HGT in humans and provide at least 33 new examples of horizontally acquired genes. Conclusions We argue that HGT has occurred, and continues to occur, on a previously unsuspected scale in metazoans and is likely to have contributed to biochemical diversification during animal evolution. Background The acquisition of genes from an organism other than a direct ancestor (that is, horizontal gene transfer (HGT) also called lateral gene transfer) is well known in bacteria and unicellular eukaryotes, where it plays an important role in evolution [ 1 ], with recent estimates suggesting that on average 81% of prokaryotic genes have been involved in HGT at some point [ 2 ]. 
However, relatively few cases have been documented in multicellular organisms [ 3 - 7 ]. Reports of HGT in animals are usually limited to the description of the transfer of only one or a few genes, making the extent of horizontal gene transfer in animals unclear. Examples include the transfer of fungal genes for carotenoid biosynthesis to the pea aphid, which results in a red pigmentation and is thought to be beneficial to the aphid [ 8 ] and the transfer of a cysteine synthase from a bacterium into the arthropod lineage (likely two independent transfers into a phytophagous mite ancestor and a lepidopteran ancestor), which allows the detoxification of cyanide produced by host plants [ 9 ]. This activity is also found in nematodes, where it may have been acquired by HGT from plants [ 9 ]. Other examples of putatively adaptive HGT have been characterised in plant-parasitic nematodes, which produce cell-wall degrading enzymes from a number of horizontally transferred genes [ 3 ], and the coffee berry borer beetle, where a mannanase has been transferred from bacteria allowing the hydrolysation of coffee berry galactomannan [ 10 ]. In exceptional cases, high levels of HGT in animals have been reported, but this has been attributed to the lifestyles of the recipient organisms. For example, in bdelloid rotifers, which are desiccation-tolerant asexuals, up to approximately 10% of transcripts derive from horizontally acquired genes [ 11 - 13 ]. Desiccation results in both DNA breakage [ 14 , 15 ] and loss of membrane integrity (reviewed in [ 16 ]), both of which may potentiate HGT. Another unusual example is the transfer of the entire genome (>1 Mb) of the bacterium Wolbachia into the fruit fly Drosophila ananassae , although relatively few Wolbachia genes are transcribed in this case [ 17 ]. 
Genes from Wolbachia are frequently transferred to invertebrates [ 17 , 18 ], probably because the long-term association (either parasitic or mutualistic) between the bacterium and its hosts maintains their genomes in close proximity. Furthermore, as Wolbachia frequently infects the testes and ovaries of its hosts, it has access to their germlines, a prerequisite for the transmission of the acquired genes to the next generation. These studies have led to the perception that HGT occurs very infrequently in most animals, especially in vertebrates [ 5 , 6 ]. Furthermore, there are concerns over the validity of the examples of HGT reported in humans [ 19 - 22 ]. The original report on the human genome sequence [ 19 ] described prokaryote-to-vertebrate HGT discovered by aligning human sequences to those of a small number of species (not many genomes were available at the time), including only two metazoans, D. melanogaster and Caenorhabditis elegans . Any proteins aligning to bacteria but not to these two metazoans, or to the other two eukaryotic proteomes used ( Arabidopsis thaliana and Saccharomyces cerevisiae ), were considered to be a result of prokaryote-to-vertebrate HGT. However, these four eukaryotic species do not contain orthologs of all ‘native’ human genes (that is, those not horizontally acquired), leading to incorrect identification of HGT (false positives) and the subsequent rejection of many cases by phylogenetic analyses [ 20 - 22 ]. The problem (the availability of a limited number of eukaryotic genomes for comparison in studies of HGT) has lessened in the intervening decade; thousands of proteomes (including several primates) are now available in UniProt, allowing prediction of HGT using alignment to hundreds of species and subsequent phylogenetic validation, as shown in recent work in invertebrates (for example, [ 12 , 23 , 24 ]). 
In the human, however, there have been no follow-up studies since the original genome paper, and the true scale of HGT in humans, and metazoans generally, remains unclear. To remedy this, we initially identified non-metazoan to metazoan HGT in multiple Drosophila , Caenorhabditis and primate (including human) species. Due to the controversy surrounding the human studies [ 19 - 22 ], we then took our analysis a step further by comparing multiple closely related species and combining information on horizontally transferred (‘foreign’) genes found in more than one species in the group, thereby reducing mis-identification of HGT caused by spurious alignments. In this way, we identified up to hundreds of active foreign genes in animals, including humans, suggesting that HGT provides important contributions to metazoan evolution. Results Drosophila species, Caenorhabditis species and primates have up to hundreds of active foreign genes To determine the scale of HGT across well-characterised taxonomic groups, we examined 12 Drosophila species, four Caenorhabditis species and 10 primates (Figure 1 ) for which high quality genomes and transcriptomes are available. For each transcribed gene, we calculated the HGT index, h (the difference between the bitscores of the best non-metazoan and the best metazoan matches), which gives a relative quantitative measure of how well a given gene aligns to non-metazoan versus metazoan sequences, with positive numbers indicating a better alignment to non-metazoan sequences [ 12 ]. For example, the C. elegans gene gut-obstructed 1 ( gob-1 ), which encodes a trehalose-6-phosphate phosphatase, has a best non-metazoan match with a bitscore of 135 and a best metazoan match with a bitscore of 39.3, resulting in an HGT index of 95.7. As we were interested in more than just very recent HGT, we excluded members of the test species’ phylum from the metazoan matches.
This allowed us to identify HGT over evolutionary periods encompassing hundreds of millions of years, as opposed to only identifying HGT that occurred since the test species’ divergence from its most closely related species (likely up to tens of millions of years). Hereafter, when we refer to matches to metazoan sequences, we mean these subsets. Figure 1 Phylogenetic relationships of the main taxonomic groups studied. The blue numbers indicate the ortholog groups mapping to each branch (HGT events). Events may have occurred anywhere along the branch, not just where the number is indicated. Events found at the base of the tree have occurred anywhere between the origin of the phylum and the base of the tree. Trees are not drawn to scale with each other. Full size image We first identified a base level of HGT (called class C) by using conservative thresholds of h ≥30 (as in [ 12 ]) (meaning that the gene aligns much better, and is therefore much more similar, to non-metazoan genes) and bitscore of best non-metazoan match ≥100 (thereby excluding bad alignments to non-metazoans). The example given above ( gob-1 ) passes these thresholds and is therefore at least class C HGT. This per-species information was then combined for each taxon ( Drosophila , Caenorhabditis and primates) to construct ortholog groups. For each ortholog group we calculated the average h value of all members ( h orth ) and defined the genes with h orth ≥30 as class B, a subset of class C. These genes are, on average, predicted as HGT in all tested species they are found in. The gene gob-1 has homologs in C. brenneri , C. briggsae and C. japonica , with values of h = 102, h = 97.1 and h = 86.4, respectively, giving an average h ( h orth ) of 95.3, and as such gob-1 (and its homologs) are also class B HGT.
Finally, we applied a still more stringent filter to define class A foreign genes (a subset of class B), which had only very poor alignments to metazoan sequences and whose orthologs, as used to define class B, also had similarly poor alignments to metazoan sequences. To do this, we identified those sequences whose best match to a metazoan had a bitscore <100 and whose ortholog groups contain no genes with metazoan matches of bitscore ≥100 (Figure 2 A). The gene gob-1 has no metazoan matches with bitscore ≥100 (best metazoan match = 39.3) and the same is true for its homologs (best matches of 37, 38.9 and 36.6, respectively); as such, it is also class A HGT. Figure 2 HGT genes by class. ( A ) The left panel shows a schematic representation of the HGT classes: class B and C genes have h index ≥ 30 and bitscore of the best non-metazoan blastx hit ≥ 100 (they are distinguished by h orth , which is not shown on this figure), while class A genes must additionally have bitscore <100 for the best metazoan blastx hit. The right panel shows the scores for all genes in H. sapiens , colour-coded according to their classification (class A: red, class B: orange, class C: blue, native genes: grey). ( B ) Box-plot of the number of genes in each class, for the three main taxa analysed ( Drosophila spp., Caenorhabditis spp., primate species), colour-coded according to the same scheme (class A: red, class B: orange, class C: blue). Full size image We then performed phylogenetic analyses for all genes of each of the above classes and found that an average of 55% of all class C genes, 65% of all class B genes and 88% of all class A genes were phylogenetically validated as foreign. This validation and further manual analysis (Additional files 1 and 2 ) suggested that, while false positives are minimised as C → B → A, some true positives are also lost. Therefore, class A genes represent a minimum estimate of the level of HGT for a given species.
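The three nested thresholds can be expressed compactly. The sketch below is our own illustration of the classification rules as stated in the text, not the authors' code; the tuple representation and helper names are invented, and the non-metazoan bitscores for the gob-1 homologs are reconstructed from the h values and metazoan bitscores quoted above (e.g. 102 + 37 = 139):

```python
def h_index(best_nonmetazoan, best_metazoan):
    """HGT index h: difference between the bitscores of the best
    non-metazoan and the best metazoan matches."""
    return best_nonmetazoan - best_metazoan

def classify(genes):
    """Assign HGT classes to the members of one ortholog group.

    `genes` is a list of (best_nonmetazoan_bitscore, best_metazoan_bitscore)
    tuples, one per member.  Returns, per gene, the most stringent class
    reached ('A' is a subset of 'B', which is a subset of 'C'), or None
    for genes not predicted as HGT.
    """
    h = [h_index(nm, m) for nm, m in genes]
    h_orth = sum(h) / len(h)                      # ortholog-group average h
    group_all_weak = all(m < 100 for _, m in genes)
    out = []
    for (nm, m), h_i in zip(genes, h):
        cls = None
        if h_i >= 30 and nm >= 100:               # class C thresholds
            cls = 'C'
            if h_orth >= 30:                      # class B: average over group
                cls = 'B'
                if m < 100 and group_all_weak:    # class A: group-wide filter
                    cls = 'A'
        out.append(cls)
    return out

# gob-1 and its homologs; non-metazoan bitscores reconstructed as h + metazoan
gob1_group = [(135, 39.3), (139, 37), (136, 38.9), (123, 36.6)]
classes = classify(gob1_group)
```

Running `classify` on the reconstructed gob-1 group yields class A for every member, matching the worked example in the text; a gene with a strong metazoan match (e.g. bitscore 250) would come back as None, i.e. native.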
We found that Caenorhabditis species have, on average, 173, 127 and 68 genes in HGT classes C, B and A, respectively. In contrast, Drosophila species have fewer active foreign genes with, on average, 40 genes in class C, 25 in class B, and only four in class A. Primate HGT levels fall between those of the invertebrate taxa, with an average of 109, 79 and 32 genes per species in classes C, B and A, respectively (Figure 2 B, Additional files 2 and 3 ). Identified foreign genes are unlikely to be explained by alternative hypotheses To verify that the foreign genes we identified do indeed belong to the species under study and are not contamination (this is a problem in a number of animal genome sequences; see ‘Phylogenetic validation’ in Additional file 1 ), we tested whether they were found on the same genomic scaffolds as (that is, were linked to) genes of metazoan origin (native genes). Across all species we found an average of only nine class C genes per species (6.6% of foreign genes) that were not linked to native genes (Additional file 2 ), with correspondingly low proportions for class B and A genes. Demonstration of such high levels of linkage was only possible due to the high quality of the assemblies of these genomes. Although most species showed a high degree of linkage, there were three outliers (see Additional file 1 ), but even if all unlinked genes were contamination, which is not necessarily the case, this would have a minimal impact on the levels of HGT detected. An alternative hypothesis to explain our data is that the genes we label as foreign in any single species are actually the result of conventional vertical descent, but have been lost in all other animal lineages. The parsimony principle tells us that we should choose the simplest explanation, which might be determined by comparing the rate of gene loss and the rate of gene gain by HGT. 
However, while the rate of gene loss over time can be estimated, at this point we cannot accurately estimate the rate of HGT over anything less than the time since the common ancestor of all metazoans, due to limited data. The rates that should actually be compared are the rates of gene loss and HGT at the earliest branches of the eukaryotic tree, but these rates are especially difficult to determine as the very long periods of time involved mean that ortholog determination (necessary to find which genes have been lost/gained) is hard. Furthermore, published estimates of the rate of gene loss typically treat all genes as equal, but the actual rate varies between types of genes and types of organisms (for example, parasites have higher loss rates [ 25 , 26 ]). As HGT involves the transfer of only a subset of genes (see section ‘ Many horizontally acquired genes code for enzyme activities ’, below), a generic gene loss rate is not comparable to the HGT rate. Given these difficulties we attempted to differentiate between the two hypotheses with a different method. We looked at the functions of foreign genes and compared them to those of native genes that are known to have been lost in all other animal lineages, but were not predicted as foreign (genes for which the alternate hypothesis is true) and found significant differences between the foreign genes we identified and native genes fulfilling these criteria (see section ‘ Many horizontally acquired genes code for enzyme activities ’, below). Therefore, while we cannot entirely discount the gene loss hypothesis, it seems an unlikely explanation for the tens or hundreds of foreign genes per genome that we observe. Identification of new foreign genes and confirmation of previously reported examples The first report of the human genome sequence highlighted 223 protein sequences (of which 113 were confirmed as present in the genome by PCR) that were proposed to originate from bacteria by HGT [ 19 ]. 
While some of these genes were later confirmed as foreign, many were rejected [ 20 - 22 ]. At the time of writing, it is difficult to assess all of these sequences because some early identifiers have not been maintained, but we have been able to confirm or reclaim 17 previously reported examples as foreign (some also confirmed by other studies; Additional file 4 ). Furthermore, we identified up to 128 additional foreign genes in the human genome (128 class C, of which 93 are class B and 33 class A), giving a total of 145 class C genes, of which 110 are class B and 39 class A. Among these examples, we reclaim those encoding the hyaluronan synthases (HAS1-3). These were originally proposed as examples of prokaryote-to-metazoan HGT [ 19 ], but later rejected [ 20 ]; however, neither study considered foreign taxa other than bacteria. We were able to identify all three hyaluronan synthases as class A HGT, originating from fungi, an assessment supported by our phylogenetic analysis (Figure 3 ). The HAS genes appear in a wide variety of chordates, but not in non-chordate metazoans, suggesting they result from the transfer of a single gene around the time of the common ancestor of Chordata, before undergoing duplications to produce the three genes found in primates. As the original rebuttal paper [ 20 ] only focused on recent HGT, and did not look for eukaryotic matches outside Chordata, they could not detect this ancient HGT. Figure 3 Phylogenetic tree for the human gene HAS1. For each branch the species name and UniProt accession is shown. The human gene under analysis is shown in orange, proteins from chordates are in red, other metazoa in black, fungi in pink, plants in green, protists in grey, archaea in light blue and bacteria in dark blue. Numbers indicate aLRT support values for each branch where higher than 0.75 (on short terminal branches the support values are not shown). 
Full size image We also identify cases of HGT reported more recently that have not been analysed in detail despite the potentially interesting consequences of such a finding. For example, the fat mass and obesity associated gene (FTO, in Additional file 5 : Figure S1A) seems to be present only in marine algae and vertebrates [ 27 , 28 ], which is a highly unusual distribution. Another gene proposed to have been horizontally transferred is the ABO blood group gene (ABO, in Additional file 5 : Figure S1B), which is suggested to enhance mutualism between vertebrates and bacteria [ 29 ]. We identified both these genes as class A HGT with phylogenetic validation (Additional file 3 ). In the invertebrates, Dunning Hotopp et al. [ 17 ] reported the horizontal transfer of nearly the entire Wolbachia genome into the D. ananassae genome and the expression of 44 formerly Wolbachia genes, as evidenced by PCR amplification of their transcripts in flies that had been cured of Wolbachia and partial re-sequencing of the D. ananassae genome. These genes are still not included in the official genome of D. ananassae , likely because eukaryote genome sequencing projects routinely screen for and exclude bacterial sequences on the assumption that these represent contamination. Consequently, they were not in the dataset tested in this study and therefore do not appear among the foreign genes identified in D. ananassae . However, we did find one gene in D. ananassae , GF19976, which has not been identified previously as foreign and that may originate from Wolbachia . Parkinson and Blaxter [ 30 ] identified four horizontally acquired genes in C. elegans . We identified all of these, three as class B HGT and the fourth as class A (highlighted in yellow in Additional file 3 ), but we also discovered a further 135 class C genes, of which 113 were class B and 71 class A (Additional file 3 ). This discrepancy with Parkinson and Blaxter [ 30 ] arises largely because these authors aligned C. 
elegans sequences with only a single non-metazoan ( S. cerevisiae ). Accordingly, we identified three of the four known foreign genes as fungal in origin, with the fourth also aligning well to fungal proteins (although we find it originated from a protist). Overall, however, only 4% to 15% of C. elegans HGT (depending on class) is of fungal origin (Additional file 2 ), with rather more (52% to 72%) deriving from bacteria (not assessed in ref. [ 30 ]). As mentioned in the Background section, there is phylogenetic evidence that the Cysteine Synthase Like genes found in nematodes, including C. elegans ( cysl1 - 4 ), may have been acquired from plants [ 9 ]. Our analysis supports this conclusion with all four genes being class B HGT of plant origin and three being phylogenetically validated. HGT also occurs in a number of other nematodes, in particular the parasitic root-knot nematodes, in which as many as approximately 3% of all genes may be horizontally acquired [ 3 , 24 ]. Many horizontally acquired genes code for enzyme activities In prokaryotes, horizontally acquired genes tend to be operational, typically encoding enzymes, rather than informational, that is, genes involved in transcription and translation [ 31 ]. It has recently been suggested that network connectivity is a more important consideration than function [ 32 ], but nevertheless most identified foreign genes are concerned with metabolism. Consistent with this, 83% of foreign genes in the bdelloid rotifer encode enzymes [ 12 ]. To determine whether this applies to HGT throughout metazoans, we first inspected Gene Ontology (GO) terms that were found at unexpectedly high levels among class A foreign genes (‘enriched GO terms’), then determined which GO terms indicated enzyme activities, and finally calculated the percentage of enzyme activities for both enriched and un-enriched terms. 
In almost all Caenorhabditis and primate species, enriched GO terms were significantly more likely (chi-squared test: 3E-9 ≤ P ≤ 0.05; Additional file 2 ) to describe enzyme activities than un-enriched terms (on average, 42% of enriched terms relate to enzyme activities vs. 26% of un-enriched terms; Additional file 2 ). In Drosophila species, insufficient terms were enriched to perform the calculation. Enriched GO terms in class B genes were also more likely to relate to enzyme activities. The second largest group of foreign genes codes for membrane-bound proteins, another category of operational genes. Therefore, like in prokaryotes [ 22 ], HGT is biased towards operational genes in metazoans. A possible alternative explanation for the genes suggested to result from HGT is that they are actually the product of vertical descent in the concerned species, but have been lost in all other animal lineages (as discussed above in ‘ Identified foreign genes are unlikely to be explained by alternative hypotheses ’). This explanation is more likely in the primates as their HGT is predominately ancient (see section ‘ Horizontal gene transfer is both ancient and ongoing ’ below), reducing the number of times the gene must be lost. To test this hypothesis, the same GO analysis was performed on native genes that are found in chordates and not in non-chordate metazoans (that is, genes that have been lost in all non-chordate metazoans; a possible alternative explanation for the putative foreign genes we identify). In all primate species, enriched GO terms for these genes (when compared to those from all other native genes) were significantly less likely (chi-squared test: P ≤0.05; Additional file 2 ) to describe enzyme activities than un-enriched terms (on average, 4% vs. 20%; Additional file 2 ). This is the opposite of the result for foreign genes, suggesting that an alternative hypothesis of gene loss does not explain our findings. 
Foreign gene functions Many foreign genes are, like many native genes, currently uncharacterised, even in intensively studied model organisms; for example, the human (foreign) gene ENSG00000136830 is annotated ‘family with sequence similarity 129, member B’, but there is no information on its role. Where foreign genes have meaningful annotation, it is clear they code for a wide variety of different functions across a broad range of categories, some of which may overlap. Here we describe the six most noteworthy categories, from largest to smallest, across C. elegans , D. melanogaster and the human (Additional file 3 ). In C. elegans , the largest category includes genes connected to the innate immune response (16 genes), including genes that specifically target bacterial cell walls ( lys family), genes targeting fungi (for example, endochitinases) and other more general immune system genes (for example, thn-3 and thn-4 ). The second largest category comprises eight genes involved in lipid metabolism, including the breakdown of fatty acids by beta-oxidation (for example, fatty acid CoA synthetase family, acs-11 ), as well as fatty acid synthesis (for example, fatty acid desaturase, fat-1 ). The third category includes four genes involved in macromolecule modification, which encompasses activities such as phosphorylation, methylation and ubiquitination. The fourth category governs stress responses and includes a heat shock protein ( dnj-16 ), an LEA protein ( lea-1 ) and two genes involved in the trehalose synthesis pathway: trehalose-6-phosphate phosphatase ( gob-1 ) and trehalose-phosphate synthase ( tps-1 ). Trehalose production allows C. elegans dauer larvae to survive extreme desiccation [ 33 ], while LEA proteins are also linked to tolerance of water stress in C. elegans [ 34 ] and other invertebrates, as well as plants and microorganisms (reviewed in [ 35 ]).
The fifth category consists of antioxidant activities (one gene; glutathione peroxidase, gpx-2 ) and the sixth category is amino-acid metabolism, also consisting of a single gene, coding for asparagine synthesis ( asns-2 ). There are far fewer foreign genes in D. melanogaster , but we do see genes belonging to some of the same categories as in C. elegans , namely macromolecule modification (two genes), the innate immune response (three genes) and stress response (three genes). The three D. melanogaster immune response genes all belong to the same family of proteins, which is involved in the phagocytosis of bacteria, while the three stress response genes are all involved in the trehalose synthesis pathway: two trehalose phosphatases (FBgn0031907, FBgn0031908) and a trehalose-phosphate synthase (Tps1). While this last gene has the same name as a C. elegans trehalose-phosphate synthase gene (Tps1/ tps-1 ), alignment shows they are dissimilar, especially outside the catalytic domain, suggesting they do not originate from the same HGT event (in Additional file 5 : Figure S2). Likewise the trehalose phosphatases are not conserved across species. In the human we find genes in five of the six categories: amino-acid metabolism (two genes), macromolecule modification (15 genes), lipid metabolism (13 genes), antioxidant activities (five genes) and innate immune response (seven genes). The lipid metabolism genes include genes with similar functions to the C. elegans genes, such as the breakdown of fatty acids by beta-oxidation (for example, enoyl-CoA, hydratase/3-hydroxyacyl CoA dehydrogenase, EHHADH), as well as a wide variety of other functions including the formation of glycolipids via chain extension (for example, globoside alpha-1,3-N-acetylgalactosaminyltransferase 1, GBGT1) and transmembrane transporters required for lipid homeostasis (for example, ATP-binding cassette, sub-family G (WHITE), member 5, ABCG5).
The innate immune response genes include genes involved in the inflammatory response (for example, immunoresponsive 1 homolog, IRG1), genes for immune cell signalling (for example, phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit gamma, PIK3CG) and antimicrobial genes (for example, epididymal peptidase inhibitor, EPPIN). We do not find any of the same foreign genes in common across the three species because our method precludes this: such genes would have been present in a common ancestor and would be screened out as metazoan. However, we do find shared functions, such as the trehalose synthesis pathway in the invertebrates. Few genes are found in shared pathways. This may indicate that transfers happen one gene at a time, with each gene being separately integrated into the metabolic networks of the organism. Broadly speaking we do not see differences between the species in the functions encoded by foreign genes, except in the immune response category: the majority of the invertebrate genes encode enzymes that break down bacterial and fungal cell walls, which would seem to confer a clear adaptive advantage, while the human genes are more likely to code for signalling and regulation of the immune response and have less obvious advantages to the organism. This likely reflects the differences in age between the vertebrate and invertebrate HGT (see section ‘ Horizontal gene transfer is both ancient and ongoing ’, below), with the more recently acquired foreign genes in the invertebrates having a clearer role than the ancient foreign genes in the vertebrates, which have had longer to integrate into networks. Foreign genes predominately originate from bacteria and protists When calculating h , the likely taxon of origin of a foreign gene was taken to be the taxon of the best-matching protein. 
Bacteria and protists are the most common donors in all groups (Figure 4 ), which might reflect the relative abundance of the respective donor species in the environments of the recipient organisms. The phylogenetic validation of the foreign genes occasionally indicated a different origin than the original calculation (based on alignments and h index), but both methods agreed on average 92% of the time; performing the analysis shown in Figure 4 using phylogenetically predicted origins instead shows the same pattern of donors (data not shown). The identity of the actual donor species is much harder to determine, as the identified ‘donor’ is almost certainly just the most closely related species currently sequenced. This is especially the case for older HGT events where the same foreign gene appears in more than one species, that is, where horizontal transfer predates the divergence of species. However, we did find a number of recent transfers (present in only a single studied species) that were identified as originating specifically from Wolbachia , with one example each in D. ananassae , C. briggsae and C. japonica (GF19976, CBG07424 and Cjp-ubc-6, respectively). Figure 4 Mean origin of class C foreign genes for each taxon. Numbers show percentage contribution within each taxon (row). The same analyses for Class B or A genes show very similar patterns. The colour scheme is as in Figure 3 : origin from archaea is light blue, from bacteria is dark blue, from protists is grey, from plants is green and from fungi is pink. Our method also identified putative HGT from viruses: while rare in both Drosophila and Caenorhabditis , up to 50 more foreign genes of viral origin per species were identified in the primates (‘Class V’: Additional files 2 and 3 ). The majority of such genes only align to viral and metazoan sequences, making the direction of transfer unclear, and therefore we excluded them from the rest of our analysis.
Foreign genes are as likely to contain introns as native genes The many foreign genes that originate from bacteria would originally have lacked introns, but may have gained them while becoming adapted to the recipient species (domestication). To test this we looked at whether bacterial-origin foreign genes have introns. The Drosophila species generally have too few foreign genes to perform the analysis, but in three Caenorhabditis species (all except C. japonica ) and all primates the percentage of bacterial-origin foreign genes with introns is around 95%. For all three classes of foreign gene (C, B and A), there was no significant difference between the proportion of bacterial-origin foreign genes with introns and the proportion of native genes with introns (as measured by a chi-squared test; Additional file 2 ). The same was true for foreign genes as a whole (all origins; Additional file 2 ). This observation also makes it unlikely that the detected HGT is actually contamination of the genome with bacterial sequences, as these would lack introns. The exception, C. japonica , has significantly fewer bacterial-origin foreign genes with introns than native genes in all three classes ( P < 8E-6), averaging only 29% of bacterial-origin foreign genes with introns. It also has significantly fewer class A foreign genes with introns than native genes with introns ( P < 0.001) as discussed below. Horizontal gene transfer is both ancient and ongoing To determine whether the detected HGT is ancient (prior to the divergence of the studied taxon), or has occurred throughout the evolution of a particular taxon, we mapped the foreign ortholog groups (representing founding HGT events) for each taxon onto the corresponding phylogenetic trees. In Drosophila species, there is a broad correspondence between length of branch (time) and the number of HGT events along each branch, suggesting that HGT has occurred throughout Drosophila evolution and is likely to be ongoing (Figure 1 ). 
The same can be inferred for the Caenorhabditis species. Interestingly, a much larger number of HGT events have occurred in C. japonica than in the other studied Caenorhabditis species or their common ancestors, and its foreign genes also have different properties: it is the only species studied where significantly fewer multi-exon genes are found among foreign genes of prokaryotic origin than among native genes (Additional file 2 ). Transferred prokaryotic genes presumably require some time to acquire introns, and the lower proportion of intron-containing foreign genes is consistent with comparatively recent HGT events. An alternative explanation is that the C. japonica genome is contaminated, since around twice as many of its foreign genes are unlinked to native genes as in other species (Additional file 2 ). However, even if all unlinked genes are considered to be contamination and are discounted, there would still be more HGT events unique to C. japonica than unique to the other studied Caenorhabditis species. The distribution of transfer events is different in the primates, with most foreign groups mapping to the base of the tree (a common ancestor of primates), suggesting that the majority of HGT in primates is ancient. In these cases we are not inferring that the HGT event occurred in the most recent common ancestor of all primates, but that it occurred sometime between the common ancestor of Chordata and the common ancestor of the primates, that is, prior to the time period shown in Figure 1 . For example, in the case of HAS1 (Figure 3 ), which is found in a wide variety of chordates, the HGT event likely occurred soon after the common ancestor of Chordata arose. 
Foreign genes undergo duplication and are distributed throughout the genome Horizontally acquired genes can undergo duplication and diversification: for example, the three hyaluronan synthases in Homo sapiens belong to the same ortholog group and probably result from a single transfer event, followed by duplications. We observed the same scenario for other genes in H. sapiens (for example, the four peptidyl arginine deiminases and the nine PRAME family members; Additional file 3 ), and also in other species. In an extreme case (the O-acyltransferases belonging to the same ortholog group in C. elegans ; Additional file 3 ) as many as 30 genes probably derive from a single HGT event. To ask whether there are ‘hotspots’ undergoing more frequent HGT in the studied genomes, we plotted the locations of foreign genes on the chromosomes or scaffolds of the respective genomes (Additional file 5 : Figure S3). We found no evidence for ‘hotspots’, but the limited number of HGT events per species and the frequent occurrence of chromosomal rearrangements during evolution, which complicate cross-species comparisons, make it difficult to draw reliable conclusions. HGT is a general feature of chordate genomes Because there is limited information on HGT in the chordates, we also identified foreign genes for 14 other vertebrate species (Additional file 2 ). We find 60 to 240 class C genes (approximately 0.4% to 1.3%) across all of these species, in line with our findings for Drosophila , Caenorhabditis and primates, suggesting that HGT is not restricted to a few animal groups. We did not try to identify class A and B genes, as our method does not produce reliable ortholog groups for species separated by large evolutionary distances. 
Discussion HGT occurs at low, but appreciable, levels across all the animal species we examined; it has occurred over time and is still occurring; it mainly originates from bacteria and protists; and the genes concerned frequently code for enzyme activities. Interestingly, overall levels of HGT do not appear to be conspicuously different in vertebrates and invertebrates. This is surprising given the difference in complexity between the groups, but may be explained by the observed older HGT in primates, suggesting that the vertebrate HGT may have occurred at an earlier stage of vertebrate evolution. All animal genomes we examined contain expressed foreign genes and therefore unusual circumstances, such as an asexual lifestyle and desiccation tolerance, are not required for transfer, although such characteristics might accelerate HGT rates, as in the bdelloid rotifer. We have been able to improve on earlier studies of HGT in model organisms [ 19 , 30 ] for two main reasons. The first is the much larger number of species, including many more metazoans, now represented in the sequence databases used for comparison. This reduces the false discovery rate, by increasing the number of species in which a gene must be lost before it is incorrectly called HGT in the species in which it is found. In the original human genome paper [ 19 ] only two metazoans were used, D. melanogaster and C. elegans , while we used hundreds of species. We also examined transfer from multiple taxa, rather than just bacteria [ 19 ] or fungi [ 30 ], which reduces the false negative rate: one of the rebuttals of the original human paper [ 20 ] correctly rejected hyaluronan synthases as prokaryote-to-vertebrate transfer, but failed to identify it as fungus-to-vertebrate transfer because fungal sequences were not considered as potential donors. The second major improvement is the use of multiple closely related animal species when testing for HGT, allowing the construction of ortholog groups. 
This reduces the false discovery rate by controlling for spurious alignments that could incorrectly identify a gene as foreign in the minority of species in a group. In particular, this increases our accuracy when searching for older HGT, which is likely to be found in more than one species. The clearest demonstration of HGT would depend on reliable identification of the donor species, but the source of a foreign gene can only be traced to the most similar extant organism with a sequenced genome. The identification of the donor species for anything but the most recent HGT is further complicated by the sequence evolution of both recipient and donor; as a consequence, absolute certainty in the assignment of most HGT is unachievable. To accommodate this issue, we have defined a set of HGT classes that display differing levels of phylogenetic validation. While class A HGT has the highest degree of validation (88%), the levels of class C HGT are more directly comparable to those of recent studies – because in most cases closely related species are not available, so ortholog groups cannot be constructed – yet even this least stringently selected class of genes is 55% validated (amounting to an additional eight to 58 validated genes per species on top of those in class A). Although phylogenetic validation is seen as the ‘gold standard’ for HGT discovery, it is important to note that many class A foreign genes (68%) have no metazoan alignments (bitscore <50) so need not be, indeed cannot be, validated as HGT in this way. In these cases, the lack of matches with metazoan genes, together with clear matches to non-metazoan sequences, is sufficient to demonstrate HGT, while phylogenetics can be used to suggest the origin of such sequences. 
Another issue with phylogenetic validation arises when it is used in an automated manner: for the highest degree of accuracy, trees should only contain orthologs, but determining orthologs from very large sets of sequences requires manual annotation. Phylogenetic trees for HGT validation are also sensitive to contamination, which is widespread in genomes deposited in UniProt (see Additional file 1 ). We find that around 13% of our non-validated trees would be validated if a single metazoan protein, grouping within another taxon without other metazoan proteins, were actually non-metazoan contamination, increasing the level of validated HGT proportionally. As many of these sequences are likely contamination, it is clear that phylogenetic approaches lose reliability without very high quality databases. While most studies of HGT look at isolated species, our use of multiple closely related species to define ortholog groups, and thereby class B HGT, reduces the problem of potential contamination by asking that candidate foreign sequences are present in the whole taxon. We only see a modest increase in phylogenetic validation between class C (where ortholog data were not used) and class B HGT (55% to 65%), but this is based on the use of high quality genomes and we would expect a bigger increase when using lower quality genomes. Our analysis probably underestimates the true extent of HGT in animals for several reasons. First, we set a conservative threshold for the HGT index, that is, h ≥ 30, to minimise the false positive rate, but there are probably genes below this threshold that are also horizontally acquired. Second, although hard to detect with available data, metazoan-to-metazoan HGT remains plausible and is known for some host-parasite relationships [ 36 ]. Some of these transfers may be mediated by viruses, and in our study we specifically excluded potential virus-mediated HGT due to ambiguity in the direction of transfer.
Third, eukaryotic genome projects routinely remove bacterial sequences from their assemblies on the assumption that they are contamination; for instance, this has resulted in the removal of all previously reported HGT from the D. ananassae genome. As a result, we may have missed further examples of bacterial HGT in our study, and such screening may explain the lower levels of HGT seen in the Drosophila species. While some screening for contamination is clearly necessary, the potential for apparently bacterial sequences to originate from HGT should not be ignored during genome assembly; this observation emphasises the importance of using high quality genome assemblies, as we did here, when searching for HGT. It is important to consider the likelihood of other explanations for our results. The most obvious is the possibility that the observed foreign genes were inherited by vertical descent, but have been lost from all other observed metazoan species outside the phylum of interest. Increasing the number of metazoan species with high quality genomes and transcriptomes will in future help shed light on this possibility. In the meantime, we observed a striking difference between all classes of HGT and the native genes found in chordates, but not in other metazoans. Thus, genes that are apparently missing in animals other than chordates are significantly less likely to have GO terms for enzyme activities than other native genes (4% vs. 20%), while in contrast the HGT candidates are significantly more likely to have GO terms for enzyme activities (42% vs. 26%). While we cannot completely rule out gene loss as an explanation for our observations, these findings, together with the other lines of evidence presented, suggest that HGT is the more likely explanation. 
Conclusions Although observed rates of acquisition of horizontally transferred genes in eukaryotes are generally lower than in prokaryotes, it appears that, far from being a rare occurrence, HGT has contributed to the evolution of many, perhaps all, animals and that the process is ongoing in most lineages. Between tens and hundreds of foreign genes are expressed in all the animals we surveyed, including humans. The majority of these genes are concerned with metabolism, suggesting that HGT contributes to biochemical diversification during animal evolution. Materials and methods Data sources Genomes, transcriptomes, proteomes and gffs for Drosophila species were obtained from FlyBase [ 37 , 38 ]. Additional annotation was obtained from FlyMine [ 39 , 40 ]. All data on Caenorhabditis species were obtained from WormBase [ 41 , 42 ], while data on primate and chordate species were obtained from Ensembl [ 43 , 44 ], with the exception of ortholog groups which were obtained from OrthoMCL [ 45 , 46 ]. Genome versions used are shown in Additional file 2 . Determination of HGT index, h The workflow for this step is shown in Additional file 5 : Figure S4. For each studied species, all transcripts were aligned with blastx [ 47 ] to two protein databases derived from complete proteomes in UniProt, one consisting of metazoan proteins (excluding proteins from species in the same phylum as the studied species - Arthropoda, Nematoda or Chordata), the other of non-metazoan proteins. The HGT index, h , was calculated by subtracting the bitscore of the best metazoan match from that of the best non-metazoan match [ 12 ]. The majority of transcripts in all species have h <0, indicating they match better to metazoan proteins, as would be expected from vertical gene transfer through the tree of life, where they have had longer to diverge from non-metazoan proteins than from metazoan ones. 
Therefore, transcripts with h >0, which are less diverged from non-metazoan proteins than metazoan, should have been acquired by horizontal transfer from non-metazoans. Rather than just take all transcripts with h >0, we require that they align much better to non-metazoan proteins than to metazoan proteins and define candidate (class C) HGT genes as those with h ≥30 that also have a best non-metazoan bitscore ≥100. The threshold of 30 was chosen because detailed analysis in our earlier paper [ 12 ] found this threshold to be the best trade-off between sensitivity and specificity. As bitscore is a logarithmic measure of sequence similarity, 30 is a large difference in alignment quality. For each gene, h was inherited from the transcript with the match with the highest bitscore. For the Drosophila , Caenorhabditis and primate species studied, all proteins in each group were aligned to each other with blastp, using a cutoff of 1E-5. Ortholog groups were determined from this alignment using MCL with I = 15 [ 48 ]. This value was determined by comparing the ortholog groups to preexisting groups (more details in Additional file 1 ). For each class C gene, the average h value of the members of its ortholog group was determined ( h orth ); if this was ≥30 the gene was considered to be a class B gene. Class A genes were defined as a subset of class B genes with no metazoan matches with bitscore ≥100 and no members of their respective ortholog group with metazoan matches with bitscore ≥100. Numbers of each class for each species are shown in Additional file 2 . Phylogenetic validation We phylogenetically validated all foreign genes that had any metazoan matches with bitscore ≥50 using a method based on that previously used, producing unrooted trees [ 12 ]. We used a strict validation, requiring that the trees showed no evidence that the foreign gene was metazoan. 
The trees were considered validated if the foreign gene was monophyletic either with a single donor taxon or with multiple potential donor taxa and was not monophyletic with the metazoa. In cases where the foreign genes were monophyletic with both the metazoans and the donor(s) the tree was not validated. We did not require the ‘own-phylum’ taxon (Arthropoda, Chordata, Nematoda) to be monophyletic, as in cases of recent HGT the best matches in this taxon are not orthologs to the foreign gene. For further details see Additional file 1 . All phylogenetic trees containing metazoan matches with bitscore ≥50 are available at [ 49 ]. Manual validation The 145 human genes classified as HGT were also subjected to manual validation. The transcript with the best blastx bitscore from the previous analysis was blastx-compared to the non-redundant protein sequence (nr) database, excluding Chordata (taxon id: 7711), Vertebrata (taxon id: 7742) or Metazoa (taxon id: 33208) in turn, using the NCBI website [ 50 , 51 ]. The results were manually inspected and the alignments checked for reliability. The same 145 transcripts were also analysed according to published protocols [ 12 ]; in summary, sequences were compared (using NCBI-blast-2.2.27+) [ 47 ] against the complete proteomes on UniProt. The comparison was done against kingdom-specific databases containing exclusively Metazoa (taxon id: 33208), Eubacteria (taxon id: 2), Archaea (taxon id: 2157), Fungi (taxon id: 4751), plants (taxon id: 3193) and protist (eukaryotes without Metazoa, Fungi and plants) sequences. Bitscores were recorded for the best hit in each taxon and h calculated as described. Results were manually analysed to check for agreement with the analysis using the nr database and the automated analysis. Genome linkage tests For each foreign gene, we identified to which contig/scaffold it mapped and determined whether native genes (for which h <30) were also found on that contig. 
If so the horizontally transferred gene was considered to be linked to a native gene. Results are shown in Additional file 2 . Discussion of species with lower than average levels of linkage is contained in Additional file 1 . Functional characterisation of genes To determine whether horizontally transferred genes encode enzymes, we examined GO annotation [ 52 ]. A more direct calculation using EC numbers is not possible due to a lack of EC annotation in most of the species studied. GO terms used in the test species were manually annotated to indicate whether they referred to an enzymatic activity. A hypergeometric test was performed to calculate per species which GO terms were enriched in each class of foreign gene (threshold of P ≤0.05). Benjamini-Hochberg multiple testing correction was performed to reduce the false positive rate. We then calculated whether enzymes were significantly over-represented in the enriched versus the non-enriched terms using a chi-squared test (threshold of P ≤0.05). Results are shown in Additional file 2 . Identification of non-HGT genes that are found in chordates and not in non-chordate metazoans The metazoan species used in the analysis (both the 40 studied and those with complete proteomes in the UniProt database) were placed into a phylogenetic, binary tree based on the NCBI taxonomy (Additional file 5 : Figure S5). This tree has six branchpoints between the origin of metazoa and the phyla in which the studied species are found (Arthropoda, Nematoda, Chordata), meaning that a minimum of six gene losses (one at each of these branchpoints) would be required for an HGT event occurring at the base of the phyla to appear to be HGT when in fact it was a result of gene loss. It must also be noted that as not all identified HGTs occur at the base of the phyla, as shown in Figure 1 , the number of required gene losses is greater for much of the HGT. 
For each studied primate species we identified all non-HGT (that is, native) genes that have been lost at least at these six branchpoints using a BLAST alignment to the metazoan database from UniProt. For each branchpoint where a loss must occur there are a varying number of species; if there were no matches with bitscore ≥100 to any proteins in these species then a gene loss was considered to have occurred in the relevant branch. These non-HGT genes were then analysed based on their GO terms, as done previously for the HGT genes (above), with the comparison made to non-HGT genes that did not have this pattern of loss. Introns To determine whether introns are present at significantly different rates in foreign vs. native genes, we compared the number of native genes with introns to the number of genes of each class of HGT with introns using a chi-squared test (threshold of p ≤0.05). Results are shown in Additional file 2 . Validation and discussion of methods is contained in Additional file 1 . Description of additional data files The following additional data are available with the online version of this paper. Additional file 1 contains validation and discussion of the methods used in this paper, as well as the legends for the other additional files. Additional file 2 is a table of HGT levels and analyses for all species. Additional file 3 is a table of the horizontally acquired genes in H. sapiens , D. melanogaster and C. elegans , listed by class. Additional file 4 is a table of H. sapiens genes previously identified as horizontally transferred. Additional file 5 contains the supplementary figures - Figure S1 shows the phylogenetic trees for the human genes discussed in the section ‘ Identification of new foreign genes and confirmation of previously reported examples ’. Figure S2 shows the amino-acid alignment between the C. elegans trehalose-phosphate synthase gene tps-1 and the D. melanogaster trehalose-phosphate synthase gene Tps1. 
Figure S3 shows the position of foreign genes on the D. melanogaster and C. elegans chromosomes. Figure S4 shows the workflow used to identify HGT. Figure S5 shows the simplified phylogenetic tree of species used in analysis. Figure S6 shows the phylogenetic trees for the six human genes originally labelled as horizontally acquired, and later rejected, which are reclaimed.
Many animals, including humans, acquired essential 'foreign' genes from microorganisms co-habiting their environment in ancient times, according to research published in the open access journal Genome Biology. The study challenges conventional views that animal evolution relies solely on genes passed down through ancestral lines, suggesting that, at least in some lineages, the process is still ongoing. The transfer of genes between organisms living in the same environment is known as horizontal gene transfer (HGT). It is well known in single-celled organisms and thought to be an important process that explains how quickly bacteria evolve, for example, resistance to antibiotics. HGT is thought to play an important role in the evolution of some animals, including nematode worms which have acquired genes from microorganisms and plants, and some beetles that gained bacterial genes to produce enzymes for digesting coffee berries. However, the idea that HGT occurs in more complex animals, such as humans, rather than them solely gaining genes directly from ancestors, has been widely debated and contested. Lead author Alastair Crisp from the University of Cambridge, UK, said: "This is the first study to show how widely horizontal gene transfer (HGT) occurs in animals, including humans, giving rise to tens or hundreds of active 'foreign' genes. Surprisingly, far from being a rare occurrence, it appears that HGT has contributed to the evolution of many, perhaps all, animals and that the process is ongoing, meaning that we may need to re-evaluate how we think about evolution." The researchers studied the genomes of 12 species of Drosophila or fruit fly, four species of nematode worm, and 10 species of primate, including humans. They calculated how well each of their genes aligns to similar genes in other species to estimate how likely they were to be foreign in origin. 
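The alignment comparison the researchers describe can be pictured as a simple score difference, sketched below. This is an illustration only: the function names and the 30-bit candidate threshold are assumptions for the example, not the study's published pipeline.

```python
# Hypothetical sketch of scoring a gene as 'foreign': compare its best
# BLAST bitscore outside the animal kingdom with its best score inside it.
def hgt_index(best_nonmetazoan_bitscore, best_metazoan_bitscore):
    """Higher values mean the gene aligns better to non-animal sequences,
    i.e. looks more 'foreign' in origin."""
    return best_nonmetazoan_bitscore - best_metazoan_bitscore

def is_hgt_candidate(best_out, best_in, h_threshold=30):
    # threshold assumed here for illustration
    return hgt_index(best_out, best_in) >= h_threshold
```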
By comparing with other groups of species, they were able to estimate how long ago the genes were likely to have been acquired. A number of genes, including the ABO blood group gene, were confirmed as having been acquired by vertebrates through HGT. The majority of the other genes were related to enzymes involved in metabolism. In humans, they confirmed 17 previously-reported genes acquired from HGT, and identified 128 additional foreign genes in the human genome that have not previously been reported. Some of those genes were involved in lipid metabolism, including the breakdown of fatty acids and the formation of glycolipids. Others were involved in immune responses, including the inflammatory response, immune cell signalling, and antimicrobial responses, while further gene categories include amino-acid metabolism, protein modification and antioxidant activities. The team were able to identify the likely class of organisms the transferred genes came from. Bacteria and protists, another class of microorganisms, were the most common donors in all species studied. They also identified HGT from viruses, which was responsible for up to 50 more foreign genes in primates. Some genes were identified as having originated from fungi. This explains why some previous studies, which only focused on bacteria as the source of HGT, originally rejected the idea that these genes were 'foreign' in origin. The majority of HGT in primates was found to be ancient, occurring sometime between the common ancestor of Chordata and the common ancestor of the primates. The authors say that their analysis probably underestimates the true extent of HGT in animals and that direct HGT between complex multicellular organisms is also plausible, and already known in some host-parasite relationships. The study also has potential impacts on genome sequencing more generally. Genome projects frequently remove bacterial sequences from results on the assumption that they are contamination. 
While screening for contamination is necessary, the potential for bacterial sequences being a genuine part of an animal's genome originating from HGT should not be ignored, say the authors.
10.1186/s13059-015-0607-3
Other
Did fall from tree kill famous human ancestor Lucy?
Nature, nature.com/articles/doi:10.1038/nature19332 Journal information: Nature
http://nature.com/articles/doi:10.1038/nature19332
https://phys.org/news/2016-08-fall-tree-famous-human-ancestor.html
Abstract The Pliocene fossil ‘Lucy’ ( Australopithecus afarensis ) was discovered in the Afar region of Ethiopia in 1974 and is among the oldest and most complete fossil hominin skeletons discovered. Here we propose, on the basis of close study of her skeleton, that her cause of death was a vertical deceleration event or impact following a fall from considerable height that produced compressive and hinge (greenstick) fractures in multiple skeletal elements. Impacts that are so severe as to cause concomitant fractures usually also damage internal organs; together, these injuries are hypothesized to have caused her death. Lucy has been at the centre of a vigorous debate about the role, if any, of arboreal locomotion in early human evolution. It is therefore ironic that her death can be attributed to injuries resulting from a fall, probably out of a tall tree, thus offering unusual evidence for the presence of arborealism in this species. Main It is rare when an early hominin fossil composed of multiple skeletal elements representing a single individual is discovered 1 , 2 , 3 , 4 , 5 , and rarer still when a cause of death can potentially be attributed to its remains 6 , 7 . A.L. 288-1, named Lucy and dated to 3.18 million years in age 8 , is represented by elements of the skull, upper limb, hand, axial skeleton, pelvis, lower limb, and foot, with some bilateral preservation ( Fig. 1a ), and is popularly described as 40% complete 9 . We studied the original fossil and computed tomographic (CT) scans of the skeleton to assess cause of death. Our observation that the skeleton is marked by post-mortem damage largely agrees with the original description 9 ; however, we differ from the original authors in proposing that a subset of fractures are likely to be perimortem and were produced by a vertical deceleration event, or a fall and impact from considerable height, and not by fossilization processes. Figure 1: Perimortem fractures in A.L. 
288-1 postcranial skeleton consistent with vertical deceleration event. a , Lucy. b , c , Right humerus ( b , top: stereo, superior, medial up; bottom: lateral; c , stereo, posterior) preserves valgus head-shattering four-part proximal fracture. d , Hinge and spiral fracture elevated, displaced, and fractured right midshaft humeral bone fragment (stereo, lateral; see b ). e , Head of left humerus (stereo, medial) is fractured and compressed inferomedially to override the neck. f , Fracture of right distal radius (posterior, stereo view). g , Fractures in sacrum (stereo, anterior) and left innominate just lateral to sacrum. Fractured superior pubic ramus also visible as is puncture hole (arrow). h , Left-lateral asymmetry of fractured sacrum (stereo, posterior) and fractured, elevated, and bent retroauricular surface of left innominate. i , Left femoral neck fractures (stereo, lateral at top). j , Superoposteriorly fractured epiphysis of left distal femur (stereo, anterior) in discovery state with lateral extent sheared superiorly along lateral edge of shaft. Central portion of anterodistal shaft fractured and secondarily driven into trabeculae. k , Fracture of right tibial plateau (stereo, superior, medial to right) with major fracture across medial condyle that with other fractures ( l ; stereo, anterior, medial to right) depress the plateau and add valgus cant to shaft. m , Proximal portion of right distal tibia (stereo, posteromedial, superior at top) preserves small bone fragments broken loose and driven into medullary canal at spiral shaft fracture. n , Fractures on talar articular surface of right distal tibia (stereo, anterior, medial to right) open onto anterodistal surface of shaft. o , Right talus neck fracture (stereo, superior, medial to right). Together, n and o are consistent with a pilon fracture. Red lines are fractures; green lines in g , h denote sacroiliac joint and transverse lines of sacrum. 
Specimens in g and h are casts because it was not practical to articulate the fossils, and j is a cast because the original specimen was reconstructed. Scale bars ( a , 50 mm; b – f , i – o , 10 mm; g , h , 20 mm) are approximate, given stereo photo parallax. See Extended Data Figs 1, 2, 3, 4, Supplementary Note 1, and Supplementary Videos 1, 2, 3, 4. Perimortem compressive fractures The most striking feature of the nearly complete right humerus (A.L. 288-1m) is that its proximal end is severely damaged 9 (Fig. 1b, c). Close examination shows that it underwent severe valgus head-shattering compression that drove the head fragments into the shaft, fracturing the greater and lesser tuberosities, and fracturing and dislocating a portion of the proximal shaft with the intertubercular groove (Fig. 1b, bottom). The shaft of the right humerus was found as multiple segments with generally tight fits. The two major segments conjoin near the midshaft, where a fragment of displaced cortical bone reveals that the shaft underwent a spiral fracture that operated in the same direction as the compressive fracture at the head (Fig. 1b, d, Supplementary Note 1, and Extended Data Figs 1, 2). Lucy’s right scapula (A.L. 288-1l) was found as three pieces with the major fragment preserving a complete and undamaged glenoid and neck along with a portion of the base of the coracoid process; the other two fragments preserve a short portion of the lateral border and the base of the acromion. This pattern matches that of the most common fractures of the scapula 10 . A fracture of the articular head, lesser tuberosity, greater tuberosity, and shaft of the humerus is classified as a four-part proximal humerus fracture 11 . Under natural conditions, this fracture is commonly caused by an impact following a vertical deceleration event when an accident victim consciously stretches out their arm in an attempt to break their fall.
Compressive contact between the hand and the ground impacts the humeral articular head against the glenoid which, with the glenoid acting as anvil 12 , fractures some or all of the components of the proximal humerus. This fracture leaves a unique signature on the humeral head and is common in two distinct populations: elderly people who have suffered a reduction in bone strength when even a fall from standing height onto an outstretched arm can fracture and sometimes compress the head into the greatly weakened shaft; and people with healthy bone strength who experience a fall from considerable height that in turn produces an impact with more powerful forces acting on the outstretched arm 13 . A 3D reconstruction of the right humerus based on CT data illustrates how Lucy’s articular head and shaft were compressively fractured ( Supplementary Note 2 and Supplementary Video 1 ). Lucy’s left proximal humerus (A.L. 288-1r) is largely complete but damage at the head reveals that it too suffered a compressive fracture ( Fig. 1e ). The general pattern is similar to that seen in the much more extensively fractured right humerus but it is less severely damaged ( Supplementary Note 1 ). These humeral fractures were long thought to have occurred post-mortem, but their close match to clinical cases 11 , 12 , 13 ( Table 1 ) suggests instead that they represent perimortem injuries. The fracture edges are sharp and clean, and bone fragments along with tiny bone slivers (<1 mm) of the severely shattered right articular head and shaft ( Fig. 1b–d and Extended Data Figs 1 , 2 ) and left proximal humerus ( Fig. 1e ) are preserved in their post-injury positions. 
This evidence suggests that the compressive impact event occurred while the periosteum and joint capsule were intact; if these fractures had occurred after the periosteum and joint capsule had decomposed and when the bone was dry, it is likely that the slivers and fragments would have been dispersed onto the surface of the ground or into the soil. Additional evidence for perimortem fractures is found in the fact that there is no evidence of healing along any of these sharp fracture edges (Supplementary Note 3). Although compressive bilateral proximal humerus fractures are not common, under natural conditions these fractures are usually associated with high energy trauma resulting from an impact on outstretched arms 14 . The fact that these fractures commonly occur when an accident victim actively abducts and stretches out their arms in an attempt to break their fall suggests that Lucy was conscious at the time of impact, offering additional support to the hypothesis that this was a perimortem event, with Lucy suffering a more severe impact on her right side. Table 1 Fractures in Lucy’s skeleton consistent with a vertical deceleration event The presence of bilateral proximal humerus fractures leads us to hypothesize that some of the other compressive fractures in Lucy’s skeleton, and especially those at major joints, also occurred perimortem and can be attributed to a severe impact. Compressive fractures in the left femur (A.L. 288-1ap) are especially informative (Supplementary Note 1). There are two subparallel fractures in the neck oriented in a roughly parasagittal plane (Fig. 1i). The basicervical (narrower) and transcervical (wider) fractures are both widest in their superior aspect and wrap anteriorly and posteriorly around the neck to terminate inferiorly, slightly offsetting the head in an inferior direction.
The location and orientation of these fractures suggest that when the pelvis and femur were articulated, a compressive force acted at the hip to drive the acetabulum and femoral head against one another, thereby fracturing the neck. The fractured fragments of the distal left femur ( Fig. 1j ) were separated from one another to reconstruct this element, so the following description is based on a cast of the bone in its discovery state ( Supplementary Note 1 ). This region was severely compressively fractured. The articular surface of the lateral condyle was shattered with the dislocated fragments forming a step along its lateral edge, while large cracks (subsequently closed in the reconstruction) separated fragments of the medial condyle’s fractured articular surface. The entire femoral epiphysis was separated and compressively driven into the distal shaft in a superoposterior direction. The shaft overrides the superior articular surface of medial condyle, the superior edge of the patellar surface, and the superior extent of the articular surface of the lateral condyle, with the lateral condyle and epicondyle and a portion of the lateral shaft also fractured and compressively sheared superolaterally along the edge of the shaft. The extent of shaft override and lateral shear is apparent in Fig. 1j and especially in the left image of the stereo pair, where the shadow cast by the fractured edge can be seen around the anterodistal circumference of the shaft. The compressive fractures at both the femoral neck and distal femur appear to have occurred perimortem when the foot impacted the ground following a fall, with the force acting along the long axis of the leg. We hypothesize that the tibial plateau acted as a punch when it impacted the epiphysis, in a manner similar to how the glenoid acted as an anvil at the proximal humerus. 
Although no left tibia was recovered, we tested this idea by articulating the femoral epiphysis with a 3D printout of Lucy’s right proximal tibia (A.L. 288-1aq) mirrored as a left, and the shape of the tibial plateau matches both the contour and dimensions of the indentation preserved on the epiphysis and shows a twist to the right ( Supplementary Note 1 ). The superoposterior orientation of the dislocated epiphysis suggests that the leg was hyperextended during impact. The impact also apparently produced the fractures in the femoral neck. The femoral fragments, and especially the small condylar fragments, remain in their original fractured positions and have sharp, clean edges that show no evidence of healing. As suggested for the humeri, if this fracture had occurred post-mortem on dry bone after the joint capsule and periosteum had decayed, it seems likely that the small fragments would have been dispersed onto the surface of the ground or into the soil. Together, these observations offer additional support for the hypothesis that these compressive fractures occurred perimortem while the tibia and femur were articulated and the joint capsule and periosteum intact. Although no left tibia was recovered, fractures in Lucy’s right tibia ( Table 1 , Fig. 1k–n , and Supplementary Note 1 ) are consistent with an impact following a fall from height and we propose that the left tibia would be in a similar condition if it were to be discovered. Additional compressive and hinge (greenstick) fractures are preserved in the forearms, lower limbs, pelvis, thorax (including the rarely fractured first rib), and skull ( Fig. 1 , Supplementary Note 1 , Table 1 , Extended Data Figs 3 , 4 , and Supplementary Videos 2 , 3 ). As seen with the humeri and femur, small bone fragments with sharp, clean edges and no evidence of healing often remain in their original fractured positions, again suggesting that these fractures occurred perimortem. 
The pattern is consistent with clinical presentations of fractures produced by a severe impact following a fall. Mechanisms of bone fracture There are other mechanisms, in addition to vertical deceleration events, that can fracture bone; these include collisions between a body and moving or stationary objects during floods, violent contact with animals, and even tetanic muscle contractions produced by seizures or lightning strikes ( Supplementary Note 4 ). However, these other mechanisms are uncommon and do not generally produce fractures by compression along the long axis of the bone (although some of these latter mechanisms can cause a fall that in turn generates compressive fractures). Lucy’s fractures most closely resemble and are consistent with those seen in patients who have suffered a vertical deceleration event from considerable height that in turn produces concomitant fractures across the skeleton 15 , 16 , 17 , 18 . Hadar palaeohabitats and inferred tree use Given this combination of evidence, the question remains as to how Lucy could have achieved the height necessary to produce the high velocity fall and impact required to fracture her skeleton so severely. One of the most vigorously debated questions in palaeoanthropology has been the role, if any, of arboreal locomotion in early hominin evolution and especially in Lucy’s species, Australopithecus afarensis 19 , which demonstrates convincing adaptations to terrestrial bipedalism. Given Lucy’s small size, she, like many small primates, probably sought nightly refuge in trees 20 and possibly foraged there, so it is reasonable to assess whether trees were available to her. Palaeohabitat reconstructions based on fossil mammals 21 , fossil pollen 22 , and palaeopedology and δ 18 O and δ 13 C analysis of palaeosol carbonates 23 have led to the conclusion that Hadar, where Lucy was found, was a grassy woodland with sizable trees. 
Lucy was found in a sandstone deposited as a distributary crevasse-splay channel over a low-relief area of the floodplain 24 ; this shallow channel was probably associated with one of the larger channel systems in the lower Kada Hadar Member 25 , which contain large root casts and tree trunks 24 (Supplementary Note 5). Channels and crevasse-splays are usually heavily vegetated along their banks, and together these data show that trees were common at Hadar. If Lucy sought out trees for food and nightly nesting sites, other similar-sized primates, such as chimpanzees, offer important information about the heights to which she might have climbed. Chimpanzees forage widely through the canopy, and at Kibale log a daily average of 3–5 climbing bouts to heights of 95–135 m in fruit trees 26 . The typical heights of chimpanzees’ sleeping nests in savanna habitat study sites range from means of 8.3 to 21.0 m (mean 13.71 m, n = 10) whereas in forested habitat study sites the means range from 7.1 to 23.2 m (mean 13.08 m, n = 31) (Table 2) 27 . If a building storey is taken to be about 3 m, these mean heights equate to four- to five-storey buildings, with the range of means varying from nearly three to seven storeys, and maximum heights of about 16 storeys; these are considerable heights. Table 2 Free fall velocity and energy from tree nest height Falls from height Chimpanzees and some modern humans move and forage in trees and sometimes experience skeletal trauma and death from falls (Supplementary Note 6). Even though chimpanzee foraging heights frequently exceed sleeping nest heights 28 , nest data offer a conservative approach for assessing whether skeletal trauma and death are likely to result from a fall at these heights.
Average velocities ( Table 2 ) for unimpeded free falls from nest height means reach nearly 60 km h −1 , and the associated energies 29 ( Table 2 ) are within the range known to cause fatal impacts in humans 30 , with the generally lower energies estimated for Lucy resulting from her lower body mass 31 ( Supplementary Note 7 ). Impacts from these heights often produce a wide range of concomitant fractures in humans 15 , 16 , 17 , 18 , 30 similar to those preserved in Lucy’s skeleton. Such falls usually severely damage internal organs because they too decelerate upon impact and can be penetrated by broken bones, damaged by compression between the sternum and spine, and experience a ‘hydraulic ram effect’ in which abdominal organs are thrust upwards to produce cardiac damage 16 . Scenario for Lucy’s perimortem fractures The pattern of compressive and hinge fractures, palaeohabitat reconstruction, sedimentology of the discovery site, and consistency with clinical cases lead us to propose that the following scenario is the most likely of the various possible injury mechanisms: Lucy fell out of a tall tree at or in proximity to the distributary crevasse splay channel where her remains were found. Given the severity of the fractures, it is likely that the impact occurred on a hard surface, perhaps the dry bed of the channel itself, which would represent a near-zero stopping distance, thereby maximizing the transfer of energy produced by the fall. The body appears to have experienced minimal transport and rapid burial after death in order to retain the relative positions of the small fractured bone fragments. The location and severity of the fractures suggest that impact progressed from the feet and legs to the hip, arms, thorax, and head ( Fig. 2 and Supplementary Video 4 ). Concomitant fractures and organ damage are witnessed in the most severe clinical cases and together contribute to the death of the victim 15 , 16 , 17 , 18 . 
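The free-fall figures quoted from Table 2 can be sanity-checked with elementary mechanics (v = √(2gh), E = mgh). The sketch below is a back-of-envelope illustration, not the paper's calculation, and the 27 kg body mass plugged in for Lucy is an assumed round figure for the example.

```python
# Back-of-envelope check of impact velocity and energy for an unimpeded
# free fall from mean chimpanzee nest height; the 27 kg mass for Lucy is
# an assumption for illustration only.
from math import sqrt

G = 9.81        # gravitational acceleration, m s^-2
HEIGHT = 13.7   # mean savanna-site nest height, m (cf. Table 2)
MASS = 27.0     # assumed body mass, kg

velocity = sqrt(2 * G * HEIGHT)   # impact speed, m s^-1, ignoring drag
velocity_kmh = velocity * 3.6
energy = MASS * G * HEIGHT        # potential energy released by the fall, J

print(f"{velocity_kmh:.0f} km/h, {energy:.0f} J")  # -> 59 km/h, 3629 J
```

The roughly 59 km h−1 result agrees with the 'nearly 60 km h−1' average velocity cited above.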
Although the fractures in Lucy’s humeri provide evidence that she was conscious when she stretched out her arms in an attempt to break her fall, the severity of the numerous compressive fractures and presumed organ damage suggest that death followed swiftly. Figure 2: Reconstruction of Lucy’s vertical deceleration event. We hypothesize that Lucy fell from a tall tree, landing feet-first and twisting to the right, with arrows indicating the sequence and types of fractures. a , Pilon fracture, tibial plateau fracture, and spiral shaft fracture of right tibia. b , The impact of hyperextended left knee drove the distal femoral epiphysis into the distal shaft, and fractured the femoral neck and possibly the acetabulum, sacrum, and lumbar vertebra. c , The impact of the knee drove the patella into the centre anterodistal surface of the femoral shaft. d , Impact on the right hip drove the right innominate into the sacrum, and the sacrum into the left innominate, dislocating and fracturing the sacrum and left innominate, and elevating the retroauricular surface. e , Lucy was still conscious when she stretched out her arms in an attempt to break her fall and fractured both proximal humeri, the right more severely than the left with spiral fracture near the midshaft, a Colles’ (or Smith’s) fracture of the right radius, and perhaps other fractures of the radii and ulnae. The impact depressed and retracted the right scapula, which depressed the clavicle into the first rib, fracturing both. f , Frontal impact fractured the left pubis and drove a portion of the anterior inferior pubic ramus posterolaterally, and a branch or rock possibly created the puncture mark on the pubis. g , The impact of the thorax fractured many ribs and possibly some thoracic vertebrae. h , The impact of the skull, slightly left of centre, created a tripartite guardsman fracture of the mandible and cranial fractures. See Supplementary Methods and Supplementary Video 4 . 
Discussion Although most hominin fossils are fragmentary and broken because of a complex post-mortem history, skeletal elements sometimes preserve evidence of antemortem or perimortem fractures and injuries 5 , 6 , 7 . When examining fossil taxa, such as Australopithecus afarensis 3 , 19 , that appear to have practiced both terrestrial and arboreal locomotion, we suggest that the adaptations that facilitated bipedal terrestrial locomotion compromised the ability of individuals to climb safely and efficiently in the trees; this combination of features may have predisposed these taxa to more frequent falls from height. Close inspection of other fossil specimens for antemortem or perimortem fractures (Supplementary Note 8) has the potential to offer important information about their lifestyles through an understanding of the trauma that they suffered and the mechanisms by which they died. Methods CT scanning of A.L. 288-1 All scans were performed at the University of Texas High-Resolution X-ray CT Facility with procedures described in ref. 32 , using a FeinFocus FXE 225 kV X-ray source and image intensifier detector captured by a 1024 × 1024 CCD camera. Samples were held in place by custom foam mounts within Plexiglas containers, and the X-ray signal was calibrated using empty containers. X-ray settings were 180 kV and 0.175–0.180 mA, with an estimated focal spot size of ~40 μm, and no beam filtration was used. Scanning parameters were optimized for each piece based on size, and in some cases multiple pieces were scanned simultaneously. During each turntable rotation, 1,200 views (projections) were acquired to obtain raw data for 25 slices. Raw data were reconstructed as 16-bit TIFF images. Beam hardening corrections were performed during reconstruction using polynomial linearization 33 , with coefficients selected independently for each scan owing to variations in mineralization.
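Polynomial linearization remaps each measured attenuation value through a per-scan polynomial so that attenuation grows linearly with path length through the material. The sketch below is a generic illustration of that idea only; the coefficients are invented, not those used for these scans.

```python
# Generic beam-hardening correction by polynomial linearization:
# remap each projection value p to sum(c_i * p**i) with fitted coefficients.
def linearize(p, coeffs):
    """Evaluate the correction polynomial at attenuation value p."""
    return sum(c * p ** i for i, c in enumerate(coeffs))

# Example with made-up coefficients applied to one projection of values:
projection = [0.0, 0.5, 1.0, 1.5]
corrected = [linearize(p, (0.0, 1.0, 0.08)) for p in projection]
```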
Ring artefacts were corrected either pre- or post-reconstruction 34 . Reconstruction scaling of CT numbers was reduced for pieces that had highly attenuating mineralization, probably oxides or sulfides, to avoid information loss from voxel saturation. See Extended Data Table 1 for acquisition parameters, data voxel dimensions, and scaling and artefact processing parameters. 3D reconstruction Data volumes were loaded into Avizo (FEI) in order to produce the 3D element and segment the individual bone fragments along fracture planes, with each fragment saved as an .stl file. These files were imported into Autodesk’s Maya and were repositioned, reoriented, and aligned along fracture planes to reconstruct the element. A 3D scan of the left proximal humerus mirrored as a right was used as an approximate template for reconstructing the right proximal humerus. The keyframe function in Maya was used to move each individual fragment from its reconstructed ‘before’ position to its discovery ‘after’ position in order to recreate the progression of the impact injury (Extended Data Figs 1, 2 and Supplementary Video 1). Digital photography Digital photographs of the original fossils and replica casts were taken against a black felt or velvet background and imported into Adobe Photoshop CS5.1 Extended at full resolution. The element was isolated with the lasso tool and cut and pasted onto a solid colour background, with stereo views composed of elements in different layers. Tracings of the fractures were made in separate layers with Photoshop’s pencil or lasso and fill tools.
The famous human ancestor known as Lucy walked the Earth, but it was her tree climbing that might have led to her demise, a new study suggests. An analysis of her partial skeleton reveals breaks in her right arm, left shoulder, right ankle and left knee—injuries that researchers say resulted from falling from a high perch such as a tree. Lucy likely died quickly, said John Kappelman, an anthropologist at the University of Texas at Austin, who published the findings Monday in the journal Nature. "I don't think she suffered," Kappelman said. But several other researchers, including Lucy's discoverer, disagree. They contend most of the cracks in Lucy's bones are well documented and came after her death from the fossilization process and natural forces such as erosion. How Lucy met her end has remained a mystery since her well-preserved fossil remains were unearthed more than four decades ago. Her discovery was significant because it allowed scientists to establish that ancient human ancestors walked upright before evolving a big brain. Lucy was a member of Australopithecus afarensis, an early human species that lived in Africa between about 4 million and 3 million years ago. The earliest humans climbed trees and walked on the ground. Lucy walked upright and occasionally used her long, dangling arms to climb trees. She was a young adult when she died. This Aug. 14, 2007, file photo shows a three-dimensional model of the early human ancestor, Australopithecus afarensis, known as Lucy, on display at the Houston Museum of Natural Science. It's a scientific estimation of what Lucy may have looked like in life. A new study based on an analysis of Lucy's fossil by the University of Texas at Austin suggests she died after falling from a tree. Several scientists, including Lucy's discoverer, reject that she plunged to her death from a tree. 
(AP Photo/Pat Sullivan, File) Tim White, a paleoanthropologist at the University of California, Berkeley, called the study's conclusion a "misdiagnosis." The Texas researchers "appear to have focused only on the cracks that they could attribute to an imagined fall, ignoring the additional abundant cracks," White said in an email. The split highlights the difficulty of pinpointing a cause of death from fossilized remains. Scientists rarely know how early humans died because skeletons are incomplete and bones tend to get crushed under sand and rocks. Over the years, Lucy's discoverer Donald Johanson has tried to solve the mystery. Lucy's skeleton, which is 40 percent complete, was recovered in Ethiopia in what was an ancient lake near fossilized remains of crocodiles, turtle eggs and crab claws. "There's no definitive proof of how she died," said Johanson of Arizona State University. The Texas team examined Lucy's bones and used high-tech imaging. Kappelman said the scans revealed multiple broken bones and no signs of healing, suggesting the injuries occurred around the time of death. He reconstructed her final moments: The 3-foot-6-inch (1.06-meter) Lucy fell from at least 40 feet and hit the ground at 35 mph. She landed on her feet before twisting and falling. Such an impact would have caused internal organ damage. Fractures on her upper arms suggest she tried to break her fall. Kappelman theorized that Lucy's walking ability may have caused her to be less adept at climbing trees, making her more vulnerable to falling from heights.
UT Austin professor John Kappelman with 3-D printouts of Lucy's skeleton illustrating the compressive fractures in her right humerus that she suffered at the time of her death 3.18 million years ago Credit: Marsha Miller Not everyone agrees that her tree-climbing skills were lacking. Other scientists point out that there have been documented falls by chimpanzees and orangutans, which spend more time in trees than Lucy's species. "Without a time machine, how can one know that she didn't just get unlucky and fall?" William Harcourt-Smith of the American Museum of Natural History said in an email. This undated photo provided by the University of Texas at Austin shows the distal radius - a wrist bone - of Lucy, a fossil specimen of an early human ancestor, Australopithecus afarensis, undergoing computed tomographic scanning at the university in Austin, Texas. (Marsha Miller/University of Texas at Austin via AP)
nature.com/articles/doi:10.1038/nature19332
Space
Measuring gamma-ray bursts' hidden energy unearths clues about the evolution of the universe
Yuji Urata et al, Simultaneous radio and optical polarimetry of GRB 191221B afterglow, Nature Astronomy (2022). DOI: 10.1038/s41550-022-01832-7 Journal information: Nature Astronomy
https://dx.doi.org/10.1038/s41550-022-01832-7
https://phys.org/news/2022-12-gamma-ray-hidden-energy-unearths-clues.html
Abstract Gamma-ray bursts (GRBs) are the most luminous transients in the universe and are utilized as probes of early stars, gravitational wave counterparts and collisionless shock physics. In spite of studies on polarimetry of GRBs in individual wavelengths that characterized intriguing properties of prompt emission and afterglow, no coordinated multi-wavelength measurements have yet been performed. Here we report the first coordinated simultaneous polarimetry in the optical and radio bands for the afterglow associated with the typical long GRB 191221B. Our observations successfully caught the radio emission, which is not affected by synchrotron self-absorption, and show that the emission is depolarized in the radio band compared with the optical one. Our simultaneous polarization angle measurement and temporal polarization monitoring indicate the existence of cool electrons that increase the estimate of jet kinetic energy by a factor of more than 4 for this GRB afterglow. Further coordinated multi-wavelength polarimetric campaigns would improve our understanding of the total jet energies and magnetic field configurations in the emission regions of various types of GRBs, which are required to comprehend the mass scales of their progenitor systems and the physics of collisionless shocks. Main Gamma-ray burst (GRB) 191221B was detected on 21 December 2019, 20:39:13 UT, and its X-ray afterglow was rapidly identified by the Neil Gehrels Swift Observatory 1 . The optical afterglow was discovered by the MASTER auto-detection system 2 . Optical polarization with a possible time evolution in the early afterglow phase was also detected by the Southern African Large Telescope (SALT) and Very Large Telescope (VLT) (Extended Data Table 1 ) 3 . The redshift was measured as z = 1.148 based on metal absorption lines in the optical afterglow observed by VLT/X-shooter 4 . 
The isotropic equivalent energy of E γ ,iso = (3.6 ± 0.4) × 10 53 erg and the rest-frame peak energy of the time-integrated spectrum \({E}_{{{{\rm{peak}}}}}^{{{{\rm{src}}}}}=810\pm 65\) keV were derived by the Konus–Wind observation (with the standard cosmological parameters, the Hubble constant H 0 = 67.3 km s −1 Mpc −1 , the matter density parameter Ω M = 0.315 and the dark energy density parameter Ω Λ = 0.685) 5 . The duration of the prompt emission in the 15–350 keV band is 48.0 ± 16.0 s (ref. 6 ). These prompt emission properties obey the empirical \({E}_{{{{\rm{peak}}}}}^{{{{\rm{src}}}}}-{E}_{\gamma ,{{{\rm{iso}}}}}\) correlation (Extended Data Fig. 1 ) and indicate that GRB 191221B is one of the typical long GRBs. The first semi-simultaneous polarimetry for the afterglow between the millimetre and optical bands was conducted 2.5 days after the burst using the Atacama Large Millimetre/submillimetre Array (ALMA) and the VLT (Fig. 1 ). The VLT observation measured a linear polarization degree (PD) of 1.3 ± 0.2% (here we employed the systematic errors of 0.1% reported by ref. 7 and the range with 3 σ confidence level is 0.9−1.8%) with a polarization angle (PA) of 61.6 ± 6.3° at the R band. Hereafter, we quote 1 σ errors for our measurements unless otherwise noted. The low dust extinction and the Serkowski law 8 indicate an intrinsic origin of the polarization ( Methods , Extended Data Figs. 2 – 4 and Extended Data Table 2 ). The PD is consistent with other optical afterglows (an average of 1.6% across 84 polarimetric measurements) 9 . The ALMA observation placed an upper limit on the PD of 0.6% with a 3 σ confidence level at 97.5 GHz. The detection in the Stokes U maps and non-detection in the Stokes Q maps (Extended Data Fig. 5 ) constrained the range of PA to be 37.7−52.3° with a 1 σ confidence level. Therefore, this simultaneous polarimetry between the optical and radio bands indicates depolarization in the radio band.
The significantly low upper limit is also consistent with the first detection of linear polarization in a GRB radio afterglow (that is, 0.2% for the low-luminosity GRB 171205A) 10 . Fig. 1: Spectral flux distribution and polarization spectrum of GRB 191221B afterglow at 2.5 days. a , Spectral flux distribution (red points). The black dotted line is the forward shock model fit to the observed data. b , PDs at 97.5 GHz (3 σ upper limit) and optical R band (red points), and polarization spectrum of the simple one-zone model (grey dashed line), the plasma-scale magnetic field model (purple dash-dotted line) and the cool electron model (green solid line). c , PAs at 97.5 GHz (1 σ range) and optical R band (red points). The observed difference of PAs with ~90% confidence level (that is, 16.6 ± 9.6°) supports the cool electron model. The plasma-scale magnetic field model predicts a constant PA over the frequencies (for example, purple dash-dotted line). All error bars represent 1 σ uncertainties. The synchrotron self-absorption (SSA) effect, which suppresses polarization below the SSA frequency ( ν a ), is not a reliable explanation for the observed depolarization. ALMA observations at 97.5 GHz, 145 GHz and 203 GHz (Fig. 2 , Table 1 and Extended Data Table 3 ) show that the light curve at the 97.5 GHz band exhibited a broken power-law evolution with power-law indices of α = 0.26 ± 0.02 (before the break) and α = −1.62 ± 0.13 (after the break), and a break time of 3.77 ± 0.35 days, where and hereafter we describe the temporal and spectral properties of the flux density as F ν ∝ t α ν β (where t is the time since the burst in days and ν is the frequency). The multi-frequency measurements (Fig. 3 ) showed that the spectral slope changed from a positive power-law index of β ≈ 0.602 ± 0.007 at 1.5 days and β ≈ 0.32 ± 0.15 at 2.5 days to a negative one ( β ≈ −0.7) at 9.5 and 18.4 days.
These spectral slopes are in disagreement with the SSA effect, which leads to β = 2 (refs. 11 , 12 ). Fig. 2: Radio afterglow light curve of GRB 191221B with the simultaneous optical (R band) polarimetric observation. The red dashed line indicates the best-fitting smoothly connected broken power-law function for the 97.5 GHz light curve. The radio light curves and the optical R band photometric measurement are described by the standard forward shock synchrotron radiation model. Differences in the early optical afterglow (green small circles) and its wiggles may be caused by the magnitude-to-flux conversion of optical observations made with the very broad-band clear filter. The forward shock model describes the passing of the synchrotron spectral peak over the ALMA observing band around 4 days, which is consistent with the observed spectrum change between 2.5 and 9.5 days (Fig. 3 ). All error bars represent 1 σ uncertainties. Table 1 Radio polarization observing log. Measurements with no special notation are summarized with 1 σ errors. Fig. 3: Spectral flux distributions of the GRB 191221B afterglow at 1.5, 2.5, 9.5 and 18.4 days after the GRB. The photometry with high signal-to-noise characterized the spectral slope β as 0.602 ± 0.007 at 1.5 days, 0.32 ± 0.15 at 2.5 days and −0.7 at 9.5 and 18.4 days, respectively. The change of spectral indices from positive to negative indicated the passing of the spectral peak frequency through the radio band. All error bars represent 1 σ uncertainties.
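The smoothly connected broken power-law fit referred to in the Fig. 2 caption has a standard closed form. A minimal sketch, assuming the common smoothing parameterization (the normalization f_b and the smoothness s below are illustrative choices, not values from the paper):

```python
import numpy as np

def broken_power_law(t, f_b, t_b, a1, a2, s=5.0):
    """Smoothly connected broken power law: F ~ t**a1 for t << t_b and
    F ~ t**a2 for t >> t_b; s sets the sharpness of the transition and
    f_b the flux scale near the break time t_b."""
    t = np.asarray(t, dtype=float)
    return f_b * ((t / t_b) ** (-s * a1) + (t / t_b) ** (-s * a2)) ** (-1.0 / s)

# Indices and break time reported for the 97.5 GHz light curve
a1, a2, t_b = 0.26, -1.62, 3.77
times = np.array([0.5, 1.5, 2.5, 9.5, 18.4])   # days since the burst
model_flux = broken_power_law(times, 1.0, t_b, a1, a2)
```

Far from the break the model reduces to the two pure power laws quoted in the text, so the fitted indices can be read off directly from the asymptotic slopes.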
These afterglow properties are instead reproduced by the standard model 11 , 12 of optically thin synchrotron emission from an expanding shock in a uniform density medium with an isotropic energy that increases with time owing to long-lived activity of the central engine, E iso = 9.4 × 10 52 t 0.25 erg, an ambient medium density n = 5.9 cm −3 , a fraction of shocked energy transferred to non-thermal electrons ϵ e = 6.5 × 10 −2 , that to magnetic field ϵ B = 1.2 × 10 −2 , the energy spectral slope of non-thermal electrons p = 2.4, the jet opening half-angle θ j = 2.6° and the viewing angle θ v = 1.9° ( Methods ). The model lines in the top panels of Figs. 1 and 2 are the results of our numerical calculations of synchrotron flux, taking account of the equal observed times of photons 13 , 14 . The model also explains the X-ray and optical afterglows (Extended Data Fig. 6 ). The peak frequency at ~200 GHz in the top panel of Fig. 1 is the synchrotron frequency of minimum-energy electrons ν m , and the temporal and spectral changes around t ≈ 4 days are found to be consistent with the crossing of ν m at the observed frequencies. The energy scale is comparable to the observed gamma-ray energy E γ ,iso , and the micro-physical parameters are also consistent with other cases 15 . The deviation of the observed radio flux from the model light curve at 1.5 days (Fig. 2 ) and its slightly hard spectrum (Fig. 3 ) would imply an additional emission component, but it is negligible at t ≥ 2.5 days (see Methods for more discussion). If the magnetic field is ordered in the emitting shocked region, the PD of synchrotron emission is ≃ 70% at ν > ν m while it is 50% at ν < ν m , and the polarization direction is perpendicular to both the magnetic field direction and the line of sight in the comoving frame, where the electron momentum distribution is assumed to be isotropic 16 .
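The ≃70% figure quoted for a fully ordered field at ν > ν m follows from the standard synchrotron result PD = (p + 1)/(p + 7/3). A quick numerical check for the fitted electron index p = 2.4:

```python
def max_synchrotron_pd(p):
    """Linear PD of optically thin synchrotron emission from a power-law
    electron distribution (index p) in a uniform, ordered magnetic field,
    valid at frequencies above nu_m (below nu_m the PD is 50%)."""
    return (p + 1.0) / (p + 7.0 / 3.0)

pd_max = max_synchrotron_pd(2.4)   # ~0.72, i.e. the ~70% quoted in the text
```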
Usually, however, the magnetic field in the shocked region is tangled through its amplification process from the field of the ambient medium by some type of instability 17 , 18 and, therefore, the net polarization of observed synchrotron emission is reduced, depending on the magnetic field configuration in the visible region. One may consider a simple one-zone model in which the PDs at various frequencies, that is, ν > ν m and ν < ν m , are reduced by the same factor 19 , 20 , 21 (the grey dashed line in Fig. 1b ), but this model is ruled out by the observed PD data in the radio and optical bands. One of the most actively discussed magnetic field amplification processes is the Weibel instability, which occurs at relativistic collisionless shocks and generates strong magnetic fields with random directions on a plasma skin depth scale 17 , 19 , 22 , 23 . In this case, the field component parallel to the shock plane may be dominant. This anisotropy results in a sizable PD at each position of the shock, although the field is tangled on the tiny scale 24 , 25 . We numerically calculated the net linear polarization in various frequencies based on the synchrotron emission model explained above (see Methods for more details). As shown in the middle panel of Fig. 1 , the PD at ν ≲ ν m is much lower than that at ν > ν m since the surface brightness distribution is significantly non-uniform at the frequencies ν > ν m (refs. 13 , 14 ). This property can be consistent with the data. However, this model has a clear prediction that the PAs at ν > ν m and ν ≲ ν m are the same or 90° different 14 . The difference in the observed PAs at the radio and optical bands (the bottom panel of Fig. 1 ) does not support this model. The temporal evolution of PD is also incompatible with this model (Extended Data Fig. 6 ). Another possible process of magnetic field amplification is magnetohydrodynamic instabilities at the shock. 
These include the Richtmyer–Meshkov instability, which occurs in the case of an ambient medium with inhomogeneous density 18 , 26 . In this case, the magnetic field directions in the shocked region can be random mainly on hydrodynamic scales comparable to the typical width of the bright region in the shock downstream, and the internal Faraday depolarization can be significant in the radio band if the number of non-thermal electrons is a fraction f (<1) of the total shocked electrons 21 . The fraction 1 − f of the total shocked electrons remains so cool that it causes Faraday depolarization of the emission from the non-thermal electrons. The true isotropic energy is E iso / f and the true fraction of non-thermal electron energy is ϵ e f in this case 27 . We calculate the PD in the one-zone model following the procedure in Sokoloff et al. 28 , and plot the model in the middle panel of Fig. 1 (see more details in Methods ). To explain the observed PDs, the Faraday rotation in the shocked region should be significant at ν < 100 GHz. This leads to an upper limit f ≲ 0.3. The difference in the surface brightness distributions in the optical and radio bands or the contribution from the ordered magnetic field 28 explains the observed difference in PAs. Our observations achieved simultaneous polarimetry between the millimetre and optical bands for the typical long GRB 191221B. The multi-frequency observations of the afterglow were also described by the standard model. The measured radio PD was significantly lower than the optical one. Two plausible models that provide new insights into GRB science were considered for the origin of the polarization. The measured PAs and the PD temporal evolution indicated that the Faraday depolarization was caused by cool electrons in the hydrodynamic-scale magnetic field. Our observations establish a new methodology for revealing the total jet energies and collisionless shock physics of various types of GRBs.
If f is very small for low-luminosity GRBs and/or short GRBs, their true total jet energies are much larger than the current estimates, which may increase their neutrino production rates 29 , 30 and the lower bound of total explosion energies or mass scales of progenitor systems. Methods ALMA and Atacama Compact Array observations A total of 11 epochs of radio observations were conducted using the ALMA and Atacama Compact Array (Table 1 and Extended Data Table 3 ). Four epochs (0.5, 1.5, 2.5 and 9.5 days) of observations were performed with the polarization mode at 97.5 GHz (that is, band 3). Multi-frequency observations were managed with the photometry mode at 145 GHz among 5 of the 11 epochs. At 1.5 and 2.5 days, two additional photometry observations at 203 GHz were also conducted. Semi-simultaneous optical polarimetry was also performed 2.5 days after the GRB using the VLT. Regarding the ALMA calibrations, the bandpass and flux were calibrated using observations of J1037-2934, and observations of J1036-3744 were used for the phase calibration. The polarization calibration was performed by observing J1058+0133. The raw data were reduced at the East Asian ALMA Regional Center using CASA (v.5.6.1) 31 . We further performed interactive CLEAN deconvolution imaging with self-calibration. The Stokes I , Q and U maps were CLEANed with an appropriate number of CLEAN iterations after the final round of self-calibration. The off-source root mean square (r.m.s.) levels in I , Q and U are consistent with the expectations for thermal noise alone. The quantities that can be derived from the polarization maps are the polarized intensity ( \(\sqrt{{Q}^{2}+{U}^{2}}\) ), PD ( \(100\sqrt{{Q}^{2}+{U}^{2}}/I\) %) and polarization position angle ( \(1/2\arctan (U/Q)\) ).
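The polarization quantities defined at the end of the paragraph above follow directly from the Stokes values. A minimal sketch (no debiasing of the positive-definite polarized intensity is applied here):

```python
import math

def polarization_from_stokes(i, q, u):
    """Return (polarized intensity, PD in %, PA in degrees) from Stokes
    I, Q, U, following the definitions in the text: sqrt(Q^2 + U^2),
    100*sqrt(Q^2 + U^2)/I and 0.5*arctan(U/Q)."""
    p_int = math.hypot(q, u)
    pd = 100.0 * p_int / i
    pa = 0.5 * math.degrees(math.atan2(u, q))  # atan2 resolves the quadrant
    return p_int, pd, pa

# A pure Stokes U signal corresponds to a PA of 45 degrees
p_int, pd, pa = polarization_from_stokes(1.0, 0.0, 0.013)
```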
By applying the polarization calibration to the phase calibrator J1036-3744 and creating Stokes maps for 6, 9 and 18 epochs during the 3 h observing period, we confirm that the stability of linear PD is <0.02%, which is consistent with the systematic linear polarization calibration uncertainty of 0.033% for compact sources. The Atacama Compact Array data were flagged, calibrated and imaged with standard procedures with CASA (v.5.6.1). The bandpass and flux were calibrated using observations of J1058+0133 and J1107-4449. Observations of J1018-3123 were used for the phase calibration. VLT spectroscopic observations The VLT also obtained X-shooter spectra for the afterglow of GRB 191221B at ~10 and ~34 h after the GRB onset. X-shooter spectra cover a very large wavelength range, from the ultraviolet (UV) atmospheric cutoff to more than 2 μm. This range is covered by the three arms of the instrument: the UVB, visible (Vis) and near-infrared (NIR) arms. Observations consisted of two sets of 600 s exposures in the three arms using the AB nod-on-slit offset mode. Data in the UVB and Vis arms have been reduced using the stare mode standard data reduction, namely by extracting the science spectra as if they were obtained without any offset. The NIR arm was extracted using the standard X-shooter Nodding mode pipeline. Each single spectrum has been corrected for slit losses, due to the finite aperture of the X-shooter slit, and the residual sky emission subtracted. The final stacked spectrum has been finally corrected for telluric features, whose response correction for the Vis and NIR arms has been estimated from the spectrum of the standard telluric star 32 , 33 , 34 . A full study of these spectra is well beyond our interest and, in this work, our goal is just to model the afterglow-only spectrum (the one obtained ~10 h after the burst) to derive an estimate of the optical extinction along the line of sight. 
This would allow us to compute a plausible maximum level of host-galaxy dust-induced (that is, non-intrinsic to the GRB afterglow) polarization. Properly connecting the three X-shooter arms requires a careful cross-calibration again beyond our interests, and therefore we limited our analysis to the UVB arm covering the rest-frame wavelength range from approximately 1,650 to 2,550 Å (from 3,450 to 5,500 Å in the observer frame). The resolution of the X-shooter spectra is 0.2 Å per bin. We first rebinned the spectra to 20 Å per bin by the algorithm described in Carnall 35 and then manually removed all the main emission or absorption lines. The resulting spectrum shows small-scale variations, which are probably an artefact of the reduction process related to the different orders of the Echelle spectrograph. This does not affect our fits, although we had to add, in quadrature, a systematic error of 7.5 × 10 −18 erg s −1 cm −2 Å −1 to the uncertainties computed by the reduction pipeline. We fit the afterglow spectrum in this wavelength range by a simple power law affected by rest-frame extinction following the Small or Large Magellanic Cloud or the Milky Way extinction curves 36 , 37 . The rest-frame extinction turns out to be very low, and therefore the three extinction recipes yield essentially the same results (see also ref. 38 ): \(\beta =-0.9{7}_{-0.07}^{+0.14}\) , E B – V < 0.038 (95% upper limit). The best-fit for the Small Magellanic Cloud recipe is shown in Extended Data Fig. 2 . The VLT also obtained spectro-polarimetric observations with the Focal Reducer and low dispersion Spectrograph (FORS2) instrument at about 10 h after the GRB onset. These data have already been reported in Buckley et al. 3 . The data show (their Fig. 4) a fairly constant polarization level and position angle. In Buckley et al. 3 , spectro-polarimetry obtained with the SALT/Robert Stobie Spectrograph telescope ~3 h after the burst is also reported. 
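The dust-induced polarization hypothesis tested against these spectra is conventionally modelled with the Serkowski law. A sketch of its standard form; the K–λ_max relation applied when K is not fitted is the commonly used empirical one, an assumption here rather than a value from the paper:

```python
import math

def serkowski(wavelength_nm, p_max, lam_max_nm, k=None):
    """Serkowski law for interstellar dust polarization:
    P(lambda) = P_max * exp(-K * ln^2(lambda_max / lambda)).
    If K is not given, fall back to the common empirical relation
    K = 0.01 + 1.66 * lambda_max (lambda_max in micrometres)."""
    if k is None:
        k = 0.01 + 1.66 * (lam_max_nm / 1000.0)
    return p_max * math.exp(-k * math.log(lam_max_nm / wavelength_nm) ** 2)

# Polarization peaks at lambda_max and falls off on either side
p_peak = serkowski(550.0, p_max=1.3, lam_max_nm=550.0)
p_blue = serkowski(350.0, p_max=1.3, lam_max_nm=550.0)
```

Pushing λ_max into the far UV, as described below for the fit to these data, makes P(λ) nearly flat across the optical window.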
The more modest signal-to-noise ratio (S/N) prevents us from carrying out further analyses on these data regarding the possible evidence for a Serkowski-law behaviour. We have downloaded the VLT spectrum and carried out a fit with a Serkowski law 8 and the predictions for afterglow polarization in the optical band (that is, constant polarization). As expected, both scenarios can provide an acceptable fit to the data, although for the Serkowski law the wavelength corresponding to the polarization maximum is pushed into the far UV (~200 nm) to give a roughly constant polarization in the wavelength range covered by the FORS2 spectro-polarimetry. This is a rather unusual result but not totally unprecedented 39 . However, the Serkowski-law fit is not statistically favoured compared with the afterglow-only model, since it requires a larger number of free parameters. Therefore, when also considering the low extinction along the line of sight derived by the analysis of the X-shooter spectra, an intrinsic origin (that is, due to the afterglow) of the observed polarization compared with the dust-induced hypothesis appears to be more in agreement with the data. VLT polarimetric observations Polarimetric observations were acquired using the FORS2 mounted on the VLT. A Wollaston prism was inserted in the light path to split the image of each object in the field into two orthogonal polarization components. A mask was used to avoid overlap of the two images; we used the FORS2 R band filter. For each position angle ϕ /2 of the half-wave-plate rotator, we obtained two simultaneous images of cross-polarization at angles ϕ and ϕ + 90°. We obtained observations at position angles 0, 22.5, 45 and 67.5° of the half-wave plate. This technique allowed us to remove any differences between the two optical paths (ordinary and extraordinary ray), including the effects of seeing and airmass changes.
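The dual-beam reduction described above (half-wave-plate angles 0, 22.5, 45 and 67.5°) can be sketched with the usual normalized flux differences; the counts below are synthetic, not FORS2 data:

```python
import math

def stokes_from_dual_beam(fluxes):
    """Recover reduced Stokes q = Q/I and u = U/I from dual-beam
    polarimetry at half-wave-plate angles 0, 22.5, 45 and 67.5 deg.

    `fluxes` maps each angle to (ordinary, extraordinary) counts. The
    normalized difference F(theta) = (f_o - f_e) / (f_o + f_e) equals
    q*cos(4*theta) + u*sin(4*theta), so combining opposite angles
    cancels the ordinary/extraordinary throughput difference."""
    F = {a: (fo - fe) / (fo + fe) for a, (fo, fe) in fluxes.items()}
    q = 0.5 * (F[0.0] - F[45.0])
    u = 0.5 * (F[22.5] - F[67.5])
    return q, u

def beams(q, u, theta_deg, total=1e5):
    """Synthetic ordinary/extraordinary counts for a source with
    reduced Stokes parameters (q, u)."""
    f = q * math.cos(math.radians(4 * theta_deg)) + u * math.sin(math.radians(4 * theta_deg))
    return 0.5 * total * (1 + f), 0.5 * total * (1 - f)

data = {a: beams(0.012, 0.005, a) for a in (0.0, 22.5, 45.0, 67.5)}
q, u = stokes_from_dual_beam(data)
pd_percent = 100 * math.hypot(q, u)
pa_deg = 0.5 * math.degrees(math.atan2(u, q))
```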
With the same setup we also observed polarized and unpolarized standard stars so we could convert position angles from the telescope to the celestial reference frame, and to correct for the small instrumental polarization introduced by the telescope. Reduction, that is, bias, flat-field correction, bad pixel masking and so on, was carried out following standard recipes. Aperture photometry was obtained for the target and several nearby sources in the field. We also confirmed that the GRB polarization measurement is unaffected by Galactic dust-induced polarization (Extended Data Fig. 3). We used custom software tools based on the Python Astropy library. More details on polarimetric data analysis are reported in Covino et al. 40 and Wiersema et al. 41 . Extended Data Figure 6 shows the temporal evolution of polarization combined with the optical polarimetric results reported by Buckley et al. 3 . There are three epochs of optical observation including our measurements. Buckley et al. 3 made two epochs of polarimetry during the wiggles of the optical afterglow and reported the marginal decrease of PD (by ~0.3%) over a timescale of ~7 h. We derived the PDs and PAs in the wavelength range of the R band based on the data reported by Buckley et al. 3 (Extended Data Table 1 ). Based on the two-sample t -test, the PA between the radio and optical bands is different at an ~90% confidence level. The temporal evolution of PDs is also inconsistent with the plasma-scale magnetic field model. These properties do not support the plasma-scale magnetic field model. X-ray spectrum of GRB afterglows with optical polarimetry We checked the hydrogen column density of the line of sight ( N H ), which is one of the indicators of dust extinction of afterglows at their host galaxies. There are eight known- z GRB afterglows, including GRB 191221B, available with optical polarimetry and Swift X-ray observations.
For six events (GRB 080928, GRB 091018, GRB 091208B, GRB 110205A, GRB 121024A and GRB 131030A), the optical observations reported the detection of intrinsic polarization 9 , 41 , 42 , 43 , 44 , 45 . For GRB 190114C, Jordana-Mitjans et al. 46 reported the detection of polarization induced by the dust in the host galaxy. The X-ray data obtained by the Swift/X-ray telescope were collected from the UK Swift Science Data Centre 47 , 48 . We rebinned the spectra so that each spectral bin contained more than five counts. Using the software XSPEC 12, we performed spectral fitting with a single power law modified with intrinsic and Galactic absorptions, the latter of which were fixed at values calculated from Willingale et al. 49 . The XSPEC 12 model components TBabs and zTBabs that incorporate three absorption elements (that is, gas, molecules and grains) 50 were used to describe the spectral absorptions. The derived best-fitting values are summarized in Extended Data Table 2 . Using the results presented in Schady et al. 51 , the measured N H values were converted to the extinction at optical V-band A V . Five events, including GRB 191221B, exhibited an intrinsic absorption column density of order 10^21 cm −2 . The intrinsic absorption column density of GRB 191221B is the smallest ( \({N}_{\mathrm{H}}=1.{6}_{-0.8}^{+0.9}\times 1{0}^{21}\) cm −2 ). This result is consistent with the low dust extinction derived by the analysis of the VLT/X-shooter spectra. In contrast, the GRB 190114C X-ray spectrum is highly obscured by the intrinsic absorption column density of N H = 8.5 × 10 22 cm −2 (Extended Data Fig. 4 ). These results naturally explain the dust-induced optical polarization of GRB 190114C and support the intrinsic polarization observed in other events. Hence, these results also indicate the intrinsic origin of the optical polarization measured on the GRB 191221B optical afterglow.
Afterglow modelling The observed radio spectra and the light curves in the radio, optical and X-ray bands are explained by the standard forward shock model 11 , 12 . The temporal change of the spectral slope in the range of 97.5–203 GHz (Fig. 3 ) and the breaks of the 97.5 and 145 GHz light curves (Fig. 2 ) at t ≈ 4 days indicate the crossing of the synchrotron frequency of minimum-energy electrons ν m at the observed frequencies. From the observed spectral slope β ≈ −0.7 at ν > ν m , the electron energy spectral index is estimated to be p = −2 β + 1 ≈ 2.4. Then the theoretical temporal decay indices at ν < ν m and at ν > ν m are 1/2 and 3(1 − p )/4 ≈ −1.1, respectively, in the case in which the collimated forward shock expands in a uniform density medium and its edge is not visible owing to the strong relativistic beaming. These are not consistent with the observed indices of approximately 0.26 ( t ≲ 4 days) and −1.6 ( t ≳ 4 days). After the edge becomes visible (without sideways expansion of the shock), the geometrical flux reduction \({\theta }_{j}^{2}/{(1/{{\varGamma }})}^{2}\propto {t}^{-3/4}\) results in decay indices of −1/4 (at ν < ν m ) and −1.8 (at ν > ν m ), where θ j and Γ are the opening half-angle and Lorentz factor of the shock. The wind type ambient medium gives decay indices of −3/4 (at ν < ν m ) and −2.3 (at ν > ν m ). We find that the observed indices can be fit by the model in which the ambient density is uniform and the energy continues to be injected into the shock by long activity of the central engine 52 , 53 . We note that our assumption of no sideways expansion of the shock is based on the results of high-resolution hydrodynamic simulations, which show that the collimated shock after the time of 1/ Γ ≈ θ j expands sideways logarithmically, not exponentially 54 , 55 . 
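The closure-relation arithmetic in the paragraph above (spectral slope → electron index → predicted decay indices) can be made explicit. A sketch under the stated assumptions (uniform medium, slow cooling, no sideways expansion), using the standard relations cited in the text:

```python
def electron_index(beta):
    """Electron energy spectral index p from the spectral slope beta
    measured above nu_m (F_nu ~ nu^beta with beta = (1 - p)/2)."""
    return 1.0 - 2.0 * beta

def decay_indices(p, edge_visible=False):
    """Predicted temporal indices (alpha below nu_m, alpha above nu_m)
    for a forward shock in a uniform-density medium; once the jet edge
    becomes visible, the geometrical factor ~t^(-3/4) steepens both."""
    low, high = 0.5, 3.0 * (1.0 - p) / 4.0
    if edge_visible:
        low, high = low - 0.75, high - 0.75
    return low, high

p = electron_index(-0.7)                   # 2.4, as in the text
pre = decay_indices(p)                     # (1/2, ~-1.1)
post = decay_indices(p, edge_visible=True) # (-1/4, -1.8)
```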
We performed numerical calculations of flux from the shock with a fixed θ j , which evolves following the Blandford–McKee self-similar solution, by taking account of the equal arrival time surface of the photons 13 , 14 . Then we tried to fit the model flux to the observed data by adjusting the model parameters, namely the isotropic energy E iso , the ambient medium density n , the fraction of shock energy carried by the electrons ϵ e , that carried by amplified magnetic field ϵ B and the viewing angle θ v , as well as θ j and p . This model can fit the data in the radio, optical and X-ray bands (Figs. 1 and 2 , and Extended Data Fig. 6 ). The model parameters are constrained to be E iso = 9.4 × 10 52 t 0.25 erg, n = 5.9 cm −3 , ϵ e = 6.5 × 10 −2 , ϵ B = 1.2 × 10 −2 , θ v = 1.9°, θ j = 2.6° and p = 2.4. These values of n , ϵ e and ϵ B are typical of GRB afterglows 15 . It is well known that X-ray flares with fast variability sometimes dominate the forward shock emission. If the flares have broad emission spectra, they might also affect the radio light curves. The slight deviations of radio data from the forward shock model light curves at t = 0.5 and 1.5 days might be related to the X-ray flares observed at similar times. Since they have fast variability they may not contribute to the spectrum at t = 2.5 days. Figure 1 simply indicates that the standard forward shock synchrotron spectrum with the radio data at t = 2.5 days (that is, the peak flux ~5 mJy, ν m ≈ 200 GHz and the spectral index (at ν > ν m ) β ≈ −0.7) can explain the optical data, and does not require any additional emission component. The possibility that the short-lived reverse shock explains the radio emission at t ≳ 1.5 days is excluded since the minimum synchrotron frequency of the reverse shock 56 \({\nu }_{\mathrm{m}}^{\mathrm{r}} \approx 200\) GHz with the peak flux ~5 mJy requires ϵ e ≈ 1. 
We also examined possible long-lasting reverse shock emission in the long-active central engine model, such as in our model shown above. Suppose the reverse shock emission is dominant in the radio band while the forward shock is dominant in the optical band; the difference in polarization in the two bands could be caused by possible differences in magnetic field structures in the two shocked regions. However, this scenario is also disfavoured owing to a high value of ϵ e . According to Sari and Mészáros 57 , the minimum synchrotron frequencies of the forward and reverse shocks for our model parameters at t = 1.5 days are ν m ≈ 9.8 × 10 11 Hz and \({\nu }_{\mathrm{m}}^\mathrm{{r}} \approx 2.9\times 1{0}^{10}\) Hz, respectively, where the equal arrival time surface of the photons is not taken into account. To increase \({\nu }_{\mathrm{m}}^{\mathrm{r}}\) to a frequency at ν m , without changing the forward shock X-ray flux, which is proportional to \({\epsilon }_{\mathrm{e}}^{p-1}{E}_{{{{\rm{iso}}}}}^{(p+2)/4}\) , requires ϵ e ≈ 0.53. This value is unusually high, compared to ϵ e ≲ 0.3 estimated by systematic studies using well-sampled multi-frequency observations 15 , 58 , 59 . Polarization in the plasma-scale magnetic field model The synchrotron polarization depends on the magnetic field configuration at each position in the shocked fluid. Here we focus on the turbulent magnetic field with coherence length on a plasma skin depth scale, which is many orders of magnitude smaller than the shock width.
Such a magnetic field is created by the Weibel instability at relativistic collisionless shocks 17 , 19 , 22 , 23 , and in this case the field may be anisotropic, that is, \({\xi }^{2}\equiv 2\langle {B}_{\parallel }^{2}\rangle /\langle {B}_{\perp }^{2}\rangle \ne 1\) , where \(\langle {B}_{\parallel }^{2}\rangle\) and \(\langle {B}_{\perp }^{2}\rangle\) are the averages of the powers of the magnetic field components parallel and perpendicular to the shock normal, respectively. On the basis of this model, we can calculate the local Stokes Q and U parameters corresponding to the surface brightness of the shock by averaging the emissivity with respect to the field directions at each position, and we find that the synchrotron emission at each position is polarized owing to the anisotropy of the turbulent magnetic field 24 , 25 , 60 . The polarization directions are symmetric around the line of sight, so that the net PD is non-zero only when the visible region of angular size ~1/ Γ includes the jet edge and becomes asymmetric 24 , 25 , 60 . We numerically calculated the linear PDs in various frequencies based on the light curve model explained above. The parameter value ξ 2 = 0.56 leads to an optical PD of ≃ 1.3% at t = 2.5 days. In the optical band, the surface brightness has a peak at θ ≈ 1/ Γ from the line of sight, while in the radio band, the region around θ ≈ 0 with low local PD is also bright 13 , so that the net radio PD is lower 14 . As a result, the model polarization spectrum at 2.5 days (Fig. 1 , middle panel) is consistent with the upper limit on the radio PD. In this model, however, the PA in the radio band is the same as that in the optical band. The difference in the observed PAs at the radio and optical bands does not support this model. The temporal changes of optical PD and PA in this model are plotted in Extended Data Fig. 6 . The PD changes as the angular size of the visible region ~1/ Γ increases. It has a maximum value when 1/ Γ ≈ θ j + θ v . 
The PA experiences a sudden 90° change at t ≈ 0.06 day, and it is constant before and after that time. The model with ξ 2 = 0.56 exhibits a PD as high as ≃ 5% at t ≃ 0.4 day, which is not consistent with the observed data. The less anisotropic turbulence leads to lower PD, as shown by the model with ξ 2 = 0.81 in Extended Data Fig. 6 , but it also appears incompatible with the observed data. Faraday depolarization model The low PD in the radio band could be ascribed to an internal Faraday depolarization effect by cool electrons. The standard forward shock model usually assumes that all shocked electrons gain energy from shocked protons and form a power-law energy distribution d n /d γ ∝ γ − p for γ ≳ γ m ≈ ϵ e ( m p / m e ) Γ (here γ, m p , m e are the electron Lorentz factor, proton mass, and electron mass). Plasma particle simulations showed that all the electrons gain energy from shocked protons 61 , but this has not yet been confirmed by observations. Indeed, the forward shock model in which only a fraction f (<1) of the total electrons is energized can also explain the observed afterglow light curves and spectra 27 . In this case, a fraction 1 − f of the total electrons remains as cool thermal electrons with the Lorentz factor \({\tilde{\gamma }}_{\mathrm{m}}=\eta {{\varGamma }}\) , where η is a factor of the order of unity if the cool electrons are just isotropized at the shock front, and the correct physical parameters are \(E^{\prime}_{iso}\) = E iso / f , \(n^{\prime} =n/f\) , \(\epsilon ^{\prime}_{\mathrm{e}} ={\epsilon }_{\mathrm{e}}f\) and \(\epsilon ^{\prime}_{B} ={\epsilon }_{B}f\) . The cool electrons cause Faraday depolarization of the synchrotron emission of the non-thermal electrons above the self-absorption frequency 21 . We assume that the magnetic field in the shocked fluid is turbulent on a hydrodynamic scale, which is comparable to the typical width of the bright region in the shock downstream. 
Such a field can be created by magnetohydrodynamic instabilities at the shock, such as the Richtmyer–Meshkov instability 18 , 26 . For simplicity, we consider that the globally ordered magnetic field is negligible and that the plasma in the visible region consists of N random cells, in each of which the magnetic field is ordered. At the optical band, for which the Faraday effect is not significant, the net PD is \({P}_{0} \approx \frac{(p+1)}{(p+7/3)}\frac{1}{\sqrt{N}}\) , so that N ≈ 5,000 can explain the optical PD ≈ 1%. The Faraday rotation effect within the emission region results in the PD being 28 P 0 [(1 − e − S )/ S ], where \(S={(\nu /\tilde{{\nu }}_{V})}^{-4}\) and \(\tilde{{\nu }}_{V}\) is the frequency at which the Faraday depth is unity, \({\tilde{\nu }}_{V} \approx 200\) \({\left(1+z\right)}^{-15/16}{\left(\frac{1-f}{10f}\right)}^{1/2}{\eta }^{-1}\sqrt{\ln {\tilde{\gamma }}_{\mathrm{m}}}{N}^{-1/12}{\left(\frac{{E}_{{{{\rm{iso}}}}}}{1{0}^{52}\,{{{\rm{erg}}}}}\right)}^{3/16}{n}^{9/16}{\left(\frac{{\epsilon }_{B}}{0.01}\right)}^{1/4}{\left(\frac{t}{1\,{{{\rm{day}}}}}\right)}^{-1/16}\) GHz (ref. 21 ). The middle panel of Fig. 1 shows the Faraday depolarization model for the radio and optical data, which indicate \({\tilde{\nu }}_{V}\gtrsim 100\) GHz. This leads to \({f}^{-1}-1\gtrsim 2.5{\eta }^{2}{(\ln {\tilde{\gamma }}_{\mathrm{m}})}^{-1}\) . Data availability Processed data are presented in the tables and figures in the paper. The ALMA data are available from the ALMA Science Archive. The VLT data are available from the ESO Science Archive Facility. Code availability We used standard data reduction tools in Python and CASA 31 . The theoretical calculation code of the flux and polarization used in this work is not publicly available. Results presented in this work are available from the corresponding author upon reasonable request.
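The closed-form depolarization factor quoted in the Faraday depolarization model above, PD = P0 (1 − e−S)/S with S = (ν/ν̃V)−4, can be sketched numerically (this is not the authors' full calculation code). N = 5,000 is the cell number quoted in the text; the electron index p = 2.2, ν̃V = 200 GHz (the formula's normalization) and the observing frequencies are assumptions for illustration.

```python
import math

def intrinsic_pd(p: float, n_cells: int) -> float:
    """Net PD from N independent ordered-field cells: P0 = [(p+1)/(p+7/3)] / sqrt(N)."""
    return (p + 1.0) / (p + 7.0 / 3.0) / math.sqrt(n_cells)

def faraday_pd(nu_ghz: float, nu_v_ghz: float, p0: float) -> float:
    """Internal Faraday depolarization: PD = P0 * (1 - exp(-S)) / S,
    where S = (nu / nu_V)^(-4) and nu_V is the frequency of unit Faraday depth."""
    s = (nu_ghz / nu_v_ghz) ** -4
    return p0 * (1.0 - math.exp(-s)) / s

p0 = intrinsic_pd(p=2.2, n_cells=5000)      # ~1%, matching the optical PD
pd_radio = faraday_pd(97.5, 200.0, p0)      # ~97.5 GHz radio band (assumed frequency)
pd_optical = faraday_pd(4.3e5, 200.0, p0)   # optical, ~700 nm
```

With these illustrative numbers, the optical PD stays at essentially P0 while the radio PD is suppressed by more than an order of magnitude, which is the qualitative behaviour the model invokes.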
Gamma-ray bursts are the most luminous explosions in the universe, observed as brief, intense flashes of gamma rays. Gamma-ray bursts are classified as either short or long, with long gamma-ray bursts resulting from the collapse of massive dying stars. They provide hidden clues about the evolution of the universe. Gamma-ray bursts emit gamma rays as well as radio waves, optical light, and X-rays. When the conversion of explosion energy to emitted energy (i.e., the conversion efficiency) is high, the total explosion energy can be calculated by simply adding up all the emitted energy. But when the conversion efficiency is low or unknown, measuring the emitted energy alone is not enough. Now, a team of astrophysicists has succeeded in measuring a gamma-ray burst's hidden energy by using light polarization. The team was led by Dr. Yuji Urata from the National Central University in Taiwan and MITOS Science CO., LTD and Professor Kenji Toma from Tohoku University's Frontier Research Institute for Interdisciplinary Sciences (FRIS). Details of their findings were published in the journal Nature Astronomy on December 8, 2022. When an electromagnetic wave is polarized, its oscillation is confined to one direction. While light emitted from stars is not polarized, the reflection of that light is. Many everyday items such as sunglasses and light shields utilize polarization to block out the glare of light traveling in a uniform direction. Measuring the degree of polarization is referred to as polarimetry. In astrophysical observations, measuring a celestial object's polarization is not as easy as measuring its brightness, but it offers valuable information on the object's physical conditions. The team looked at a gamma-ray burst that occurred on December 21, 2019 (GRB191221B). 
Using the Very Large Telescope of the European Southern Observatory and the Atacama Large Millimeter/submillimeter Array, some of the world's most advanced optical and radio telescopes, they performed polarimetry of the fast-fading emission from GRB191221B. They successfully measured the optical and radio polarizations simultaneously, finding the radio polarization degree to be significantly lower than the optical one. "This difference in polarization at the two wavelengths reveals detailed physical conditions of the gamma-ray burst's emission region," said Toma. "In particular, it allowed us to measure the previously unmeasurable hidden energy." When accounting for the hidden energy, the team revealed that the total energy was about 3.5 times bigger than previous estimates. With the explosion energy representing the gravitational energy of the progenitor star, being able to measure this figure has important ramifications for determining stars' masses. "Measuring the true masses of progenitor stars will help in understanding the evolutionary history of the universe," added Toma. "The first stars in the universe could be discovered if we can detect their long gamma-ray bursts."
10.1038/s41550-022-01832-7
Space
Sun eruptions hit Earth like a 'sneeze', say scientists
M. J. Owens et al. Coronal mass ejections are not coherent magnetohydrodynamic structures, Scientific Reports (2017). DOI: 10.1038/s41598-017-04546-3 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-04546-3
https://phys.org/news/2017-06-sun-eruptions-earth-scientists.html
Abstract Coronal mass ejections (CMEs) are episodic eruptions of solar plasma and magnetic flux that travel out through the solar system, driving extreme space weather. Interpretation of CME observations and their interaction with the solar wind typically assumes CMEs are coherent, almost solid-like objects. We show that supersonic radial propagation of CMEs away from the Sun results in geometric expansion of CME plasma parcels at a speed faster than the local wave speed. Thus information cannot propagate across the CME. Comparing our results with observed properties of over 400 CMEs, we show that CMEs cease to be coherent magnetohydrodynamic structures within 0.3 AU of the Sun. This suggests Earth-directed CMEs are less like billiard balls and more like dust clouds, with apparent coherence only due to similar initial conditions and quasi-homogeneity of the medium through which they travel. The incoherence of CMEs suggests interpretation of CME observations requires accurate reconstruction of the ambient solar wind with which they interact, and that simple assumptions about the shape of the CMEs are likely to be invalid when significant spatial/temporal gradients in ambient solar wind conditions are present. Introduction Coronal mass ejections (CMEs) are large, episodic eruptions of coronal plasma and magnetic flux that are ejected out into the heliosphere at speeds typically ranging from 300–2000 km s −1 1 . They are of great interest both for their central role in extreme space weather 2 , 3 and in the solar cycle evolution of the coronal magnetic field 4 , 5 . In situ spacecraft observations of CMEs show that around a third to a half of all CMEs contain a magnetic flux-rope structure and low plasma beta 6 , 7 . 
These “magnetic clouds” are generally assumed to be (quasi-) coherent magnetohydrodynamic (MHD) structures, wherein the magnetic pressure and curvature forces act, to a greater or lesser extent, to resist deformation by external forces such as solar wind speed shear. This, in principle, enables a magnetic cloud to evolve as a single cohesive body. For example: Observations of CME-CME interactions in the heliosphere 8 have been interpreted as elastic or even super-elastic collisions 9 , suggesting the CMEs are solid-like, coherent structures. Non-radial deflection of CME trajectories, possibly by interaction with coronal hole magnetic flux, has been observed 10 , 11 , 12 . While this has largely been interpreted as centre-of-mass deflection, which would require the CME to behave as a coherent structure, distortion of the CME shape could equally explain the available observations. Methods for tracking CMEs through the corona and heliosphere assume the CME front remains quasi-spherical (or some other simple shape) 13 , 14 , 15 , 16 , implying the CME front remains a coherent structure throughout the heliosphere. There is observational evidence, however, for significant disruption of CME structure by solar wind inhomogeneity 17 . Numerous studies (including some by the authors of present paper) either explicitly or implicitly assume that single-point in situ measurements of a magnetic cloud are representative of its global structure 7 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , implying a large degree of coherence of CMEs. Single- 25 and multi-point 26 , 27 observations, even at relatively modest spacecraft separations, often reveal this picture to be far too simplistic, with evidence of CME distortion by the ambient solar wind. Numerical MHD models provide a complementary means to test the coherence of CMEs. 
There have been a number of numerical experiments investigating interaction of CMEs both with a structured solar wind and other CMEs, which often reveal significant distortion of CME structure 28 , 29 , 30 , 31 , 32 , 33 . Interpretation of the results, however, has largely focussed on the issue of force balance, with internal magnetic pressure/curvature from the magnetic flux-rope unable to resist distortion from interaction with external solar wind structures. Here, we investigate a fundamental physical limit on a CME’s ability to act as a coherent magnetohydrodynamic structure; namely the inability of information to propagate within a CME. We use a simple analytical model for CME evolution in the heliosphere to calculate the Alfvén wave speed [ V A ] within the CME at a range of heliocentric distances. We also estimate the geometric speed of separation of plasma parcels [ V G ] within the CME that results from purely radial heliocentric propagation. For a range of CME parameters, we determine the heliocentric distance at which V G exceeds V A and hence information can no longer propagate within the CME. Methodology The geometric and dynamic effects of CME propagation are investigated using a simple analytical model, closely following Owens, et al . 21 , which agrees well with numerical MHD simulations of CME evolution 34 . In summary, CMEs are assumed to initially take the form of a circular cross-section, force-free flux rope in the low corona and subsequently be deformed by a combination of CME-centred self-expansion and heliocentric radial propagation. The internally-driven self-expansion is limited to the heliocentric radial direction, so that the CME maintains constant angular width, as is commonly observed 1 . 
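As a rough numerical check on the analytical model just described, the sketch below approximates the CME cross section as an annular arc of constant angular width whose radial half-thickness grows at V EX while its centre moves out at V TR. This simplified geometry is a stand-in, not the paper's numerically integrated model, so it reproduces the reported ~3000-fold area growth only to order of magnitude.

```python
import math

R_SUN_KM = 6.96e5   # solar radius [km]
AU_KM = 1.496e8     # astronomical unit [km]

def area_ratio(v_tr_kms=600.0, v_ex_kms=90.0,
               r0_rs=2.0, a0_rs=1.0, half_width_deg=30.0):
    """Crude cross-sectional-area growth of a 'pancaking' CME at 1 AU.

    The cross section is approximated as an arc of constant angular
    width (2 * half_width_deg), radial half-thickness a(t) growing at
    v_ex, and centre distance growing at v_tr from an initial height
    r0_rs.  The initial cross section is a circle of radius a0_rs.
    """
    t = (AU_KM - r0_rs * R_SUN_KM) / v_tr_kms       # transit time to 1 AU [s]
    a_rs = a0_rs + v_ex_kms * t / R_SUN_KM          # half-thickness at 1 AU [r_S]
    r_rs = AU_KM / R_SUN_KM                         # centre distance at 1 AU [r_S]
    width_rad = 2.0 * math.radians(half_width_deg)  # constant angular width
    area_1au = width_rad * r_rs * 2.0 * a_rs        # arc length x radial thickness
    area_0 = math.pi * a0_rs ** 2                   # initial circular cross section
    return area_1au / area_0

ratio = area_ratio()   # order 10^3, consistent with the ~3000x quoted in the text
```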
Figure 1 shows snapshots of the resulting CME cross section at increasing times (in arbitrary units), using typical CME parameters: an initial (at time t = 0) circular cross-section of radius 1 solar radius [ r S ] at a height of 2 r S gives a CME angular extent with respect to the Sun of approximately 60°; a constant CME transit speed [ V TR ] of 600 km s −1 and a constant internally-driven expansion speed [ V EX ] of 90 km s −1 35 . The CME rapidly “pancakes” due to radial propagation in spherical geometry 34 , 36 . The change in CME cross-sectional area, computed by numerically integrating the analytical model, is shown in Fig. 2a . By 1 AU, the cross-sectional area of the CME is approximately 3000 times its initial value. Figure 1 An analytical model for the cross-sectional area of a CME as it propagates anti-sunward. Snapshots are shown at successive times. The plane is perpendicular to the direction of propagation (e.g., the ecliptic or RN planes in heliocentric radial-tangential-normal, RTN, coordinates). Points P A and P B on the leading edge of the CME subtend an angle θ at the centre of the Sun. Due to radial propagation in spherical geometry, P A and P B separate with time, leading to the geometric speed V G . Figure 2 Evolution of CME properties with heliocentric distance, using V TR = 600 km s −1 , B 1AU = 15 nT and n 1AU = 7 cm −3 . Panel (a) shows the cross-sectional area of the CME. Panel (b) shows the magnetic field intensity ( B , in black), assuming constant magnetic flux threading the CME cross section, and the ion number density ( n , in red), assuming conservation of mass within the CME. Panel (c) shows the resulting Alfven speed within the CME ( V A , black). Coloured lines show the geometric separation speeds [ V G ] of points on the CME leading edge as a result of expansion in spherical geometry for a range of separation angles [ θ ], from 5° (red) to 60° (blue) in 5° steps. 
From this model and a number of reasonable assumptions, it is possible to estimate the bulk properties within the evolving CME and so compute the Alfvén speed. We assume that the total magnetic flux within the CME is conserved (true to within a few percent 37 ) and that the magnetic flux is orientated perpendicular to the CME cross section. Thus B , the magnetic field intensity within the CME at a heliocentric distance R , will scale with the CME cross-sectional area, A : $$B={B}_{0}\frac{{A}_{0}}{A}$$ (1) where the subscript 0 refers to values at a reference distance. Figure 2b shows the profile for B 0 = 15 nT at R 0 = 1 AU, a typical value observed in situ 35 . Similarly, if the amount of plasma within the CME is assumed to be constant, the ion density [ n ] at distance R will scale as the volumetric increase of the CME: $$n={n}_{0}\frac{{A}_{0}{R}_{0}}{A\,R}$$ (2) The red line in Fig. 2b shows the n profile for a CME proton density of n 0 = 7 cm −3 at R 0 = 1 AU, again a typical observed value 38 . Combining these two parameters allows approximation of the Alfvén speed [ V A ] within a CME as a function of heliocentric distance, R : $${V}_{A}=\frac{B}{\sqrt{{\mu }_{0}n\,{m}_{i}}}$$ (3) where μ 0 is the magnetic permeability of free space and m i is the mean ion mass. For simplicity we here assume a proton ion plasma which gives an upper limit for the Alfvén speed: for helium ion composition of 8%, m i is 1.24 a.m.u. and the Alfvén speed would be 0.9 times the values given here. Note that the maximum wave speed within a magnetised plasma is the fast magnetosonic speed, a combination of V A and the ion-acoustic wave speed [ V S ] which results from the finite plasma temperature. Using a typical 1-AU temperature and a polytropic index as high as 4/3, V S within a CME remains at least an order of magnitude lower than V A at all heliocentric distances, so can be ignored for the purposes required here. Results The black line in Fig. 
2c shows V A as a function of heliocentric distance. The coloured lines show the separation speed [ V G ] of points on the CME leading edge which results from radial expansion in spherical geometry. The red line shows points separated by a heliocentric angle θ = 5°, while the blue line shows θ = 60°, the angular extent of a typical CME 1 . Coloured lines show separations in 5° steps between these two limits. For small values of θ (<10°), the Alfven speed is greater than the geometric separation speed for the entirety of the CME’s transit to 1 AU. For plasma parcels separated by θ = 15°, a quarter of the total angular extent of a typical CME, V G first exceeds V A at approximately 0.45 AU. We refer to this distance as the critical distance [ R CRIT ] as once this V G > V A condition is met information can no longer travel between plasma parcels of the given angular separation and the CME has lost coherence over such length scales. For increasing angular separations, this critical distance moves ever closer to the Sun. For θ = 60°, the typical CME angular width, magnetic coherence is lost almost immediately after eruption, at least in this example (i.e., for CME transit speed of 600 km s −1 and B at 1 AU of 15 nT). We now investigate the effect of CME properties on the critical distance. Figure 3 shows R CRIT as a function of CME transit speed [ V TR ] and magnetic field intensity at 1 AU [ B 1AU ]. n is fixed at 7 cm −3 , though similar results are found for a reasonable range of n . Panels, from left to right, show angular separations of 15°, 30° and 60°. These correspond to a quarter, half and the full angular extent of a typical CME, respectively. The general trend is for R CRIT to increase with CME magnetic field intensity and to decrease with CME transit speed. 
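The two competing speeds can be estimated directly at 1 AU from equation (3) and the geometry. The expression V G = 2 V TR sin(θ/2), the rate of change of the chord between two leading-edge points that both propagate radially at V TR, is an assumed simplification consistent with the model description, not a formula stated verbatim in the text.

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability [H/m]
M_P = 1.6726e-27         # proton mass [kg]

def alfven_speed(b_tesla: float, n_per_m3: float) -> float:
    """Eq. (3): V_A = B / sqrt(mu0 * n * m_i), for a pure proton plasma."""
    return b_tesla / math.sqrt(MU0 * n_per_m3 * M_P)

def geometric_speed(v_tr_ms: float, theta_deg: float) -> float:
    """Separation rate of two leading-edge points theta apart, both moving
    radially at v_tr: d/dt [2 R sin(theta/2)] = 2 * v_tr * sin(theta/2)."""
    return 2.0 * v_tr_ms * math.sin(math.radians(theta_deg) / 2.0)

v_a = alfven_speed(15e-9, 7e6)       # 1 AU values: B = 15 nT, n = 7 cm^-3
v_g = geometric_speed(600e3, 60.0)   # full 60-degree extent, V_TR = 600 km/s
```

For these typical parameters v_a comes out near 120 km/s while v_g is 600 km/s, so at 1 AU the geometric separation speed far exceeds the Alfvén speed across the full CME extent, matching the text's conclusion.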
For extremely narrow CMEs (~15°), or plasma parcels within a typical CME that are separated by approximately a quarter of the total angular extent, V A can remain above V G out to 1 AU as long as the CME speed is relatively low and the magnetic field intensity is relatively high. The blue dots in Fig. 3 show values of B 1AU and V TR from observations of 477 CMEs, obtained by combining coronagraph and in situ observations over the period 1995–2016 38 . Only a small fraction of these observed CMEs (<10%) have properties which suggest they remain coherent over an angular extent of 15° out to 1 AU. The bulk of the CMEs, approximately 70%, have lost coherency across 15° of angular extent within 0.4 AU. Increasing the angular separation to 30°, about half the angular extent of a typical CME, none of the observed CMEs remain coherent to 1 AU, with most losing coherence within 0.2 AU. Finally, looking at the full angular extent of a typical CME, 60°, all observed CMEs have lost coherence by 0.3 AU, with ~90% losing coherence within 0.1 AU. Figure 3 The critical distance, R CRIT , at which expansion speed exceeds the Alfven speed and the CME ceases to be a coherent structure, as a function of CME transit speed [ V TR ] and the magnetic field intensity within a CME at 1 AU [ B 1AU ]. The panels, from left to right, show angular separations on the CME front of 15°, 30° and 60°, respectively. These correspond to a quarter, half and the full angular extent of a typical CME, respectively. The cyan dots show CME observations from the Cane and Richardson 38 catalogue, updated to the end of 2016. Discussion and Conclusions This study has investigated the speed at which information can propagate between CME plasma parcels (the Alfvén speed, V A ), relative to the speed at which CME plasma parcels separate owing to radial propagation in spherical geometry [ V G ]. 
Where V G exceeds V A , plasma parcels can no longer be considered to constitute a single, coherent magnetohydrodynamic (MHD) structure. Figure 4 illustrates this idea. It shows a CME travelling through fast solar wind, but the upper flank encounters a slow wind stream. This results in distortion of the magnetic field structure within the CME. An Alfven wave is launched at a speed V A from point P B , which lies within the CME at the latitude of the solar wind speed shear, towards a point P A , located near the centre of the CME. Geometric expansion means that P B is moving away from P A at a speed V G . If V G > V A , as shown in this example, information cannot travel between the two points. Thus P A and P B are effectively isolated, and the response of the CME at points P A and P B to a structured solar wind is entirely independent; there can be no action as a single body, regardless of the magnitude of restoring forces such as magnetic pressure and curvature forces. A similar effect is expected within the deflected solar wind flow in the sheath region ahead of a fast-moving CME 39 . Due to the large V G , the deflected solar wind flow within the sheath (labelled V SH in Fig. 4 ) 24 cannot keep pace with a point on the leading edge and thus does not flow around the obstacle, but piles up ahead of it. Figure 4 A schematic of one flank of a CME (white) propagating through a structured solar wind, in the reference frame of a point P A , located close to the centre of the CME. The shock (thick black line) and CME leading/trailing edges move away from P A at the CME expansion speed, V EX . Fast solar wind, in beige, flows into the CME shock at a speed V TR + V EX − V FSW ( V TR and V FSW are the CME transit speed and the fast solar wind speed, respectively). Slow solar wind, in blue, flows into the shock at a speed of V TR + V EX − V SSW (where V SSW is the slow solar wind speed). 
The point P B , located at the fast/slow solar wind interface, experiences a distortion of the CME magnetic field and launches an Alfven wave at speed V A towards P A . Point P B , however, is moving away from P A due to geometric expansion at a speed V G , thus the information can never arrive. Similarly, V SH , the speed of the deflected solar wind flow in the sheath behind the shock, is smaller than V G and thus the sheath flow cannot travel around the CME. We estimate V A and V G using an analytic model, allowing parameter space to be fully and efficiently explored. Where simplifying assumptions are required, they have been chosen as far as possible to act in the favour of CME coherence (e.g., limiting the expansion of CMEs to the radial direction reduces V G ; coherence is defined to be lost when V G exceeds V A , rather than when the information travel time becomes large compared to the CME life time; helium is not included in the Alfvén speed estimation, etc). Thus we effectively examine the “best case scenario” for CME coherence. Nevertheless, we find that all observed CMEs lose coherence over their full angular extent by 0.1 to 0.2 AU. Even considering Alfvén wave propagation over half the typical CME angular extent, which would allow, e.g., the east flank of an ICME to know what’s happening to the west flank, no observed CMEs are expected to maintain coherence to 1 AU; indeed, less than 0.5% of all observed CMEs are expected to maintain flank-to-flank coherence past 0.3 AU. One aspect that requires further investigation is the assumption that the fastest information path between two points is a straight line. While this is true for the analytical model employed here, as it has constant magnetic field intensity within a CME, in a real magnetic cloud this need not be the case. For an ideal force-free magnetic flux rope, the magnetic field intensity is highest at the flux rope axis (i.e., the centre of the CME). 
Thus shorter information travel times between two points on the CME leading edge could, in principle, be obtained using a non-linear ray path taking advantage of the increased Alfvén speed deep within the CME. An alternative preferential wave path could be through the high magnetic field intensities in the sheath region ahead of a fast CME, though the sheath often has high plasma density too, meaning the Alfvén speed may not be enhanced. These dynamic effects will be fully investigated using numerical magnetohydrodynamic modelling of an erupting magnetic flux rope and ray-tracing at each time step. In practice, however, these effects are unlikely to provide significantly different results to those presented here. Any increased Alfvén speed will be offset by an increased path length, and compression of the CME leading edge by interaction with the ambient solar wind means the highest magnetic field intensities are usually located near the CME leading edge, not near the centre of the CME 35 . In light of these findings, new approaches are required for the interpretation of CME observations. We discuss a few examples here. The highly structured intensity patterns routinely seen within CMEs in Heliospheric Imager (HI) observations 40 by the STEREO spacecraft may be a direct result of both the scale of coherence within a CME and the variability of the solar wind through which a CME is travelling. These relatively small-amplitude, small-scale structures are unlikely to be a significant issue for interpretation of the global properties of CMEs, either with the geometric models applied to HI observations to determine CME speed and direction 13 , or to flux-rope models applied to in situ observations 18 . Larger amplitude gradients in the solar wind, however, such as a sharp latitudinal or longitudinal transition between fast and slow wind (Fig. 
4 ), are likely to invalidate both forms of reconstruction technique by generating both large distortion to the CME shape and radically altering the pile-up of the solar wind plasma in the CME sheath, which is the plasma that is imaged by Thomson-scattered photospheric light. The results presented here also suggest CME arrival-time forecasting is sensitive to ambient solar wind structure at the local scale, not just at a global scale 41 : application of a drag equation to a CME’s interaction with the solar wind 42 is only really valid along an individual radial flow line, not to the CME as a whole. We suggest CME reconstruction techniques need to be modified to incorporate information about solar wind structure, either from global MHD models or from previous solar wind observations (e.g., assuming corotation of the solar wind). Ultimately, this may require solar wind data assimilation, to best interpolate and extrapolate between the available observations using physics-based models 32 .
Long-term power cuts, destruction of electronic devices and increased cancer risk for aeroplane passengers are all potential effects of the Earth being hit by a powerful solar eruption. Yet, new research has found space scientists have their work cut out to predict when these coronal mass ejections (CMEs) are on a collision course with Earth. A study of CMEs by scientists at the University of Reading has found they have cloud-like structures. This means they are more influenced by solar wind, through which they pass to reach Earth, making their movements much harder to predict than if they were single bubble-like entities as was previously thought. CMEs are huge blasts of solar plasma and magnetic fields from the sun's atmosphere that can reach Earth in one to three days. A direct hit could have catastrophic consequences, as CMEs are capable of damaging satellites, destroying electronic devices and potentially exposing people at high altitude, such as astronauts and aviation crew and passengers, to cancer-causing radiation. They occur frequently, but predicting which ones will impact Earth and how severely is difficult. Clouds not bubbles Professor Mathew Owens said: "Up until now, it has been assumed CMEs move like bubbles through space, and respond to forces as single objects. We have found they are more like an expanding dust cloud or sneeze, made up of individual plasma parcels all doing their own thing. "This means that trying to predict the shape and movement of CMEs as they pass through the solar wind becomes extremely difficult. Therefore if we want to protect ourselves from solar eruptions, we need to understand more about the solar wind." The new study, published in Nature Scientific Reports on Friday 23 June, looks in detail for the first time at how CMEs behave as they make their way through space, and how they interact with external forces like solar wind. The Reading scientists took a cross section of a CME to examine its structure more closely. 
They found that a CME quickly reaches the point at which the speed of its expansion exceeds the speed at which information can travel within the CME. At this point, it ceases to be a coherent structure, so any distortion to one part of the cloud caused by external forces does not affect it as a whole. Space weather threat Scientists are constantly monitoring the sun to track solar wind and extreme space weather. The Reading team recommends that information about solar wind should be incorporated into CME observations to ensure we are fully aware of the threat they pose to Earth. A previous study by University of Reading scientists found a shift in solar activity, expected to occur by the middle of the century, could make us more vulnerable to CMEs, as well as concentrating the Northern Lights around the poles – out of view of Great Britain. In 2011, the threat of space weather was added to the Government National Risk Register of Civil Emergencies.
10.1038/s41598-017-04546-3
Physics
New spin Seebeck thermoelectric device with higher conversion efficiency created
Akihiro Kirihara et al. Flexible heat-flow sensing sheets based on the longitudinal spin Seebeck effect using one-dimensional spin-current conducting films, Scientific Reports (2016). DOI: 10.1038/srep23114 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep23114
https://phys.org/news/2016-04-seebeck-thermoelectric-device-higher-conversion.html
Abstract Heat-flow sensing is expected to be an important technological component of smart thermal management in the future. Conventionally, the thermoelectric (TE) conversion technique, which is based on the Seebeck effect, has been used to measure a heat flow by converting the flow into electric voltage. However, for ubiquitous heat-flow visualization, thin and flexible sensors with extremely low thermal resistance are highly desired. Recently, another type of TE effect, the longitudinal spin Seebeck effect (LSSE), has aroused great interest because the LSSE potentially offers favourable features for TE applications such as simple thin-film device structures. Here we demonstrate an LSSE-based flexible TE sheet that is especially suitable for a heat-flow sensing application. This TE sheet contained a Ni 0.2 Zn 0.3 Fe 2.5 O 4 film which was formed on a flexible plastic sheet using a spray-coating method known as “ferrite plating”. The experimental results suggest that the ferrite-plated film, which has a columnar crystal structure aligned perpendicular to the film plane, functions as a unique one-dimensional spin-current conductor suitable for bendable LSSE-based sensors. This newly developed thin TE sheet may be attached to differently shaped heat sources without obstructing an innate heat flux, paving the way to versatile heat-flow measurements and management. Introduction As efficient energy utilization is becoming a crucially important issue for sustainable future, a heat management technique to optimally control the flow of omnipresent thermal energy is currently of great interest. To realize smart thermal management with real-time controllability, there has been a growing demand for visualizing the flow of heat in various places such as industrial facilities and large-scale data centres. 
The thermoelectric (TE) conversion technique 1 , 2 , 3 , which directly converts a thermal gradient into an electric current, is one of the most powerful methods utilized to sense a heat flow as a voltage signal. In fact, heat-flow sensors based on the Seebeck effect 4 , which have thermopile structures consisting of π-structured thermocouples, are commercially available and used for various purposes such as the evaluation of materials. To further extend heat-flow-sensing capabilities to other widespread applications, however, such conventional devices face certain challenges. First, because Seebeck-based TE devices exhibit a relatively high thermal resistance, the introduction of these devices into a heat-flow environment inevitably obstructs the heat flux and alters the distribution of the heat flow. Therefore, it is difficult to correctly evaluate the innate heat flux that we actually want to determine. Second, most of the commercially available heat-flow sensors are rigid and not easily applied to curved or uneven surfaces, making it difficult to monitor the heat flux around irregularly shaped heat sources. Because conventional TE devices, in which thermocouples are connected electrically in series, are intrinsically vulnerable to bending stresses, materials and structures for flexible TE devices have been extensively studied 5 , 6 , 7 . For such sensing applications, the emerging research field of spin caloritronics 8 , 9 provides new device-design opportunities. For example, TE devices based on the anomalous Nernst effect (ANE), which exhibit transverse TE voltage in ferromagnetic metals (FM), can be suitably utilized for sensing purposes 9 , 10 , 11 , 12 , 13 . In this work, we present another promising approach to realizing flexible heat-flow sensors using the longitudinal spin Seebeck effect (LSSE) 14 , 15 , 16 , 17 , 18 .
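For context, the conventional thermopile conversion described above can be sketched numerically. The helper function and all parameter values below are illustrative assumptions, not specifications of any commercial sensor; the point is that the output voltage is bought at the cost of a finite temperature drop q·R_th, which is exactly the flux obstruction the text criticizes.

```python
# Hedged sketch of a conventional Seebeck thermopile heat-flow sensor.
# All parameter values are hypothetical, for illustration only.
def thermopile_output(q, r_th, n_couples, seebeck):
    """Return the output voltage (V) of a series thermopile.

    q         : heat flux through the sensor (W/m^2)
    r_th      : areal thermal resistance of the sensor (K.m^2/W)
    n_couples : number of thermocouples connected in series
    seebeck   : Seebeck coefficient per couple (V/K)
    """
    delta_t = q * r_th  # temperature drop the sensor itself imposes
    return n_couples * seebeck * delta_t

# A 1 kW/m^2 flux across r_th = 1e-3 K.m^2/W drops 1 K across the sensor:
v = thermopile_output(q=1000.0, r_th=1e-3, n_couples=100, seebeck=40e-6)
```

Lowering r_th so the sensor disturbs the flux less also shrinks ΔT, and hence V; this trade-off is what the thin-film LSSE sheet is designed to sidestep.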
First reported in 2010, the LSSE offers an unconventional method to design TE devices by making use of a physical quantity called a spin current. LSSE devices, typically composed of a ferromagnetic insulator (FI) and a normal metallic film (NM), have gained attention because of their simple device structure and novel scaling capability, leading to new types of TE devices 16 . The LSSE also has the potential to realize practical sensing applications. It was recently reported that FI/NM multilayer structures unexpectedly exhibit significantly enhanced LSSE signals 19 , which may lead to highly sensitive heat-flow sensors. Furthermore, the combination of the LSSE and ANE is also a promising approach. In hybrid TE devices consisting of FI and FM layers, both the LSSE and ANE can constructively contribute to the output voltage, leading to largely enhanced TE signals 20 , 21 . To pave the way for practical sensing applications using the LSSE, here we have demonstrated LSSE-based heat-flow sensing sheets. The concept of the LSSE-based flexible TE sheet is schematically depicted in Fig. 1 . The sheet consists of a magnetic (ferro- or ferrimagnetic) film with in-plane magnetization M and a metallic film formed on a flexible substrate. When a heat flux q flows through the TE sheet perpendicularly to the film plane, a spin current density j s is induced by q via the LSSE. The value of j s is proportional to q (| j s | ∝ q ). Then, j s is converted into an electric field E ISHE in the transverse direction via the inverse spin Hall effect (ISHE) 22 , 23 : E ISHE = θ SH ρ ( j s × σ ), (1) where σ denotes the spin-polarization vector of the spin current. Figure 1 Concept of TE sheet for heat-flow sensing based on the LSSE. The LSSE-based TE sheet consists of a metallic film and a magnetic (ferro- or ferrimagnetic) film formed on a flexible substrate. When a heat flux q flows through the TE sheet, a spin current j s is induced and injected from the ferrite film into the metallic film by the LSSE.
Then, j s is finally converted into an electric voltage V as a result of the inverse spin Hall effect (ISHE) in the metallic film. The thin and simple bilayer structure of the TE sheet allows us to design novel heat-flow sensors with low thermal resistance and a flexible shape. Full size image In the above equation, θ SH and ρ represent the spin-Hall angle and resistivity of the metallic film, respectively. Therefore, the voltage signal V between the two ends of the TE sheet can be employed to evaluate the heat flux q penetrating through the sheet, because V is proportional to q ( V = E ISHE l ∝ q ). Here, it should also be emphasized that a longer sheet length l straightforwardly leads to a larger output voltage V . This scaling law is in stark contrast to that of conventional TE devices, in which the TE voltage scales with the number of thermocouples connected within the devices. These features enable us to design simple bilayer-structured devices suitable for heat-flow sensors. However, a problem arises when the above setup is used for broad heat-flow sensing purposes. When q flows obliquely to the TE sheet, the in-plane component of q gives rise to another TE effect called the transverse spin Seebeck effect (TSSE) 24 , 25 , 26 , 27 , 28 , which can also contribute to the output voltage. Since the mixed output signals from the LSSE and TSSE cannot be distinguished from each other, the TSSE hinders the correct evaluation of the q penetrating the TE sheet in this case. To exclude the TSSE contribution, here we used a unique one-dimensional (1D) spin-current conductor, which enables us to detect only the LSSE contribution and to correctly evaluate the q flowing across the TE sheet. Results Fabrication of the LSSE-based TE sheet To demonstrate such an LSSE-based flexible TE sheet, we used a spray-coating technique known as “ferrite plating” to grow magnetic films.
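The length-scaling law V = E ISHE l ∝ q discussed above can be sketched in a few lines. The lumped coefficient below is a made-up placeholder (it stands in for θ SH, ρ and the heat-flux-to-spin-current conversion), not a value from the paper; the sketch only illustrates that the output grows linearly with sheet length at fixed heat flux.

```python
# Hedged sketch of the LSSE scaling law V = E_ISHE * l, with E_ISHE ∝ q.
# 'coeff' is an arbitrary placeholder, NOT a measured material parameter.
def lsse_voltage(q, length, coeff=1.0e-12):
    """q in W/m^2, length in m, coeff in V per (W/m^2) per m."""
    return coeff * q * length

# Doubling the sheet length doubles the voltage at fixed heat flux,
# unlike a thermopile, whose output scales with the thermocouple count.
v_short = lsse_voltage(q=1.0e4, length=0.008)
v_long = lsse_voltage(q=1.0e4, length=0.016)
```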
Ferrites refer to oxide ceramics containing iron, which typically exhibit ferromagnetic properties and have been successfully used as magnetic materials for LSSE devices 29 , 30 . However, conventional ferrite-film preparation techniques, such as liquid phase epitaxy and pulsed-laser deposition, require a high temperature process (400–800 °C) for crystallizing the ferrites, hindering the formation of films on soft-surfaced materials, such as plastics. By contrast, ferrite plating is based on a chemical reaction process in which the Fe ion is oxidized (Fe 2+ → Fe 3+ ); therefore, no high-temperature processes, such as annealing, are required 31 , 32 . This feature enables us to coat ferrite films on a variety of substrates, including plastic films. In this work, we prepared a ferrite Ni 0.2 Zn 0.3 Fe 2.5 O 4 film using this method. As schematically illustrated in Fig. 2(a) , we grew the film by simultaneously spraying an aqueous reaction solution (FeCl 2 + NiCl 2 + ZnCl 2 ) and an oxidizer (NaNO 2 + CH 3 COONH 4 ) onto a substrate. In this process, the oxidizer reacts with the metal chlorides on the surface, forming a Ni 0.2 Zn 0.3 Fe 2.5 O 4 film on the substrate. All the processes were performed below 100 °C. Figure 2 Demonstration of heat-flow-sensing TE sheet based on the LSSE. ( a ) Schematic of the ferrite-plating method. An aqueous reaction solution (FeCl 2 + NiCl 2 + ZnCl 2 ) and an oxidizer (NaNO 2 + CH 3 COONH 4 ) are sprayed onto a substrate mounted on a rotating stage. ( b ) SEM image of a Ni 0.2 Zn 0.3 Fe 2.5 O 4 film grown on a SiO 2 /Si substrate using the ferrite-plating method. The film exhibits a columnar-crystal structure. The typical diameter of the columnar grains is approximately 100 nm. ( c ) Photograph of an LSSE-based flexible TE sheet, in which a Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 film was formed on a 25-μm-thick polyimide substrate. 
( d ) TE voltage V as a function of an external magnetic field H , measured when a heat flux q was applied across the TE sheet. The sign of V is reversed, when the sign of H or q changes. ( e ) TE voltage from the TE sheet as a function of q . From the fitting with the solid line, the heat-flow sensitivity of this TE sheet was V / q = 0.98 nV/(W/m 2 ). Full size image A noticeable feature of the ferrite film, grown via such a layer-by-layer chemical process, was its columnar-crystal grain structure. Figure 2(b) depicts the cross-sectional scanning electron microscope (SEM) image of a Ni 0.2 Zn 0.3 Fe 2.5 O 4 film that was grown on a SiO 2 /Si substrate for the purpose of the SEM observation. The diameter of the columnar grain was typically approximately 100 nm. We also verified via transmission electron microscopy and electron diffraction measurements that the crystal orientation of the Ni 0.2 Zn 0.3 Fe 2.5 O 4 was coherently aligned within a single columnar grain. Such a columnar structure can function as a 1D spin-current conductor favorable for LSSE-based (and TSSE-free) heat-flow sensors because of the following two reasons. First, in the LSSE configuration shown in Fig. 1 , a magnon spin current is driven along the columnar grain and is thus less subject to grain scattering, effectively leading to the LSSE signal. Second, since the columnar-grain boundaries impede the transverse propagation of both magnons and phonons, in-plane components of a heat flow cannot effectively produce the TSSE in the light of previous studies (e.g., see ref. 33 , 34 ). Thus we can exclude the possible TSSE contribution, enabling us to correctly measure a heat flow penetrating the TE sheet via the LSSE. Using the ferrite plating technique, we successfully fabricated a flexible TE sheet based on the LSSE. Figure 2(c) represents a photograph of the prepared TE sheet. First, a 500-nm-thick Ni 0.2 Zn 0.3 Fe 2.5 O 4 film was grown on a 25-μm-thick polyimide substrate. 
Then, a Pt film with a thickness of 5 nm was formed on the Ni 0.2 Zn 0.3 Fe 2.5 O 4 film by means of magnetron sputter deposition. As shown in Fig. 2(c) , our TE sheet was highly flexible and easily bent without breaking the Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 film. The sheet was then cut into small pieces with a size of 8 × 4 mm 2 for TE measurements. Demonstration of the LSSE-based TE sheet for heat-flow sensing To evaluate how well the LSSE-based TE sheet functioned as a heat-flow sensor, we investigated its TE property in the following fashion. A heat flux q was driven across the 4 × 4-mm 2 central area of the TE-sheet sample by sandwiching the sheet between two Peltier modules. While driving the heat flow in such a manner, we simultaneously monitored the exact value of q penetrating the TE-sheet sample with a commercially available thin-plate-shaped heat-flow sensor, which was set immediately above the sample. Because the commercial heat-flow sensor was placed in direct contact with the central area of the TE-sheet sample, we could assume that the heat flux value monitored by the sensor was the same as the q actually penetrating across the sample. An external magnetic field H , which controls the direction of the magnetization M of the Ni 0.2 Zn 0.3 Fe 2.5 O 4 films, was also applied to the entire system. The TE voltage V between the two ends of the Pt film was measured with two contact probes. The resistance of the Pt film was determined to be R Pt = 238 Ω. Figure 2(d) represents V as a function of H , measured when heat fluxes of q = −13.7, −6.5, 0.0, 5.6 and 11.6 kW/m 2 were driven across the TE-sheet sample. The TE voltage was observed along the direction perpendicular to the direction of both q and H , as derived from equation 1 (see the inset of Fig. 2(d) ). The result shows that the sign of V is flipped when q or H is reversed, which is a typical behaviour of LSSE-based devices. The heat-flux dependence of the TE voltage in Fig. 
2(e) clearly demonstrates that V is proportional to q . The heat-flow sensitivity derived from the fitted line is V / q = 0.98 nV/(W/m 2 ). The demonstration of this linear relationship between V and q suggests that our LSSE-based TE sheet functioned as a heat-flow sensor. In an additional experiment, we confirmed that the TE sheet exhibits no output signal when a temperature gradient is applied in the in-plane direction (see Supplementary Information ). This suggests that the TSSE is negligibly small in our ferrite-plated film because of its 1D spin-current conducting property. Ferrite-thickness dependence of the LSSE-based TE sheet We performed additional experiments to ascertain the origin of the observed TE signal. Given that a ferrite composed of Ni 0.2 Zn 0.3 Fe 2.5 O 4 is typically a semiconducting ferrimagnet with a small but non-zero electrical conductivity, it can exhibit the ANE, which also produces a transverse voltage in the same experimental configuration as the LSSE. In our ferrite-plated film, however, the in-plane electrical resistance of the Ni 0.2 Zn 0.3 Fe 2.5 O 4 film was too high to be measured, which may be partly attributed to the vertically oriented grain boundaries of the columnar-structured film. Due to this transverse electric insulation, we could not observe any signals originating from the bulk ANE in the Ni 0.2 Zn 0.3 Fe 2.5 O 4 . However, there remains a possibility that the TE signal includes an ANE contribution caused by magnetic proximity effects at the Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 interface 35 . To shed light on the TE conversion mechanism, we investigated the TE properties of samples with varied ferrite-film thicknesses t F . Figure 3 presents the ferrite-thickness dependence of the heat-flow sensitivity ( V / q ) Norm normalized to the sensitivity at t F = 500 nm. The ( V / q ) Norm values monotonically increase for t F < 100 nm, whereas the t F dependence of V becomes saturated for t F > 100 nm.
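The sensitivity extraction of Fig. 2(e) amounts to a least-squares line through (q, V) pairs. The heat-flux values below are the ones quoted in the text, but the voltages are synthesized from the reported sensitivity of 0.98 nV/(W/m²), since the raw voltage readings are not tabulated; the sketch only shows the fitting step.

```python
# Reconstruct a Fig. 2(e)-style linear fit on synthetic data: voltages
# are generated from the reported sensitivity because raw values are
# not given in the text.
q = [-13.7e3, -6.5e3, 0.0, 5.6e3, 11.6e3]   # applied heat fluxes (W/m^2)
sens_true = 0.98e-9                          # reported V/q, in V per (W/m^2)
V = [sens_true * qi for qi in q]             # idealized TE voltages

# Ordinary least-squares slope: cov(q, V) / var(q).
n = len(q)
q_mean = sum(q) / n
v_mean = sum(V) / n
slope = (sum((qi - q_mean) * (vi - v_mean) for qi, vi in zip(q, V))
         / sum((qi - q_mean) ** 2 for qi in q))
# 'slope' recovers the 0.98 nV/(W/m^2) heat-flow sensitivity.
```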
The plots are well fitted to an exponential curve ( V / q ) Norm = 1 − exp(− t F /λ) with λ = 71 nm. Similar to recent LSSE studies using yttrium iron garnet (YIG) films 36 , this ferrite-thickness dependence is consistently explained by the magnon-driven LSSE scenario 27 , 28 , 37 , 38 , in which a certain ferrite thickness region (corresponding to the magnon-propagation length) below the Pt/ferrite interface effectively contributes to the voltage generation. On the other hand, the proximity-ANE scenario, which can occur at the Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 interface, cannot explain this dependence. Thus, our finding indicates that the obtained signal originated mainly from the bulk magnon spin current driven by the LSSE. The result also suggests that our columnar-crystalline film possesses good spin-current-conduction properties suitable for LSSE-based sensors. Though it is beyond the scope of this work, such 1D spin-current conductors might have unconventional magnon-propagation properties that differ from those of 3D conductors, because magnon-scattering events can be altered in such confined structures. Control of magnon propagation in low-dimensional conductors will be an exciting research topic for future work from both academic and practical viewpoints. Figure 3 Ferrite-film-thickness dependence of LSSE-based TE sheets. The t F dependence of the heat-flow sensitivity V / q , where the longitudinal axis is normalized by the sensitivity at t F = 500 nm. The dependence is well fitted by an exponential curve ( V / q ) Norm = 1 − exp(− t F /λ) with λ = 71 nm, consistent with the magnon-driven LSSE scenario. Full size image Bending-curvature dependence of the LSSE-based TE sheet Finally, we investigated the heat-flow-sensing capability of the flexible LSSE-based TE sheet when the sheet was bent.
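The exponential thickness model quoted above can be checked with a quick numerical sketch. The single-point inversion below is only an illustration, not the paper's fitting procedure (which fits the curve across many thicknesses at once); λ = 71 nm is the reported value.

```python
import math

# Thickness model from the text: (V/q)_norm = 1 - exp(-t_F / lam).
def norm_sensitivity(t_nm, lam_nm=71.0):
    return 1.0 - math.exp(-t_nm / lam_nm)

def invert_lambda(t_nm, y_norm):
    """Recover lam from one (thickness, normalized sensitivity) point."""
    return -t_nm / math.log(1.0 - y_norm)

y_100 = norm_sensitivity(100.0)    # ~0.76: already near saturation at 100 nm
lam = invert_lambda(100.0, y_100)  # recovers lam = 71 nm
```

This mirrors the observed saturation for t F > 100 nm: beyond roughly one magnon-propagation length, additional ferrite thickness contributes little extra signal.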
Figure 4(a,b) depicts the H -dependence of the TE voltage V for the same Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 /polyimide sample when a heat flux q was applied across the sample over a 20 × 20-mm 2 area under conditions in which the sample was flat or bent (with a radius of curvature of r = 17 mm), respectively. The dependence of the heat-flow sensitivity, V / q , on the curvature r −1 is presented in Fig. 4(c) . The result clearly demonstrates that V / q is nearly constant independent of r −1 , suggesting that the bending stresses applied to the Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 films do not significantly affect the TE conversion process consisting of the LSSE and the ISHE. This property, i.e., TE conversion that is independent of the bending condition, is quite desirable for heat-flow sensing on various curved surfaces, because it allows us to avoid additional calibration steps that would otherwise depend on individual measuring objects with various surface curvatures. Figure 4 Heat-flow sensing with a bent TE sheet. ( a ) TE voltage V from a flat Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 /polyimide sample as a function of an external magnetic field H measured when the heat flux q was applied across the sample over a 20 × 20-mm 2 area. ( b ) H dependence of the TE voltage V from a Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 /polyimide sample that was bent with a radius of curvature r = 17 mm, when the heat flux q was applied across the sample over a 20 × 20-mm 2 area. ( c ) Heat-flow sensitivity V / q as a function of curvature r −1 , indicating that V / q is almost independent of r −1 . Full size image Discussion We successfully demonstrated that an LSSE-based flexible TE sheet with a 1D spin-current conducting film functions as a heat-flow sensor. The ferrite-thickness dependence of the TE voltage suggests that the TE signal was caused predominantly by the LSSE, which is consistent with other reports using Pt/NiFe 2 O 4 39 .
The magnon-propagation length in our Ni 0.2 Zn 0.3 Fe 2.5 O 4 film perpendicular to the film plane is approximately 71 nm. The TE sheet exhibits nearly identical heat-flow sensitivity regardless of bending curvature, suggesting that our columnar-crystalline film retains good 1D spin-current-conduction properties even when bent. The outstanding features of our TE sheet in contrast to currently available sensors are its high flexibility in shape and remarkably low thermal resistance, both of which are highly desirable for versatile heat-flow sensing. Although we formed ferrite films on plastic substrates in this work, it is also possible to directly plate various heat sources with ferrite films, thereby offering a thermal-sensing function while minimally obstructing the innate heat flux. Such features will offer a variety of opportunities for less destructive heat-flow measurements. To use the TE sheets for a wide range of practical applications, the heat-flow sensitivity V / q must be further improved. A straightforward method to enhance V / q is to enlarge the size of the TE sheet, as the output voltage scales linearly with the film length l . We can also increase the effective length l within a certain area by adopting meandering-patterned metallic film structures 20 , 40 . Another strategy is to replace the Pt. We investigated TE-sheet samples with different metallic materials instead of Pt and found that the heat-flow sensitivity V / q of W/Ni 0.2 Zn 0.3 Fe 2.5 O 4 was 3.5-fold larger than that of Pt/Ni 0.2 Zn 0.3 Fe 2.5 O 4 (see Supplementary Information ). Moreover, we can enhance the sensitivity further by adopting recently reported FI/NM multilayer structures 19 , or FI/FM structures that can utilize both the SSE and ANE 20 , 21 . Although we applied an external magnetic field H to the TE sheet for our experimental demonstration, this step is not necessary if the spontaneous magnetization M of the ferrite is sufficiently stable.
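The enhancement routes listed above multiply, so a rough stacking estimate is possible. The 3.5× tungsten-for-platinum factor is the one reported in the text, but the 10× meander-pattern length gain is a purely hypothetical design choice used only for illustration.

```python
# Back-of-envelope stacking of the sensitivity-enhancement routes.
# 3.5x (W instead of Pt) is reported in the text; the 10x effective-length
# gain from a meander pattern is a hypothetical illustration.
base = 0.98e-9            # demonstrated sensitivity, V/(W/m^2)
w_gain = 3.5              # reported W/ferrite vs Pt/ferrite ratio
meander_gain = 10.0       # hypothetical meander-pattern length gain

improved = base * w_gain * meander_gain   # ~3.4e-8 V/(W/m^2)
```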
Such magnetic stability can be improved, for example, by doping cobalt into ferrite-plated films, which is known to enhance the coercive field of the ferrites 41 . The LSSE-based heat-flow-sensing technique, in which a heat flux induces an electrical signal indirectly via an LSSE-driven spin current, offers unconventional device-design opportunities, leading to novel heat-managing applications. Methods Sample preparation To prepare the Ni 0.2 Zn 0.3 Fe 2.5 O 4 film for the TE sheet via the ferrite plating method, we first mounted a 25-μm-thick polyimide substrate on a rotating stage and then sprayed an aqueous reaction solution (FeCl 2 + NiCl 2 + ZnCl 2 ) and an oxidizer (NaNO 2 + CH 3 COONH 4 ) from two nozzles placed above the stage, as shown in Fig. 2a . This setup enabled us to grow the ferrite film by alternating adsorption and oxidation of the ingredient materials (including Fe, Ni and Zn). During the process, the temperature of the stage was maintained at approximately 90 °C. The thickness of the Ni 0.2 Zn 0.3 Fe 2.5 O 4 film was controlled via the duration of this formation process. The composition of the ferrite film was analysed by inductively coupled plasma spectroscopy (ICPS). A Pt film was deposited on top of the Ni 0.2 Zn 0.3 Fe 2.5 O 4 film with a magnetron sputtering system. Immediately before the sputtering process, the sample was exposed to argon plasma for 10 s to clean the surface of the Ni 0.2 Zn 0.3 Fe 2.5 O 4 . TE conversion measurements To evaluate the TE conversion of a heat flow to electric voltage, the sample was cut into small 8 × 4-mm 2 pieces using a cutter. To investigate the heat-flow-sensing properties of the LSSE-based TE sheet, we drove a heat flow across the sheet using two commercial 4 × 4-mm 2 Peltier modules. The two Peltier modules were attached to the top and bottom of the TE sheet, enabling us to heat one side and cool the other side of the TE sheet.
The temperature difference, applied in such a manner, led to a heat flux penetrating through the TE sheet. Because the in-plane thermal conductance in our thin TE sheet was quite small, we can assume that the direction of the heat flux was nearly perpendicular to the TE sheet. While driving the heat flow, we simultaneously monitored the exact value of q penetrating the TE sheet using a commercial thin-plate-shaped heat-flow sensor. The sensor was placed between the upper Peltier module and the TE sheet, in direct contact with the Pt film of the TE sheet. With this setup, we can assume that the same amount of heat flux q flowed across both the TE sheet and the sensor. The generated TE voltage was measured with a digital multimeter. TE measurements of the bent samples To evaluate bent LSSE-based TE sheets as shown in Fig. 4 , we used pairs of oxide-coated aluminium blocks with curved (concave and convex) surfaces. In the experiments, the TE sheet was sandwiched by the concave and convex blocks with a certain bending curvature. To investigate the bending-curvature dependence, we prepared several pairs of such blocks with different surface curvatures, in which the lateral size of the blocks was fixed to 20 × 20 mm 2 . The heat-flow-sensing properties of the bent TE sheets were evaluated in the same manner as described above. The heat flux was driven across the TE sheet by two Peltier modules attached to the top and bottom of the block pair that sandwiched the sheet. Commercially available 20 × 20-mm 2 heat-flux sensors were also used to monitor the level of the heat flux penetrating across the TE sheet. Additional Information How to cite this article : Kirihara, A. et al. Flexible heat-flow sensing sheets based on the longitudinal spin Seebeck effect using one-dimensional spin-current conducting films. Sci. Rep. 6 , 23114; doi: 10.1038/srep23114 (2016).
A thermoelectric (TE) device using cutting-edge thermoelectric conversion technology has been created by a team comprising NEC Corporation, NEC TOKIN Corporation and Tohoku University. The new technology, based on the spin Seebeck effect, has a conversion efficiency 10 times higher than the conventional method. Thermoelectric conversion technology that converts energy abandoned as waste heat back into electric power could potentially save energy and reduce greenhouse gas emissions. Although conventional spin Seebeck thermoelectric devices have the advantages of low manufacturing costs and high versatility and durability, their energy conversion efficiency is inferior. "We have improved the conversion efficiency of this spin Seebeck thermoelectric device by more than 10 times because of its newly developed material and device structure," says Soichi Tsumura, General Manager, IoT Device Research Laboratories, NEC Corporation. "Furthermore, devices made of flexible material, such as resin, have been achieved using a manufacturing process that does not require high-temperature heat treatment." "The conversion efficiency of this new spin thermoelectric device has been improved by almost one million times when compared to the earliest device, and has taken an important step towards practical use as a generator element. The achievement of practical use as a heat flux sensor is also in sight," says Tsumura. Devices with bending resistance and low heat-treatment temperatures were achieved by a new deposition technology, which fabricates a fine ferrite film for spin Seebeck thermoelectric devices at 90°C, much lower than the 700°C used with the conventional method. Owing to the decrease in heat-treatment temperature, elements can be created on the surface of plastic film, etc., and flexible devices of various shapes can be created.
Credit: NEC Corporation The three parties aim to further the research and development of technologies to generate electricity from the large amount of waste heat emitted by things such as plants, data centers and vehicles. These results were achieved as part of the "Saitoh Spin Quantum Rectification Project" led by Tohoku University Professor Eiji Saitoh. It is funded by the Exploratory Research for Advanced Technology (ERATO) program of the Japan Science and Technology Agency (JST).
10.1038/srep23114
Biology
New study shows that the resilience of ecosystems can be measured from space
Taylor Smith et al, Empirical evidence for recent global shifts in vegetation resilience, Nature Climate Change (2022). DOI: 10.1038/s41558-022-01352-2 Journal information: Nature Climate Change
https://dx.doi.org/10.1038/s41558-022-01352-2
https://phys.org/news/2022-04-resilience-ecosystems-space.html
Abstract The character and health of ecosystems worldwide are tightly coupled to changes in Earth’s climate. Theory suggests that ecosystem resilience—the ability of ecosystems to resist and recover from external shocks such as droughts and fires—can be inferred from their natural variability. Here, we quantify vegetation resilience globally with complementary metrics based on two independent long-term satellite records. We first empirically confirm that the recovery rates from large perturbations can be closely approximated from internal vegetation variability across vegetation types and climate zones. On the basis of this empirical relationship, we quantify vegetation resilience continuously and globally from 1992 to 2017. Long-term vegetation resilience trends are spatially heterogeneous, with overall increasing resilience in the tropics and decreasing resilience at higher latitudes. Shorter-term trends, however, reveal a marked shift towards a global decline in vegetation resilience since the early 2000s, particularly in the equatorial rainforest belt. Main Natural ecosystems are severely threatened by climate change and biodiversity loss; the Amazon, African and southeast Asian rainforests are key examples that have attracted substantial recent attention 1 , 2 , 3 . These tropical vegetation systems have been inferred to exhibit multistability for broad ranges of mean annual precipitation 4 , 5 ; within the same precipitation ranges, both the rainforest state and an alternative savannah state are simultaneously stable. This implies that, even absent long-term changes in local or regional precipitation, transitions from the current rainforest state to the savannah state are possible and may be triggered by external perturbations such as droughts, forest fires and deforestation 6 .
Although ecosystem transitions in tropical rainforests have received widespread attention, the risk of transitions to alternative ecosystem states appears to be a global characteristic that extends to high-latitude 7 , 8 and dryland ecosystems 9 . Given that ecosystem transitions could turn net carbon sinks into carbon sources 3 and the tremendous potential of vegetation to reduce atmospheric carbon dioxide concentrations 10 , the mitigation of anthropogenic climate change and the maintenance of global biodiversity are strongly dependent on the resilience of vegetation systems worldwide. Ecosystem resilience is typically defined as the capacity to resist and recover from external disturbances 11 , 12 , 13 . Unfortunately, this definition only allows for the empirical measurement of resilience either in controlled experiments (by applying an artificial disturbance) or by waiting for occurrences of large external disturbances to natural vegetation systems. Due to the scarcity of suitably strong external perturbations, it is difficult to quantify the resilience of natural ecosystems at a global scale, and in particular to investigate resilience changes over time. Theoretically, the fluctuation–dissipation theorem (FDT) from statistical mechanics 14 , 15 , 16 , 17 suggests that for specific classes of systems, the response to external perturbations can be expressed in terms of the characteristics of natural fluctuations around the equilibrium state. In other words, the FDT states that the rate at which a system will return to equilibrium following an external disturbance can be determined from its internal natural fluctuations. The tremendous practical value of the FDT comes from the fact that, if it can be shown to hold for a given system, the response to external perturbations can be predicted on the basis of the internal variability of the system in question. 
Evidence that the FDT holds has been revealed in several real-world systems 17 , ranging from financial market data 18 , 19 to atmospheric and climate dynamics 20 , 21 . Several studies have suggested that the lag-one autocorrelation (AC1)—a measure of how strongly correlated neighbouring time spans of a given time series are—and variance of a system can be used as measures of vegetation resilience 1 , 22 , 23 , 24 , 25 , 26 , 27 . The variability of natural fluctuations can be estimated in terms of the variance 22 , 27 , 28 , while the strength of the system’s memory can be measured using the AC1 1 , 23 , 24 , 25 , 28 . Low-dimensional dynamical system frameworks and designed experiments justify this choice by showing that variance and AC1 increase as the system approaches a critical threshold beyond which a bifurcation-induced transition—a jump to an alternative stable state—occurs, which is interpreted as a loss of resilience 29 , 30 . The increase in AC1 together with a corresponding increase in variance have been termed early-warning signals for critical transitions; the underlying change in dynamics is referred to as ‘critical slowing down’ 22 , 28 . It has been shown that early-warning signals can be identified before abrupt climate transitions evidenced in palaeoclimate records 31 , 32 , 33 as well as in ecosystem 28 and climate 34 , 35 model simulations. However, although the AC1 and variance have been used to quantify the stability or resilience of different systems, their actual suitability as measures of ecosystem, and in particular vegetation, resilience has not been confirmed outside of controlled and model-based experiments 36 , 37 , and in particular not based on empirical evidence. In this article, we use empirical remotely sensed vegetation data to test for the correspondence between theoretical vegetation resilience—AC1 and variance—and the rates of recovery from perturbations. 
We first use large perturbations to derive empirical recovery rates for diverse landscapes, vegetation types and climate zones using two independent vegetation datasets based on optical (advanced very-high-resolution radiometer (AVHRR) normalized difference vegetation index (NDVI), 1981–2015 38 ) and passive microwave (vegetation optical depth (VOD), 1992–2017 39 ) data; these data measure changes in vegetation with different methods and thus provide complementary information for our analysis. We then show that for VOD, the empirically estimated recovery rates from large external perturbations are indeed closely related to the continuously measurable response to small natural fluctuations, quantified here by AC1 and variance. We further show that the AC1 and variance of NDVI are not well matched to empirically estimated recovery rates from large disturbances and conclude that VOD is a more suitable basis for measuring vegetation resilience. We emphasize that while both AC1 and variance have previously been used to estimate vegetation resilience 1 , their theoretically expected relationships with recovery rates from perturbations, and thus with resilience, have yet to be confirmed empirically for vegetation systems. Moreover, temporal changes in AC1 and variance of remotely sensed vegetation indices, as we investigate here, have rarely been studied 40 , 41 . By comparing with the empirical rates of recovery from external perturbations, we demonstrate using VOD that both AC1 and variance provide robust, empirically verified global resilience measures. On the basis of this relationship, we further quantify global-scale changes in vegetation resilience since 1992 and find coherent resilience loss across land-cover types that has accelerated in the past two decades. Quantifying vegetation recovery from external perturbations Vegetation in the natural world is constantly subject to disturbances that vary greatly in frequency and intensity. 
Many of these signals are subtle, and identifying minor and short-term disturbances is difficult. Large excursions from the typical vegetation state of an ecosystem can, however, be identified by abrupt transitions in time series of vegetation indices. The empirical local recovery rate can then be estimated after each abrupt negative transition by fitting an exponential function to the time series as it recovers towards its previous state (Fig. 1 , see Methods for details). Fig. 1: Global vegetation data. a , Global long-term mean of VOD 39 (1992–2017). b , VOD time series for a given location in the Brazilian Amazon (8.375° S, 50.875° W). Raw time series in black, with deseasoned and detrended time-series residual in blue (see Methods for details). c , Recovery of the exemplary time series to the previous mean state after a rapid transition, with commensurate exponential fits. Rare large disturbances, such as those in 2007 and 2010, can be used to track the recovery of vegetation and assign a recovery rate using an exponential fit. See Extended Data Fig. 1 for a corresponding figure based on the NDVI. Exp., exponential. Both VOD (Fig. 1 ) and NDVI (Extended Data Fig. 1 ) are subject to the same types of major external disturbances (for example, droughts or fires) that can rapidly reduce both vegetation density (VOD) and vegetation productivity or greenness (NDVI). It is important to note that while both datasets measure vegetation, the data do not describe the same vegetation parameters and hence do not respond identically to external shocks; this can in some cases mean that the number of detected transitions differs between the two vegetation datasets over the same period. In addition, while vegetation recovery is measurable in both data, the time frame of those recoveries, and hence the fitted exponential function, can be dramatically different for the same perturbation (Fig. 1 and Extended Data Fig. 1 ).
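The exponential-fit step can be sketched as follows (an illustration only: scipy's `curve_fit` stands in for the authors' fitting routine, and the function name, window length and synthetic series are our own assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery_rate(residual, t_min, horizon=120):
    """Fit x(t) = x0 * exp(r * t) to the steps after the post-disturbance minimum.

    Returns (r, r_squared); r < 0 indicates recovery towards the previous state,
    and |r| is the recovery rate used as the resilience measure.
    """
    seg = np.asarray(residual[t_min : t_min + horizon], dtype=float)
    t = np.arange(seg.size, dtype=float)
    (x0, r), _ = curve_fit(lambda t, x0, r: x0 * np.exp(r * t), t, seg, p0=(seg[0], -0.1))
    resid = seg - x0 * np.exp(r * t)
    r2 = 1.0 - resid.var() / seg.var()
    return r, r2

# Synthetic residual: a disturbance at t = 0 decaying with true rate r = -0.05
rng = np.random.default_rng(1)
t = np.arange(120)
series = -np.exp(-0.05 * t) + 0.02 * rng.standard_normal(t.size)
r, r2 = recovery_rate(series, t_min=0)
```

Poor fits (low R²) would then be discarded, mirroring the R² thresholds applied in Figs. 2 and 3.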
Further discussion of the limitations of the disturbance detection procedure can be found in Methods .

Estimating resilience from intrinsic variability

We find globally well-distributed recovery rates from diverse external shocks (Fig. 2 and Extended Data Fig. 2 ). Not all landscapes have experienced rapid and drastic changes in vegetation over the satellite measurement period; for such regions, it is impossible to directly measure vegetation resilience in terms of recovery from an external shock. Even in regions where perturbations are relatively frequent, they are too sparsely distributed to allow for an estimation of changes in the recovery rate, and thus resilience changes, through time (Fig. 2a ). Fig. 2: Global distribution of recovery rates. a , Recovery rate (for well-determined exponential fits, R 2 > 0.2) for VOD ( n = 11,538 perturbations for 10,620 unique locations). b , Theoretical estimate of the recovery rate computed via r AC1 = log[AC1] ( Methods ) from the AC1 of the detrended and deseasoned VOD time series at each location. c , Theoretical estimate of the recovery rate computed via r Var = −σ²/(2⟨x²⟩) ( Methods ) from the variance ⟨x²⟩ of the detrended and deseasoned VOD time series at each location. Bare earth, snow and anthropogenic land covers are excluded from the analysis 42 ( Methods ). Note the sparsity of grid cells where there have been abrupt shocks that can be exploited to estimate the recovery rate ( a ), as opposed to theoretical measures ( b , c ) that can be computed for all grid cells with vegetation. Also note the similarity of the spatial patterns in b and c and their resemblance to the spatial pattern shown in a as far as there are values for the recovery rate available. d , e , Relative deviation d of theoretical recovery rate estimated from AC1 ( d ) and variance ( e ) (for example, d = ( r − r AC1 )/ r ).
Clear patterns of over- and underestimation of recovery rate indicate that the theoretical framework does not perform equally in all locations. See Extended Data Fig. 2 for a corresponding figure based on the NDVI. The FDT suggests that the rate of a system’s recovery from large external perturbations is related to the variability (quantified by variance 22 , 27 , 28 ) and memory timescale (quantified by AC1 1 , 23 , 24 , 25 , 28 ) of natural fluctuations around the equilibrium 16 . Theory predicts an exponential relationship between the AC1 and the negative recovery rate r , that is, AC1 = e^{rΔt}, and a power-law relationship between the variance of the VOD time series x and the recovery rate r , that is, ⟨x²⟩ = −σ²/(2rΔt), where σ is the standard deviation of the driving noise, r < 0, and we set the time steps to Δt = 1 (see Methods for details). For the set of locations where empirical recovery rates can be estimated (Fig. 2a ), both AC1 and variance can be derived directly from the corresponding time series. For areas where it was possible to empirically estimate the recovery rate from large perturbations (Fig. 2a ), there is broad spatial agreement with the AC1 (Fig. 2b ) and variance (Fig. 2c ) estimates (see also zoomed-in maps, Supplementary Figs. 1 and 2 ). Moreover, the two theoretical recovery rate estimates themselves, which are available for all vegetated grid cells, exhibit similar spatial distributions (compare Fig. 2b,c ), especially if the relative order of values is considered (see the rank comparison in Supplementary Fig. 3 ). Note that the AC1-based estimate for the recovery rate r mostly underestimates the recovery rate (Fig. 2d ), especially in parts of North America, central Europe and Southern Africa, while the variance-based estimate for the recovery rate mostly overestimates the recovery rate (Fig. 2e ). To more concisely compare the empirical (Fig. 2a ) with the two theoretical recovery estimates (Fig.
2b,c ), we compare them on a point-by-point basis (Fig. 3 ). For the VOD, the expected relationships hold remarkably well; for NDVI, the link between empirical and theoretical resilience metrics is much weaker (see Extended Data Fig. 3 ). Fig. 3: Empirical confirmation of recovery rates. Comparison between empirically measured recovery rates and theoretical resilience metrics calculated over the five years preceding each transition, for VOD data 39 . a , AC1 versus recovery rates r from exponential fits to recovering time series with R 2 > 0.3; the magenta (blue) line shows binned medians (means), which are close to the exponential fit of the empirical relationship between recovery rate and AC1 values (red line). Grey shading shows data interquartile range. The AC1 thus shows the expected exponential relationship with the recovery rate, but quantitatively, some deviations from the theoretically expected AC1 = e r (black line) are apparent. b , Same as a but for the variance. The variance indeed shows the expected power-law relationship with the recovery rate, but as for the AC1, there are some deviations from the theoretically expected \(\langle {x}^{2}\rangle =-{\sigma }_{r}^{2}/2r\) relationship (black line), where we use the spatial mean of the driving noise σ r . The mean variance and corresponding interquartile range are also shown for the case where the individual σ r values for each grid cell are used to compute the variance (orange line, with shaded interquartile range). c , Binned medians of AC1 as a function of the empirically measured recovery rate r , for increasing thresholds on R 2 of the exponential fit to the recovering time series after abrupt transitions, as indicated in the legend. d , Same as c but for the variance. 
Note that the match between empirical and theoretical estimates of the recovery rate improves the more restrictive the empirical estimation of the recovery rate is; low R 2 variance medians in d plot on top of each other until R 2 > 0.3. Bare earth, snow and anthropogenic land covers are excluded from the analysis 42 ( Methods ). See Extended Data Fig. 3 for a corresponding figure based on the NDVI and Extended Data Fig. 4 for a corresponding figure using the whole time series to compute AC1 and variance with VOD. See Supplementary Fig. 4 for alternative measures of theoretical variance based on different σ estimates. When considering the AC1 and variance values directly as functions of the recovery rates for all available grid cells together, the theoretically expected relationships are overall corroborated by the observational data, although differences between geographical regions are neglected when investigating the relationship in this way. As expected, some differences are therefore visible (compare Fig. 2 ). We note that the correspondence between theoretical and empirical estimates becomes substantially better if only recovery rates from exponential fits with R 2 > 0.5 are considered, compared with recovery rates from all fits with R 2 > 0.1 (Fig. 3 ). This indicates that the poor exponential fits to the recovering time series after transitions are a key reason for the differences between measurement and theory and suggests in turn that the more reliable the recovery rate estimate, the closer is the match between empirical and theoretical estimates of the recovery rates. We also note that for the variance, uncertainties in estimating the standard deviation of the driving noise σ also probably play a role. Estimating the variance from the empirically determined recovery rate via ⟨x²⟩ = −σ²/(2rΔt) requires an estimate of σ. We calculate each individual σ i and then bin the resulting data points to obtain the orange curve in Fig.
3b , while we use the globally averaged σ to obtain the black curve.

Global shifts in vegetation resilience

Rapid large-scale perturbations are not evenly distributed in space and time (compare Fig. 2 ), which renders a reliable estimation of temporal resilience changes in terms of empirical recovery rates impossible. As justified by the relationship between recovery rates and theoretical resilience metrics (compare Fig. 3 ), we instead calculate resilience in terms of both the AC1 and variance in rolling five-year windows over all vegetated areas (Fig. 4 ). In the following, we define resilience loss (gain) if at least one of the two indicators (AC1 or variance) shows a statistically significantly increasing (decreasing) trend while the other indicator does not exhibit a significant trend in the other direction (Fig. 4 ). Fig. 4: Global resilience trends. a – c , Direction (+/–) of global resilience trends for AC1 and variance using VOD data 39 for 1992–2017 ( a ), 1992–2004 ( b ) and 2004–2017 ( c ). Bare earth, snow and anthropogenic land covers are excluded from the analysis 42 (white areas) ( Methods ). Linear trends are calculated on the basis of five-year rolling-window AC1 and variance estimates; only trends with P < 0.05 in either AC1 or variance are shown in colour (see Methods for details on significance testing). Pixels with mixed significant trends (for example, AC1 positive, variance negative) are shown in grey. See Supplementary Table 1 for raw pixel counts. See Extended Data Fig. 5 for global AC1 and variance trends for all three periods, and Supplementary Fig. 5 for latitude-aggregated trends. Note the increases in the strength of resilience loss since the 2000s, especially in the tropics (Extended Data Figs. 5 – 7 ). Over the period 1992–2017, the spatial pattern of resilience trends in terms of AC1 and variance is complex (Fig. 4a and Extended Data Fig.
5 ) but follows consistent latitudinal patterns where equatorial (for example, Amazon and Congo basins) and monsoon-driven (for example, southeast Asia) areas show generally increasing resilience (negative trends in both indicators), and high-latitude areas typically show decreasing resilience trends, especially for the Northern Hemisphere. The global picture for long-term resilience trends is thus mixed; there is only a slight majority of grid cells with resilience losses (54.2%) compared with the number of grid cells with resilience gains (41.6%) over the whole period 1992–2017. The given percentages refer to the set of grid cells that have at least one statistically significant trend in either of the two indicators; the unconfined class with significant yet opposing trends contributes the remaining ~4%. When we restrict the analysis to the first half of our study period (1992–2004), trends are again mixed, with increasing resilience in the tropics and decreasing resilience at higher latitudes (Fig. 4b ); these trends are stronger for variance than for AC1 (Extended Data Fig. 5 ). From the early 2000s onward, however, we observe a marked increase in resilience loss in terms of both indicators (that is, significantly positive trends in AC1 and variance; Fig. 4c and Extended Data Figs. 5 – 7 ). We observe an increase from 28.2% to 59.4% of pixels with resilience loss between the periods 1992–2004 and 2004–2017; the percentage of pixels showing resilience gains decreased from 37.9% to 33.8%. Areas with significant yet opposing trends contribute the remaining 33.8% and 6.8%, respectively; many regions with opposing significant trends until 2004 show coherent resilience loss in both indicators for the period since 2000, 2002 or 2004 (Fig. 4c and Extended Data Fig. 7 ). 
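The loss/gain bookkeeping behind these percentages follows the classification rule defined above; a small sketch (the function name and interface are hypothetical) makes the rule explicit for a single grid cell:

```python
def classify_resilience(ac1_slope, ac1_p, var_slope, var_p, alpha=0.05):
    """Classify one grid cell from the trends of its two indicators.

    'loss'  : at least one indicator rises significantly, none falls significantly
    'gain'  : at least one indicator falls significantly, none rises significantly
    'mixed' : significant yet opposing trends
    'none'  : no significant trend in either indicator
    """
    pairs = ((ac1_slope, ac1_p), (var_slope, var_p))
    up = any(s > 0 and p < alpha for s, p in pairs)
    down = any(s < 0 and p < alpha for s, p in pairs)
    if up and down:
        return "mixed"
    if up:
        return "loss"
    if down:
        return "gain"
    return "none"
```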
Some regions, such as the high northern latitudes, southern Africa and parts of Australia, show consistent resilience losses throughout the study period, which broadly agrees with previous findings based on alternative resilience metrics and AVHRR NDVI data 41 . Many regions, in particular the equatorial rainforest belt, have reversed from gaining resilience (blue regions, 1992–2004) to losing resilience (orange and red regions, 2000s onwards). Long-term (1992–2017) trends thus conceal a strong reversal from gains to losses in resilience in many regions. When changes in AC1 and variance are aggregated by land cover 42 , we infer that evergreen broadleaf forests show overall lower AC1 and variance (higher resilience) than other land-cover types (Extended Data Fig. 6 ); nevertheless, the global tendency is towards aggregate decreases in resilience (in terms of AC1) across all land-cover classes. These trends maintain a similar form if a three-year or seven-year rolling window is used to calculate continuous changes in resilience (Extended Data Fig. 6 ). It should be noted, however, that this approach conceals considerable spatial trend variability (Extended Data Fig. 5 ) and will (although confined to single land-cover types) smooth over vast differences in biomes worldwide; hence, these aggregated time series should be carefully interpreted in the context of the global trend maps (Fig. 4 and Extended Data Fig. 5 ). Variance presents a more mixed picture when aggregated by land-cover class, with losses of resilience being expressed more strongly since the early 2000s (Extended Data Fig. 6 ). Previous work proposed that AC1 will always increase towards a critical transition, but variance can in some cases decrease 27 ; the two metrics are also not guaranteed to change at the same rate. This is also to some degree expressed in our global trends (Extended Data Fig. 
5 ), where variance trends, particularly for the tropics, are more strongly negative than for AC1 over both the whole period 1992–2017 and the early period 1992–2004. Both AC1 and variance trends, however, are majority positive for the recent period ~2000–2017 (Fig. 4c and Extended Data Figs. 5 – 7 ). Note that many regions where we observe strong vegetation resilience loss are also fire prone (for example, Siberia, Canada and western North America); increasing fire frequencies due to drier conditions in these regions could explain some of the observed recent vegetation resilience loss 43 . Increases in temperature, alongside changes in precipitation and weather extremes, could also be a potential driver of changing vegetation resilience; we emphasize, however, that a detailed analysis of the different potential causes for the inferred resilience loss (and in particular its acceleration during the past two decades) is still lacking and is an important topic for future research.

Discussion

Our results provide empirical evidence that both AC1 and variance are directly related to vegetation resilience, defined as the recovery rate from external perturbations. The AC1 and variance can hence be used to estimate resilience in situations where controlled experiments are not possible and external perturbations are rare. Our findings, therefore, justify the usage of AC1 and variance as vegetation resilience metrics 1 , 41 , 44 and provide an empirical basis for future studies based on these theoretical resilience metrics. However, our results also show that resilience estimates derived directly from the common AC1 and variance metrics via the theoretical relationships may be slightly biased; instead, the modified empirical relationships revealed in Fig. 3 should be used to translate AC1 and variance into the recovery rate as a measure of resilience.
On the basis of this empirically confirmed relationship between AC1/variance and vegetation resilience, we infer a heterogeneous spatial pattern of resilience gains and losses; resilience losses in the high northern latitudes are consistent since the early 1990s, but in the tropics, we detect gains during the 1990s and pronounced resilience losses since around the year 2000. While the directions of AC1 and variance trends broadly agree (Fig. 4 and Extended Data Figs. 5 – 7 ), there remains considerable spatial heterogeneity. We find marked differences in our results when using the NDVI instead of the VOD data. While we cannot say with complete certainty what drives this disparity, it is likely that differences in the parameters measured by the satellites play a critical role. VOD is primarily sensitive to vegetation density and, thus, will respond to changes in both leafy and woody biomass 39 . NDVI, however, is sensitive to ‘greenness’, which is often interpreted as vegetation productivity or chlorophyll content; it is well known that NDVI is a poor estimator of biomass 45 . Recovery in NDVI after a disturbance can thus be rapid, even if a completely new species mix accounts for the post-disturbance vegetation growth (for example, forest replaced by grass). VOD, however, will remain suppressed until vegetation density (for example, leaves and stems) returns. It is thus likely that the empirically derived recovery rates for NDVI contain much higher levels of noise and that some recoveries to previous NDVI values represent a transition to a new vegetation mix rather than a return to the actual previous vegetation state. The relatively poor constraint on vegetation type provided by NDVI is a major barrier to its use in assessing ecosystem state and stability; we therefore propose employing VOD data for such purposes instead. A few potential caveats should be kept in mind when interpreting our results.
(1) We do not have a strong constraint on the type and cause of the vegetation perturbations used to calculate recovery rates. Sufficient data on all types of disturbances, their spatial extent and their magnitude do not exist; we thus rely on a data-driven approach to estimate the timing and magnitude of a given disturbance. We note, however, that since we determine the empirical recovery rates using only parts of the time series following an abrupt transition, we can estimate a recovery rate without knowing what kind of event (for example, fire, drought) caused the abrupt transition. (2) Possible spurious or missed time-series transitions are carried forward into our analysis of the global relationship between empirical and theoretical vegetation resilience; this probably accounts for some of the scatter seen in Fig. 3 (see Methods for further details). (3) Some changes in variance and autocorrelation are not necessarily related directly to vegetation resilience, for example, in the case of time-lagged vegetation response to water deficits 46 that could modify the measured AC1. At the global scale of our analysis, however, we posit that our empirical confirmation of resilience metrics and long-term trends remain robust. (4) We are limited by the mathematical framework to studying only systems that return to the previous state and therefore probably miss many important ecosystem transitions from which there has been no recovery to the original state. Finally, (5) it is important to note that we cannot say for certain whether the acceleration of resilience loss observed in the past decades (Fig. 4 ) will continue into the future; indeed, it is possible that global vegetation resilience is responding to a (multi-)decadal climate variability mode (compare Extended Data Fig. 6 ), which could in principle drive a global-scale reversal towards renewed resilience gains. 
Theoretically, a critical transition will occur when the AC1 reaches a value of one, corresponding to a vanishing recovery rate; in practice, however, extrapolating AC1 trends into the future is not feasible. Our results are based on empirical data and are thus not predictive; they show only how vegetation resilience has changed in recent decades. We have also not assessed changes in the magnitude or frequency of external disturbances (for example, droughts 47 ), which also play a key role in controlling global vegetation health; a comparison between vegetation resilience and contemporaneous changes in external disturbances would provide key context for the attribution of observed resilience changes to explicit drivers. Despite these caveats, our work represents the first empirical confirmation of previously proposed vegetation resilience metrics in terms of variance and AC1 and thus provides the basis for further investigations. Our study shows that the satellite-derived VOD data can be used to establish a global empirical manifestation of the FDT for vegetated ecosystems. Vegetation resilience, defined as the capacity to recover from external perturbations, can hence be approximated from the characteristics of natural internal variability in terms of AC1 and variance. On the basis of this correspondence, we identify a global loss in vegetation resilience over the course of the past decades, although the spatial pattern is heterogeneous and the inferred resilience changes depend on climate zones. The spatial pattern is complex for the full period for which reliable VOD data are available (1992–2017), with overall resilience gains in the tropical belts and losses in the higher northern and southern latitudes. 
From the 2000s onwards, however, we find globally almost coherent resilience loss; further work is required to constrain the causes of this loss and especially to investigate whether the observed resilience losses can be attributed to anthropogenic climate and land-use change. Our results establish a firm basis for a global, satellite-driven monitoring system of ecosystem resilience.

Methods

Data preparation

We use two vegetation datasets in our analysis to provide a holistic view of vegetation response to shocks and stresses. (1) VOD at 0.25° spatial resolution; specifically, we employ the Ku-band and use daily values for the period 1992–2017 39 . Note that we do not use the entire VOD data record (1987–) as some pixels exhibit extreme discontinuities before 1992 (Extended Data Fig. 6 ). We posit that this is due to the change from Special Sensor Microwave/Imager satellite F08 to F11 in the VOD dataset 39 . While we observe these discontinuities only in the tropics, we choose to discard all data before 1992 for consistency; it should be noted, however, that our global-scale results are robust whether we use 1987 or 1992 as our first year of data (Extended Data Fig. 6 ). (2) NDVI (from AVHRR) at 1/12° spatial and 15-day temporal resolution for the period 1981–2015; specifically, we use GIMMSv3g 38 . We further use the moderate-resolution imaging spectroradiometer (MODIS) MCD12C1 land-cover database (2014 annual composite, resampled via the mode of land covers in each VOD/NDVI pixel) 42 to break our analyses into distinct land-cover types (for example, Extended Data Fig. 6 ). To limit the impact of anthropogenic land use on our results, we further use MODIS MCD12Q1 (500 m, annually 2001–2017) land-cover data to identify any pixels that were at any point during the period 2001–2017 subject to human land use (for example, urban, cropland).
We then remove any NDVI/VOD pixels that had one or more anthropogenic land-cover pixels (at least one 500 m pixel) in at least one year between 2001 and 2017. This step helps to remove pixels that, for example, were once logged and then returned to grasslands; those pixels would not be classified as ‘anthropogenic’ for the entire period following the logging and thus might introduce spurious results. While this does not completely eliminate anthropogenic influence from our results (we do not have sufficient land-cover data before the MODIS sensing period), it conservatively removes all 0.25° (~25 × 25 km) regions where human use occurred. We thus cannot completely rule out the influence of human-driven land-cover change on our results at the global scale but have endeavoured to remove it to the furthest extent possible given data limitations. As a final robustness check, we have also used the ref. 48 global deforestation dataset to remove any pixels from our long-term trend data (Fig. 4 ) that suffered forest loss (Supplementary Figs. 6 and 7 ); as this dataset also includes non-anthropogenic forest loss—for example, due to natural fires—it serves as an even more conservative land-cover removal step. Removing these additional pixels does not substantially impact our reported long-term trend results or our inferred conclusions. Cloud cover and other data artefacts are removed from the NDVI data using an upward-smoothing approach to gap filling 49 . VOD data are resampled to a twice-monthly time step to match the temporal resolution of the NDVI data by taking the median of each time window; this step ensures that divergent results between the two vegetation datasets are not due to spatial or temporal sampling differences. Using these cleaned and evenly sampled time series, we then deseason and detrend the data using seasonal trend decomposition by loess (STL 50 , 51 , 52 ). 
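A much-simplified stand-in for this step (plain NumPy; it removes a mean seasonal cycle and a centred moving-average trend rather than performing STL's iterative loess fits, so it only approximates the authors' procedure; the series is synthetic):

```python
import numpy as np

def deseason_detrend(x, period=24, trend_window=47):
    """Return the residual after removing the mean seasonal cycle and a
    centred moving-average trend (a crude surrogate for STL's loess fits)."""
    x = np.asarray(x, dtype=float)
    # Seasonal component: average value of each phase of the cycle
    seasonal = np.array([x[p::period].mean() for p in range(period)])
    deseasoned = x - np.tile(seasonal, x.size // period + 1)[: x.size]
    # Trend component: centred moving average (window of roughly two years)
    kernel = np.ones(trend_window) / trend_window
    pad = trend_window // 2
    padded = np.pad(deseasoned, pad, mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")
    return deseasoned - trend

# Illustrative twice-monthly series: seasonal cycle + slow trend + noise
rng = np.random.default_rng(2)
t = np.arange(24 * 20)  # 20 years at 24 steps per year
raw = np.sin(2 * np.pi * t / 24) + 0.001 * t + 0.1 * rng.standard_normal(t.size)
residual = deseason_detrend(raw)
```

The window lengths mirror the values quoted in the text (period 24 at twice-monthly sampling, a 47-point trend smoother).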
We decompose the full-year signal using a period of 24 (one year at bi-monthly time sampling) and an adaptive loess filter. We use a value of 47 for the trend smoother (one point less than two years) and 25 for the low-pass filter (one point more than one year), according to the rules of thumb originally presented by ref. 50 (see code archive 53 for details). We then maintain the residual (deseasoned and detrended) time-series term for analysis. Note that the VOD dataset is a multi-satellite composite, with variable overlap between different input Ku-band datasets 39 . As multiple datasets are averaged in different configurations throughout the VOD period, there is the potential for changes in noise levels that could influence the computed AC1 and variance values if the underlying signal (for example, vegetation) changes on a slower timescale than the measurement noise. Stronger averaging associated with an increasing number of satellites would lead to step-wise increases in AC1 and step-wise decreases in variance. For the period we consider, we do not see step changes in AC1 or variance as would be expected if the noise level or character changed with the introduction or removal of a new satellite; indeed, we see consistent resilience loss during long periods of constant satellite configurations (for example, 2002–2009, Extended Data Fig. 6 ). Furthermore, there are no contemporaneous jumps in the variance, which would also be expected to change with shifts in data averaging. We posit that the changes in AC1 and variance that we observe are highly unlikely to be driven by data aggregation and are instead representative of a global change in vegetation resilience.

Perturbation detection and recovery analysis

We use two methods to detect perturbations in our residual time series: (1) a moving-average 54 and (2) a linear-fit approach 55 .
For both methods, we use an 18-point (9 month) moving window over our residual time series and calculate either the simple mean difference between the first and second halves of our moving window (method 1) or a linear trend over the moving window (method 2). We then smooth these resultant derivative time series with a Savitzky–Golay filter (7 points, first order) to remove high-frequency noise 56 . Finally, we isolate any derivative values above the 99th percentile and label consecutive time steps as individual disturbance periods. We then use the highest peak within each disturbance as the perturbation date. Note that the results of our analysis are nearly identical whether we use method (1) or method (2) to detect perturbations; thus, we present here only data based on method (1). In our tests, a comparable set of disturbances was found using 12-, 24-, 36- and 72-point moving windows, which resulted in similar spatial (for example, Fig. 2 ) and global (for example, Fig. 3 ) patterns; for simplicity, we present only results using the 18-point moving window here. As we use a percentile approach to delineate large perturbations, we will not always capture each perturbation for a given time series; our detected perturbations will be biased towards the largest excursion within each individual time series. We acknowledge that not all events will be equally represented in both the VOD and NDVI datasets; in the case where a much stronger response is engendered in one dataset than the other, the percentile threshold may not identify the same event in both time series. Furthermore, we will by construction detect some non-significant perturbations, in particular for the case where a given time series does not experience a strong disturbance. We thus impose the condition that the raw time series must descend more than 0.01 to be considered a valid perturbation. 
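Detection method (1) might be sketched as follows (window sizes, filter settings and percentile follow the text; the sign handling and the window-centring of the reported index are our own assumptions):

```python
import numpy as np
from scipy.signal import savgol_filter

def detect_drops(residual, window=18, pct=99):
    """Flag abrupt negative transitions in a residual series.

    Mean difference between the second and first halves of a moving window,
    smoothed with a 7-point, first-order Savitzky-Golay filter; excursions
    beyond the 99th percentile of the absolute values are kept.
    """
    half = window // 2
    diff = np.array([
        residual[i + half : i + window].mean() - residual[i : i + half].mean()
        for i in range(len(residual) - window)
    ])
    smooth = savgol_filter(diff, window_length=7, polyorder=1)
    thresh = np.percentile(np.abs(smooth), pct)
    drops = (np.abs(smooth) > thresh) & (smooth < 0)  # assumed: keep drops only
    return np.flatnonzero(drops) + half               # index at the window centre

# Synthetic residual with a single abrupt drop at t = 300
rng = np.random.default_rng(3)
x = 0.05 * rng.standard_normal(600)
x[300:] -= 1.0
hits = detect_drops(x)  # indices clustered around t = 300
```

Consecutive flagged steps would then be merged into one disturbance, with the strongest peak taken as the perturbation date and the 0.01 minimum-descent condition applied on the raw series.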
While we do not identify every perturbation over the entirety of both datasets, we generate a large and diverse set of recovery rates that are well distributed in space and time. To ensure that our estimated recovery rates represent a return to the previous state, and not a transition to a new vegetation regime, we further apply the condition that the five years of data before and after the disturbance must pass a two-sample Kolmogorov–Smirnov test ( P < 0.05). We choose five years as our baseline to minimize the impact of long-term (for example, decadal) changes in vegetation state while maintaining enough data on both sides of the transition for a robust comparison. For each detected time-series perturbation, we then find the local minimum of the residual time series with a two-month constraint to account for the fact that disturbances are often detected before the residual time series reaches its lowest point. We then take a period of five years after the local minimum and fit an exponential function, capturing both the exponent r and the coefficient of determination R 2 . To create the map for Fig. 2 , if there is more than one transition at a given pixel location, we use the average recovery rate of all transitions. For Fig. 3 , we maintain all recovery rates (for example, a single time series could contribute more than one recovery rate). We note that most locations studied have only one significant transition during the study period, and only a relatively small number have two or more. The computed transition points and recovery rates can be found in our data repository 53 .

Resilience estimation

Resilience is defined as the capacity to recover from external perturbations 11 , 12 . Quantitatively, it can be determined in terms of the recovery rate r after a perturbation to some value x 0 : $$x(t)\approx {x}_{0}{e}^{rt}$$ where x ( t ) is the state of the system at time t after the perturbation.
If r is negative, the system will recover to its equilibrium state at rate ∣ r ∣ . The characteristic recovery time is given by ∣ r ∣ −1 . Note that for positive r , the initial perturbation would instead be amplified, indicating that the system is not resilient. Empirically, we estimate r for each perturbation in each residual NDVI and VOD time series as described in the previous section. The AC1, a measure of how strongly correlated neighbouring time spans of a time series are, has been suggested as a measure for resilience 1 , 23 , 24 , 25 , 57 and more generally as an early-warning indicator for forthcoming critical transitions 28 , 31 . Theoretically, this can be motivated from a linearization of the stochastic dynamics around a given equilibrium point x* . For the fluctuations \(\bar{x}=x-{x}^{* }\), the linearized dynamics read $$\frac{{\mathrm{d}}\bar{x}}{{\mathrm{d}}t}=\kappa \bar{x}+\sigma \eta \,,$$ which defines an Ornstein–Uhlenbeck process with linear damping rate κ < 0 and white-noise forcing with standard deviation σ > 0. It can be shown that the variance \(\langle {\bar{x}}^{2}\rangle\) and lag- n autocorrelation α ( n ) of the stochastic process obtained from a discretization of the Ornstein–Uhlenbeck process into time steps Δ t are given by 58 $$\langle {\bar{x}}^{2}\rangle =\frac{{\sigma }^{2}}{1-{e}^{2\kappa {{\Delta }}t}}\approx -\frac{{\sigma }^{2}}{2\kappa {{\Delta }}t}$$ and $$\alpha (n)={e}^{n\kappa {{\Delta }}t}\,.$$ If the stability of an equilibrium state gradually decreases, κ will approach zero from below; correspondingly, the variance \(\langle {\bar{x}}^{2}\rangle\) will diverge to positive infinity and the AC1 α (1) will increase towards +1. Such increases in the damping rate κ , in the variance of the fluctuations \(\langle {\bar{x}}^{2}\rangle\) and in the AC1 α (1) can thus serve as precursor signals for a forthcoming critical transition and, in relative terms, as measures of stability or resilience changes. 
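The discrete-time relations above are easy to verify numerically. The sketch below simulates the AR(1) discretization of the Ornstein–Uhlenbeck process with `scipy.signal.lfilter` and compares the empirical AC1 and variance against e^{κΔt} and σ²/(1 − e^{2κΔt}); the parameter values are purely illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def simulate_ou(kappa, sigma, dt, n, seed=1):
    # AR(1) discretization of the OU process:
    #   x[i] = exp(kappa * dt) * x[i-1] + sigma * eta[i],  eta ~ N(0, 1)
    rng = np.random.default_rng(seed)
    phi = np.exp(kappa * dt)
    noise = rng.normal(0.0, sigma, n)
    # lfilter implements the recursion y[i] = noise[i] + phi * y[i-1].
    return lfilter([1.0], [1.0, -phi], noise)

def ac1(x):
    # Lag-1 autocorrelation of a (mean-removed) time series.
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

kappa, sigma, dt = -0.5, 0.1, 1.0        # illustrative parameters
x = simulate_ou(kappa, sigma, dt, n=200_000)
theory_ac1 = np.exp(kappa * dt)                       # alpha(1) = e^{kappa dt}
theory_var = sigma**2 / (1.0 - np.exp(2 * kappa * dt))
```

As κ approaches zero from below, `theory_ac1` approaches 1 and `theory_var` diverges, matching the precursor-signal behaviour described in the text.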
The theoretical estimates for the recovery rates shown in Fig. 2b for AC1 and in Fig. 2c for the variance are given in terms of the damping rate κ , obtained by inverting the preceding equations. For the variance, an estimate of the driving noise σ r is also needed, which we obtain from $$\frac{{\mathrm{d}}\bar{x}}{{\mathrm{d}}t}=r\bar{x}+{\sigma }_{r}\eta \,,$$ where we used the empirically estimated recovery rate r rather than the damping rate κ on the right-hand side. Practically, we obtain very similar theoretical expressions for the variance when computed using the σ κ obtained when putting κ instead of r into the preceding equation (Supplementary Fig. 4 ). For an empirical confirmation of the FDT, we thus have to show that for the observed vegetation data, an exponential relationship between the AC1 and the recovery rate r , as well as a power-law (1/ r ) relationship between the variance and the recovery rate r , hold. It is important to note that for the comparison between empirical recovery rates and AC1 or variance, we consider only time series that eventually return to their pre-disturbance state, implying that the residual time series under study are, apart from infrequent large perturbations, approximately stationary. Long-term trend estimation To better understand temporal changes in vegetation resilience, we calculate the AC1 and variance on moving windows (with a size of 3, 5 and 7 years) over each entire residual time series. Using these windowed AC1 and variance measurements, we calculate Kendall–Tau 59 statistics to check for increasing or decreasing trends. As our rolling-window data are by construction serially correlated, we test for statistical significance based on a set of 10,000 phase-shuffled surrogates, which preserve the variance and autocorrelation function of the original time series 31 , 32 , 33 . 
These phase surrogates are obtained by computing the Fourier transform of the original time series, uniformly randomly shuffling their phases and then applying an inverse Fourier transform to each of them. We then estimate, from the surrogate distribution, the probability of obtaining our measured AC1 Kendall–Tau trends by chance, using a significance threshold of P < 0.05. Finally, we discard six months of data at either end of each time series before calculating trends, as the variance and autocorrelation of the residual produced by the STL procedure are less reliable within one half of the length of the seasonal decomposition window. The Python code to replicate our trend estimation procedure can be found in our code repository 53 . Data availability The satellite data used in this study are publicly available 38 , 39 , 42 . The data used for Figs. 2 and 3 are available via Zenodo: . Code availability The Python code used in this study is available via Zenodo: .
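The trend-estimation pipeline above (rolling-window AC1 and variance, Kendall–Tau trends, Fourier phase surrogates) can be sketched as follows. For brevity the surrogate count is far below the paper's 10,000, the window is fixed at five years of monthly data, and, as a simplification, the surrogate test is applied directly to the series whose trend is being assessed; all names are ours.

```python
import numpy as np
from scipy.stats import kendalltau

def rolling_indicators(residual, window=60):
    """AC1 and variance over a sliding window (e.g. 5 years of monthly data)."""
    ac1s, variances = [], []
    for i in range(len(residual) - window + 1):
        w = residual[i:i + window] - np.mean(residual[i:i + window])
        ac1s.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
        variances.append(np.var(w))
    return np.asarray(ac1s), np.asarray(variances)

def phase_surrogate(x, rng):
    # Randomize Fourier phases while keeping the power spectrum, which
    # preserves the variance and autocorrelation function of x.
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    surr_spec = np.abs(spec) * np.exp(1j * phases)
    surr_spec[0] = spec[0]  # keep the zero-frequency (mean) component
    return np.fft.irfft(surr_spec, n=len(x))

def trend_pvalue(series, n_surr=200, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(len(series))
    tau_obs = kendalltau(t, series)[0]
    taus = np.array([kendalltau(t, phase_surrogate(series, rng))[0]
                     for _ in range(n_surr)])
    # Empirical two-sided p-value: fraction of surrogates with a Kendall-Tau
    # trend at least as strong as the observed one.
    return np.mean(np.abs(taus) >= abs(tau_obs))
```

Because the surrogates preserve autocorrelation, this test accounts for the serial correlation that rolling-window indicators introduce by construction.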
A natural habitat's ability to withstand and recover from damage can be empirically monitored from space—and the method may prove important during upcoming decades of climate and land-use change. The first study to empirically document that vegetation resilience can be measured from space is published today in Nature Climate Change by a research team from the University of Potsdam, the Potsdam Institute for Climate Impact Research (PIK), the Technical University of Munich (TUM) and the University of Exeter. The method will likely be important for future assessments of declines in vegetation resilience due to anthropogenic climate change and unsustainable resource management. "New ways of handling large data sets make it possible to check on widely held theories and assumptions about how ecosystems function," said lead author Taylor Smith, from the University of Potsdam. "Our work empirically confirms one of those theories—that it is possible to measure how resilient vegetation is to outside pressure with a straightforward mathematical model." The study used observational data to estimate the variability of global vegetation as well as the speed of recovery after large losses in vegetation. By analyzing different satellite products since 1992, the group shows that simple metrics can be used to estimate the resilience of ecosystems to large shocks—even where large losses of vegetation haven't happened yet. "So far it has been difficult to reliably measure vegetation resilience at a global scale," said co-author Niklas Boers, TUM, PIK and Exeter's Global Systems Institute. "We used powerful mathematical results to overcome this problem. This allows us to continuously measure changes in vegetation resilience at any place on the Earth's surface. We provide a solid, empirically confirmed framework for monitoring vegetation resilience from space." 
The work further reveals that in many regions, global vegetation has lost resilience over the last two decades, meaning vegetation has become more vulnerable and takes longer to regain its natural equilibrium after disturbances. "Vegetation resilience can be thought of as the ability to recover from large shocks such as droughts or fires. We find very different long-term trends in resilience—depending on climate zone and vegetation type—but overall, declines in vegetation resilience have become more common during the last two decades," said Smith. The analysis shows that on average, vegetation initially gained resilience globally during the '90s. Then a shift took place with a more pronounced resilience loss since the early 2000s. The finding indicates that especially tropical rainforests and Siberian Boreal forests have grown more vulnerable to events like wildfires, pests, human disturbances, and natural catastrophes. Numerous factors might contribute to this shift, such as natural variability, anthropogenic climate change, increasing human land use and deforestation, and a higher frequency of droughts and wildfires. "We urgently need to intensify our efforts to detect potential changes in vegetation resilience and to understand the underlying drivers," said Boers. "We expect anthropogenic global heating as well as land-use change to play an important role, but many processes aren't well understood, making it difficult to predict the fate of natural vegetation systems in the coming decades." Smith added: "Satellite data can play a crucial role here, particularly in continuously monitoring the health of vegetation and other ecosystems." The study is part of the TiPES project, an EU Horizon 2020 interdisciplinary climate science project on tipping points in the Earth system. Eighteen partner institutions work together in more than 10 countries. 
TiPES is coordinated and led by The Niels Bohr Institute at the University of Copenhagen, Denmark and the Potsdam Institute for Climate Impact Research, Germany.
10.1038/s41558-022-01352-2
Medicine
Automated next generation sequencing platform can accurately screen thousands for COVID-19
Nature Communications (2021). DOI: 10.1038/s41467-021-21653-y Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-21653-y
https://medicalxpress.com/news/2021-03-automated-sequencing-platform-accurately-screen.html
Abstract Population-scale sweeps of viral pathogens, such as SARS-CoV-2, require high-intensity testing for effective management. Here, we describe “Systematic Parallel Analysis of RNA coupled to Sequencing for Covid-19 screening” (C19-SPAR-Seq), a multiplexed, scalable, readily automated platform for SARS-CoV-2 detection that is capable of analyzing tens of thousands of patient samples in a single run. To address strict requirements for control of assay parameters and output demanded by clinical diagnostics, we employ a control-based Precision-Recall and Receiver Operator Characteristics (coPR) analysis to assign run-specific quality control metrics. C19-SPAR-Seq coupled to coPR on a trial cohort of several hundred patients performs with a specificity of 100% and sensitivity of 91% on samples with low viral loads, and a sensitivity of >95% on high viral loads associated with disease onset and peak transmissibility. This study establishes the feasibility of employing C19-SPAR-Seq for the large-scale monitoring of SARS-CoV-2 and other pathogens. Introduction Viral pathogens such as SARS-CoV-2, which produce large numbers of asymptomatic or mildly symptomatic patients, present unique challenges for public health agencies trying to manage both travel and local spread. Physical distancing is the current major strategy to suppress spread of the disease, but with enormous socio-economic costs. However, modeling and studies in isolated jurisdictions suggest that active population surveillance through systematic molecular diagnostics, combined with contact tracing and focused quarantining, can significantly suppress disease spread 1 , 2 , 3 ; such measures have reduced disease transmission rates and the number of infected people, and have prevented saturation of the healthcare system 4 , 5 , 6 , 7 . However, reliable systems allowing for parallel testing of tens of thousands to hundreds of thousands of patients in larger urban environments have not yet been employed. 
Here we describe “COVID-19 screening using Systematic Parallel Analysis of RNA coupled to Sequencing” (C19-SPAR-Seq), which is a next generation sequencing (NGS)-based platform 8 for analyzing tens of thousands of COVID-19 patient samples in a single instrument run. To enable NGS-based diagnostics we employed large numbers of control samples embedded in each run coupled to control-based Precision-Recall and predictive Receiver Operator Characteristics (coPR) analysis that assigns run-specific thresholds and quality control metrics. C19-SPAR-Seq coupled to coPR on a trial cohort of over 600 patients performed with a specificity of 100% and sensitivity of 91% on samples with low viral loads versus >95% on samples with the higher viral loads associated with disease onset and peak transmissibility. Our study thus establishes the feasibility of employing C19-SPAR-Seq for the large-scale monitoring of SARS-CoV-2 and other pathogens. Results Multiplex detection of SARS-CoV-2 using C19-SPAR-Seq The current gold standard diagnostic for SARS-CoV-2 is Real-Time Quantitative Polymerase Chain Reaction (RT-qPCR), which is not readily adaptable to large-scale population testing 9 . To establish a population-scale testing platform we designed a SPAR-Seq multiplex primer mix v1 that targets RNA-dependent RNA polymerase ( RdRP ), Envelope ( E ), Nucleocapsid ( N ), and two regions of the Spike ( S ) gene that correspond to the receptor-binding domain (RBD) and the polybasic cleavage site (PBS) (Fig. 1a , Supplementary Table 1 and Supplementary Data 1 ). The latter two are SARS-CoV-2-specific regions that capture five key residues necessary for ACE2 receptor binding ( Srbd ) and the furin cleavage site ( Spbs ) that is critical for viral infectivity 10 , 11 . 
Thus, the RdRP-specific primers could produce an amplicon from SARS-CoV-1 that can be readily distinguished based on sequence analysis, while the Spike-specific primers, targeting the RBD and Polybasic site regions, would distinguish a SARS-CoV-2 infection. For quality control, we targeted Peptidylprolyl Isomerase B ( PPIB ). Current standard testing strategies for viral pathogens employ gene-specific primers in “all-in-one” qRT-PCR reactions that could in principle be adapted to incorporate barcodes into gene-specific primers. However, to allow for rapid adaptation to test for novel and multiple pathogens, and/or profiling host responses we used a generic oligo-dT and random hexamer primed reverse transcription step followed by multiplex PCR and barcoding in a rapid, readily automated format we call “COVID-19 screening using Systematic Parallel Analysis of RNA coupled to Sequencing” or C19-SPAR-Seq (Fig. 1b , Supplementary Table 1 and Supplementary Data 1 ). Although cost is often cited as a concern for NGS-based testing, our platform is cost effective with retail material costs ranging from USD ~$9 to $6 for 500 versus 10,000 sample batch sizes, respectively (Supplementary Data 2 ). Fig. 1: Application of C19-SPAR-Seq to detect SARS-CoV-2. a Schematic representation of the SARS-CoV-2 with the five regions targeted for multiplex C19-SPAR-Seq indicated: RdRP (purple), S receptor-binding domain ( Srbd ) (red), S polybasic cleavage site ( Spbs ) (light red), E (yellow), and N (orange). b Schematic of the C19-SPAR-Seq strategy for detecting SARS-CoV-2. cDNA is synthesized using reverse transcriptase (RT) from RNA extracted from clinical samples, subjected to multiplex PCR, then barcoded, pooled, and analyzed by next generation sequencing (NGS). c Analysis of archival NASOP swab eluents by C19-SPAR-Seq. A Proof-of-Concept (PoC) cohort ( n = 19) was analyzed by C19-SPAR-Seq and read numbers for each of the indicated amplicons are presented in a heatmap. 
Control samples (HEK293T, synthetic SARS-CoV-2 RNA) are represented in the left panel, while the right panel shows unsupervised 2D hierarchical clustering of results from negative (blue) and positive (red) patients. Full size image To assess C19-SPAR-Seq performance, we assembled a proof-of-concept (PoC) cohort of 19 archival Nasopharyngeal (NASOP) swab eluents from the Toronto University Health Network-Mount Sinai Hospital clinical diagnostics lab (Supplementary Data 3 ), 17 of which were positive for SARS-CoV-2. Viral load in these archival samples was quantified using the clinically approved TaqMan-based SARS-CoV-2 RT-qPCR detection kit (‘BGI’, see the “Methods” section), which identified five SARS-CoV-2 low (Ct > 25), seven SARS-CoV-2 medium (Ct between 20 and 25), and five SARS-CoV-2 high (Ct < 20) patients (Supplementary Data 3 ). After confirming the efficiency of multiplex v1 primer pairs using a SARS-CoV-2 high sample (LTRI-18, Ct < 20; Supplementary Fig. 1 ), we performed C19-SPAR-Seq using HEK293T RNA as a negative control ( n = 2), and serial dilutions of synthetic SARS-CoV-2 RNA (Twist) as positive controls ( n = 5). Pooled sequence data was demultiplexed to individual samples prior to mapping to amplicon sequences. C19-SPAR-Seq was sensitive in detecting as low as 12.5 copies/μL of E, Srbd, and Spbs amplicons from Twist RNA (Fig. 1c , left panel). In patient samples, PPIB was present in all samples, and all viral targets were robustly detected in high/medium load samples, with reduced detection of E and RdRP genes in low samples (Fig. 1c , right panel). Development of a C19-SPAR-Seq diagnostic platform to detect SARS-CoV-2 To establish a diagnostic platform, we performed C19-SPAR-Seq on a larger test development cohort of 24 COVID-19 positive and 88 negative archival patient samples ( n = 112; Supplementary Data 4 ). 
The SARS-CoV-2 RNA standard curve showed a linear relationship between total viral reads and estimated viral copy numbers (Supplementary Fig. 2a ). Negative patient samples had low viral reads (median of 4; range 0–55) compared to positive samples (median of 5899; range 2–253,956 corresponding to 18–705,960 amplicon reads per million reads per sample) (Fig. 2a ). C19-SPAR-Seq read counts tracked inversely with qRT-PCR Ct values for RdRP , E , and N genes quantified in the diagnostic lab using the Seegene Allplex TM assay (see the “Methods” section) (Fig. 2b ). Unsupervised clustering showed that the controls performed similarly to the PoC cohort (Fig. 2c ), as did the positive and negative patient samples, with two exceptions: clinical samples LTRI042 and LTRI050, which displayed background signal, and corresponded to samples with extreme Ct values in only one viral gene ( N gene, Ct > 38; Supplementary Data 4 ). ROC analysis using total viral reads (Fig. 2d ) showed excellent performance with an area under the ROC curve (AUC) of 0.969. Using PROC, the point on the ROC curve that minimizes the distance to (0,1) 12 , defined a total viral read cut-off of 116 for calling a positive sample and yielded a sensitivity of 92% (95% confidence interval; CI of 73–99%), specificity of 100% (CI: 95–100%), and overall accuracy of 98% (Fig. 2d ). Using Youden parameters that maximize sensitivity and specificity defined a viral cutoff of 26 and yielded better sensitivity (96%), but lower specificity (95%) and accuracy (96%). Other than the two positive samples mentioned above that possessed extremely low levels of viral RNA (Ct 38 and 40), all other positive samples were above the C19-SPAR-Seq viral threshold limit, indicating that the lower limit of sensitivity in the CI is dictated by these samples that lie at the border of the detection limit of the diagnostic lab test. 
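The two ROC cut-off criteria described above, the point minimizing the distance to (0, 1) and the Youden point maximizing sensitivity plus specificity, can be sketched with a plain-NumPy ROC sweep; the read counts below are synthetic and purely illustrative, not the paper's data.

```python
import numpy as np

def roc_points(y_true, score):
    # Sweep each observed score as a candidate threshold (call a sample
    # positive when score >= threshold) and record TPR / FPR at each one.
    y_true = np.asarray(y_true, dtype=bool)
    score = np.asarray(score, dtype=float)
    thresholds = np.unique(score)[::-1]
    tpr = np.array([(score[y_true] >= t).mean() for t in thresholds])
    fpr = np.array([(score[~y_true] >= t).mean() for t in thresholds])
    return fpr, tpr, thresholds

def optimal_cutoffs(y_true, score):
    fpr, tpr, thr = roc_points(y_true, score)
    proc = thr[np.argmin(np.hypot(fpr, 1.0 - tpr))]   # closest to (0, 1)
    youden = thr[np.argmax(tpr - fpr)]                # max J = sens + spec - 1
    return proc, youden

# Synthetic viral read counts: 88 negatives with low background reads and
# 24 positives with high reads (illustrative numbers only).
rng = np.random.default_rng(42)
neg = rng.lognormal(1.0, 1.0, 88)
pos = rng.lognormal(10.0, 1.5, 24)
reads = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros(88, bool), np.ones(24, bool)])
proc_cut, youden_cut = optimal_cutoffs(labels, reads)
```

On well-separated data the two criteria coincide; they diverge, as in the text, when the distributions overlap and one must trade specificity against sensitivity.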
Thus, C19-SPAR-Seq robustly detects SARS-CoV-2 transcripts, correlates with Ct values from clinical diagnostic tests, and displays excellent performance in distinguishing positive and negative samples. Fig. 2: Performance of C19-SPAR-Seq in detecting SARS-CoV-2. a C19-SPAR-Seq of the test development cohort was performed and total viral reads+1 (log 10 ) ( Y -axis) are plotted for negative ( n = 88, black) and positive ( n = 24, red) patient samples, HEK293T RNA ( n = 6, blue), and the indicated serial dilutions of synthetic SARS-CoV-2 RNA ( n = 2-6, orange). For each group, the median, lower and upper confidence limits for 95% of the median are plotted. Whiskers are minimum and maximum values. Two-tailed unpaired t -test of negative versus positive samples (**** p = 1.67 × 10 −8 ). b C19-SPAR-Seq reads for the indicated gene in each patient sample were compared to Ct values obtained by the clinical diagnostics lab using the ‘Seegene’ Allplex assay. c Heatmap of C19-SPAR-Seq results. Read counts for the indicated target amplicons in control samples ( n = 16; left) and patient samples ( n = 112; right) are plotted according to the scale, and sample types labeled as indicated. Samples are arranged by hierarchical clustering with euclidean distance indicated by the dendrogram on the top, which readily distinguishes positive from negative samples. d Performance of C19-SPAR-Seq. ROC analysis on patient samples was performed using clinical diagnostic results (Seegene Allplex qRT-PCR assay, Supplementary Data 4 ) and total viral reads for patient samples ( n = 112). AUC (area under the curve) scores are indicated on the graph (left), with statistics at the optimal cutoff as indicated (right). Full size image An internal control-based classifier to assess patient samples Robust application of C19-SPAR-Seq as a diagnostic tool requires assigning thresholds for both viral RNA detection, as well as host RNA for filtering poor quality samples. 
In qRT-PCR diagnostics, external validation studies and rigorous standard operating procedures establish pre-defined cutoffs for sample quality and positive versus negative assignment (Seegene, see the “Methods” section); BGI (see the “Methods” section) However, in scalable, massively parallel, multiplexed NGS assays, variability in sample numbers and flow cell loading can create run-to-run variations in read numbers, while index-mismatching 13 , as well as trace cross-contamination events can create technical noise that are challenging to control. Furthermore, external validation strategies create a laborious path to adapt and test new multiplex designs to SARS-CoV-2, additional respiratory pathogens, or host responses. We therefore exploited the throughput of C19-SPAR-Seq to include in every run a training set of large numbers of controls that can be exploited to define cutoffs tailored to each C19-SPAR-Seq run (Fig. 3a ). To define quality metrics, we computed precision-recall (PR) curves for classifying control samples as either negative (H 2 O blanks), or positive for any anticipated amplicon (HEK293T for PPIB or synthetic SARS-CoV-2 RNA for viral amplicons) and calculated the highest F1 score, which is the harmonic mean of PR and a common measure of classifier accuracy (Fig. 3b ). When mapped onto a ROC curve this corresponded to the region closest to perfect sensitivity and specificity (0, 1) (Supplementary Fig. 2b ). To define the threshold for identifying SARS-CoV-2-positive cases, we next analyzed the embedded standard curve of synthetic SARS-CoV-2 RNA. This displayed a linear relationship over four orders of magnitude that extended to lower limits of detection indistinguishable from background reads from HEK293T cells (Fig. 2a and Supplementary Fig. 2a ), thus allowing us to identify the viral read count in each C19-SPAR-Seq run that most accurately distinguishes positive from negative (Fig. 3a ). 
To identify this threshold, we computed PROC01, which optimizes negative predictive value (NPV) and positive predictive value (PPV) 12 and defined a point (88 viral reads) close to perfect PR (Fig. 3c ) and sensitivity and specificity on the ROC curve (Supplementary Fig. 2c ). Importantly, these methods control for run-specific variables by employing training sets that are embedded in every C19-SPAR-Seq run. Fig. 3: Performance of C19-SPAR-Seq in detecting SARS-CoV-2 using control-based classifier. a Schematic of the control-based cut-off procedure for RNA quality and viral threshold by coPR analysis. b Thresholding sample quality. coPR analysis on control samples: PRC of control samples for accurate detection of mapped reads are plotted. The optimal precision and recall read cut-off associated ( P = 110) with the highest F1 (0.97) score, and AUC (area under the curve) are indicated in the PR plot. c Threshold for classification of positives in the test cohort. Optimum cut-off for viral threshold is calculated by PROC01 using clinical diagnosis and total viral reads and plotted on the precision-recall curve. d Threshold assignments for sample quality and classification. Total viral reads +1 ( Y -axis) are plotted against PPIB reads +1 ( X -axis) for positive (red) and negative (blue) patient samples. coPR-based RNA-QC filter and viral read filter are shown as indicated. Assay statistics using coPR thresholding are listed (right). Full size image We next mapped the control-based cutoffs onto our patient SPAR-Seq data (Fig. 3d ). This showed 15 of these archival samples had low PPIB counts that may be due to lost RNA integrity upon repeated freeze–thaw cycles (Fig. 3d and Supplementary Data 4 ), a variability we also observed in the PoC cohort (Fig. 1c ). Of note, C19-SPAR-Seq performance was not affected by filtering poor quality samples (AUC = 0.970; Supplementary Fig. 2d ). 
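A minimal sketch of the thresholding idea behind coPR: scan candidate read cut-offs over the embedded control samples and keep the one maximizing the F1 score, the harmonic mean of precision and recall. The control read counts in the test are invented, the function name is ours, and the full coPR procedure (including PROC01 for the viral threshold) is richer than this.

```python
import numpy as np

def best_f1_threshold(is_positive_control, reads):
    """Return the read cut-off (predict positive when reads >= cut-off)
    that maximizes F1 over a set of labelled control samples."""
    is_positive_control = np.asarray(is_positive_control, dtype=bool)
    reads = np.asarray(reads, dtype=float)
    best_t, best_f1 = None, -1.0
    # Every observed read count is a candidate threshold.
    for t in np.unique(reads):
        pred = reads >= t
        tp = np.sum(pred & is_positive_control)
        if tp == 0:
            continue
        precision = tp / pred.sum()
        recall = tp / is_positive_control.sum()
        f1 = 2.0 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

Because the controls are embedded in every run, recomputing this threshold per run absorbs run-to-run variation in read depth and background noise, which is the point of the coPR design.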
Furthermore, using PROC01 thresholding of viral reads identified 22/24 positives with no false positives (Fig. 3d ). This yielded an overall test performance of 92% sensitivity, 100% specificity, and 98% accuracy (Fig. 3d and Supplementary Tables 2 , 3 ). This is similar to the observed performance of C19-SPAR-Seq on clinical samples quantified by ROC analysis (Fig. 2d and Supplementary Fig. 2d , respectively). Thus, an extensive array of internal reference samples is effective as an embedded training set for implementing a control-based PR/PROC classifier (coPR) that is tailored to each C19-SPAR-Seq run. Negative samples create noise in C19-SPAR-Seq To validate our C19-SPAR-Seq platform we established a pilot cohort of 378 samples that contains 89 positive samples collected in May of 2020. We first screened for positivity using the clinically approved BGI SARS-CoV-2 kit (see the “Methods” section) which showed 52 samples were positive with >4 viral copies/μL (Supplementary Data 5 and Supplementary Table 4 ). Of the 37 failed samples, 86% had very low viral RNA (only 1 or 2 of the 3 genes detected and/or Ct > 35 on the ‘Seegene’ platform) that may have lost integrity upon storage. Indeed, comparison of Ct values for RdRP detection showed an overall increase of four cycles in these archived samples (Supplementary Fig. 3a ), despite the high sensitivity of the BGI platform 14 . The cohort also contained 289 negative samples collected prior to Ontario’s 15 first confirmed COVID-19-positive case in January 20, 2020, and 1 negative sample collected in May 2020 (Supplementary Data 5 ), and included broncho-alveolar lavages (BALs) and NASOP swabs. Surprisingly, the detection of human RNA dropped substantially to a median of 29 (range 0–41,874), compared to 15,058 (range 2–170,870) in the original test cohort. coPR filtering (Supplementary Fig. 3b ), marked 50% of samples as inconclusive compared to 13% in the test cohort (Supplementary Fig. 
3c ), despite similar distribution of raw reads per sample (Supplementary Fig. 3d ), while mapping rates in the PoC, test and pilot cohorts, progressively declined to as low as 0.1% (Supplementary Fig. 3e ). To understand this collapse we analyzed unmapped reads and found that >90% were consumed by non-specific amplification products (NSAs; Supplementary Fig. 4a ) that comprised complex chimeric combinations of many viral and human primers (Supplementary Fig. 4a, b ). For example, RdRP and PPIB contributed to 4 of the top 5 NSAs (NSA1–4), and 2 had a spurious sequence (NSA4, 5). Indeed, analysis of C19-SPAR-Seq PoC, test and pilot libraries using a Bioanalyzer, showed that as cohort size and number of negatives increased, NSAs were more apparent, and dominated the pilot library (Supplementary Fig. 4c ). This suggests that NSAs, enriched in negative samples (3.7-fold increase in the pilot cohort), clog the NGS pipeline as sample numbers rise (Supplementary Table 4 ). This has serious implications for deploying an NGS platform in a population-scale COVID-19 surveillance strategy and highlights the importance of using large-scale cohorts during the development of multiplex testing platforms. Analyzing an extended cohort using an optimized multiplex panel v2.0 SARS-CoV-2 RNA concentration spans a large dynamic range, such that spike-in mutant amplicons which have been suggested to improve performance of NGS-based strategies 16 might interfere with detection of COVID-19-positive cases with low viral reads. Therefore, we instead used our NSA data to create multiplex panel v2.0 (see the “Methods” section) that removed primers yielding NSAs by targeting a distinct region of RdRP , removing E and N genes, and switching to primers that amplify intron spanning regions of the ACTB and ACTG genes (Supplementary Table 1 , Supplementary Data 1 and Supplementary Fig. 1 ). 
We extended the pilot cohort to 663 samples that included 98 confirmed positives and performed C19-SPAR-Seq, which showed targeted amplicons were the predominant product generated by multiplex panel v2.0 (Supplementary Fig. 5a ), and mapping percentages were restored to test cohort levels (Supplementary Fig. 5b ). Total viral read distributions for multiplex panel v2.0 showed good separation in clinically positive samples (Fig. 4a and Supplementary Fig. 5c ), while applying coPR thresholding (Supplementary Fig. 5d ) identified 121 samples as inconclusive (Fig. 4a ), all of which were older, pre-COVID19 material. Of these, 112 were BALs (40% of all BALs), 1 was a bronchial wash (BMSH), and only 8 were NASOPs (1.8% of all NASOPs) (Supplementary Data 6 ). Furthermore, analysis of 10 BAL samples below the QC threshold revealed little or no RNA, contrasting BALs with moderate levels of ACTB/G transcripts (representative examples in Supplementary Fig. 6a ), and BAL ACTB/G read distributions were much lower than NASOPs (Supplementary Fig. 6b ). This suggests that archival BALs suffered from substantive sample degradation and also highlights how coPR-based thresholding successfully identifies poor quality samples and readily adapts to the use of distinct primer sets. Fig. 4: C19-SPAR-Seq of a large patient cohort. a C19-SPAR-Seq on an extended patient cohort. coPR thresholds for sample quality and classification of a 663 patient cohort of negative (blue) and positive (red) specimens are shown as in Fig. 2a . Performance metrics with 95% confidence intervals for sample classification according to coPR thresholding are shown in the table. NA not applicable. b Heatmap of C19-SPAR-Seq results. Read counts for the indicated target amplicons in the filtered set of samples ( n = 542) are plotted according to the scale, and sample types labeled as indicated. Samples are arranged by hierarchical clustering with euclidean distance indicated by the dendrogram on the right. 
c Scatter plot of total viral reads+1 (left Y -axis, blue) versus Ct values of positive samples ( n = 98, BGI) ( X -axis). C19-SPAR-seq sensitivity at the indicated Ct values is overlaid (right Y -axis, red). Gray dashed lines indicate average copies/μL (c/μL). d ROC curve analysis. ROC curves were processed on filtered samples ( n = 542). AUC scores are indicated for filtered samples (blue; left) with corresponding performance statistics for the optimal cut-off indicated below. Full size image Next, we analyzed viral reads, which had a broad range in positive samples (median = 680.5 reads per sample, range 0–200,850; Fig. 4a and Supplementary Fig. 5c ). Two-dimensional clustering showed background SARS-CoV-2 products in negative samples were low to undetectable, and ACTB typically yielded higher reads than ACTG , likely reflecting their differential expression (Fig. 4b ). Positive samples were generally well separated, although some distinct clusters with lower SARS-CoV-2 reads were apparent (Fig. 4b and Supplementary Fig. 5e ). Indeed, total read distributions in positive samples displayed biphasic distribution (Supplementary Fig. 5e ), similar to observations made from RT-qPCR analyses of ~4000 positive patients 17 . Since the early rapid increase in SARS-CoV-2 viral load at symptom onset is followed by a long tail of low viral load during recovery 18 , 19 , this biphasic distribution could reflect patients in distinct phases of the disease. We also assessed viral amplicon sequences which matched the SARS-CoV-2 reference (MN908947.3 20 ) and found no variants (Supplementary Fig. 5f ). Since neutralizing antibodies are generally thought to target the critical region of the RBD analyzed here 15 , these results suggest the emergence of variant strains that might bypass acquired immunity is not a major feature of SARS-CoV-2. In addition, this supports the notion that biologic therapies targeting the RBD may show broad activity in the population. 
We next compared performance of multiplex panel v2.0 to v1.0 using the embedded controls, which showed similar performance (AUC = 0.90, Supplementary Fig. 5g versus 0.92, Supplementary Fig. 2c , respectively), with coPR yielding an optimal read cutoff of >16 total viral reads (Supplementary Fig. 5f ) that corresponded to a technical sensitivity of 3 viral copies/μL (Supplementary Fig. 6c ). coPR thus identified 82 positive samples (Fig. 4a and Supplementary Data 6 ), all of which were BGI-confirmed cases, to give an overall sensitivity of 84%, specificity of 100%, and accuracy of 97% (Supplementary Table 5 and Fig. 4a ). Importantly, total viral reads tracked with BGI Ct values (Fig. 4c ), and for samples with Ct < 35 (corresponding to ~12 viral copies/μL of specimen), sensitivity was similar to the test cohort at 91%. However, for samples with Ct between 35 and 37 (4–12 viral copies/μL) sensitivity dropped markedly to 44% (Supplementary Table 5 and Fig. 4a ), while at higher viral loads (Ct = 25 or ~8400 viral copies/μL) sensitivity rose to 100% (Fig. 4c ). ROC analysis of actual C19-SPAR-Seq performance yielded an AUC of 0.96, sensitivity of 87% and specificity of 100%, similar to coPR (Fig. 4d ), while individual amplicons each underperformed total viral reads (AUC: 0.85–0.94; Supplementary Fig. 6d ). Our cohort was biased toward samples with very low to low viral loads, which represent a small portion of the COVID-19 population 17 . This bias could lead to an underestimate of the sensitivity of C19-SPAR-Seq in the context of a large-scale population, so we mapped our sensitivity data at distinct viral loads onto the population distribution of viral loads obtained from ~4000 positive patients 17 . This showed a projected C19-SPAR-Seq sensitivity of ~97% for patients displaying >10,000 viral copies/mL (Supplementary Fig. 6e ), which encompasses ~90% of the positive population.
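The population-level projection described above amounts to weighting the per-bin assay sensitivity by the fraction of positive patients in each viral-load bin. A minimal Python sketch follows; this is illustrative only, not the authors' analysis, and the bin fractions and sensitivities are hypothetical placeholders rather than values from the study.

```python
# Illustrative sketch: projecting assay sensitivity onto a population
# distribution of viral loads. All numbers below are hypothetical.

def projected_sensitivity(bins):
    """bins: list of (population_fraction, sensitivity) per viral-load bin.
    Returns the population-weighted expected sensitivity."""
    total = sum(frac for frac, _ in bins)
    return sum(frac * sens for frac, sens in bins) / total

# Hypothetical viral-load bins: (fraction of positives, assay sensitivity)
example_bins = [
    (0.90, 0.97),  # high viral load (e.g., >10,000 copies/mL)
    (0.07, 0.44),  # low viral load (e.g., Ct 35-37)
    (0.03, 0.20),  # very low viral load
]
print(round(projected_sensitivity(example_bins), 3))  # prints 0.91
```

The weighted sum makes explicit why a cohort enriched for low-viral-load samples can understate population-level sensitivity: most positives in a real population fall into the high-load, high-sensitivity bin.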
Altogether, these results demonstrate that in high patient sample loads composed predominantly of negative samples, C19-SPAR-Seq using coPR displays 100% specificity and >95% sensitivity at viral loads typically observed in populations. Discussion Systematic population-scale testing has been identified as an important tool in managing pandemics such as SARS-CoV-2, where large numbers of infected individuals display mild or no symptoms yet transmit disease. The scalable throughput of C19-SPAR-Seq combined with its excellent sensitivity and specificity at reasonable cost makes it well-suited for this role. Data generated by large-scale routine testing of local and larger communities with different interaction levels would provide valuable epidemiologic information on mechanisms of viral transmission, particularly when coupled to multiplex panels targeting regions of sequence variance currently in development. Indeed, while we detected no variants in our positive samples collected in the spring of 2020, the S-RBD and S-PBS amplicons will detect the newly emergent N501Y and P681H variants 21 , 22 . In addition, the C19-SPAR-Seq platform can be readily adapted to incorporate panels tracking multiple pathogens, as well as host responses. C19-SPAR-Seq quantitation would also facilitate real-time tracking of viral load dynamics in populations that may be associated with COVID-19 expansion or resolution phases 18 . Although C19-SPAR-Seq is dependent on centralized regional facilities, it is readily coupled to saliva-based, at-home collection that exploits extensive transport infrastructure and industrialized sample processing to enable frequent widespread testing. Methods Sample collection Patient samples (Supplementary Data 3 – 6 ) were obtained from the Department of Microbiology at Mount Sinai Hospital.
Patient samples used in this study were approved by the Mount Sinai Hospital (MSH) Research Ethics Board (REB): MSH REB Study #20-0078-E ‘Use of known COVID-19 status tissue samples for development and validation of novel detection methodologies’. The patient samples were de-identified prior to transfer from the Mount Sinai Hospital Microbiology Department to our research staff. The samples were excess to clinical need and considered residual samples, which do not require informed consent for the secondary use of the de-identified biological materials utilized in this study. Patient samples were obtained as part of routine diagnostic testing. Total RNA extraction A step-by-step protocol describing the patient RNA extraction protocol can be found at Protocol Exchange 23 . Total RNA was extracted using the Total RNA extraction kit (Norgen Biotek kit, Cat. #7200) for the samples in Supplementary Data 3 following the manufacturer's guidelines. For all other samples (Supplementary Data 4 – 6 ), total RNA was purified in 96-well plates using RNAclean XP beads (Beckman, A66514) and a customized protocol. Briefly, 75.25 μL of patient swabs in transfer buffer were mixed with 14.5 μL of 10× SDS lysis buffer (1% SDS, 10 mM EDTA), 48 μL of 6 M GuHCl, and 7.25 μL proteinase K (20 mg/mL, ThermoFisher, 4333793), incubated at room temperature for 10 min and heated at 65 °C for 10 min prior to the addition of 145 μL of beads. Beads were washed twice in 70% ethanol using a magnetic stand, and RNA was then eluted in 30 μL of the resuspension buffer supplied with the kit. RNA quality was assessed using a Bioanalyzer (5200 Agilent Fragment Analyzer). HEK293T RNA was extracted using the Total RNA extraction kit (Qiagen). Synthetic Twist SARS-CoV-2 RNA (Twist Bioscience #102024-MN908947.3) was used as a positive control. Reverse transcription (RT) A step-by-step protocol describing the reverse transcription protocol can be found at Protocol Exchange 23 .
Total RNA was reverse transcribed using SuperScript™ III Reverse Transcriptase (Invitrogen) in 5× First-Strand Buffer containing DTT, a custom mix of Oligo-dT (Sigma) and Hexamer random primers (Sigma), dNTPs (Genedirex), and Ribolock RNase inhibitor (ThermoScientific). We followed the manufacturer's protocol. Each reaction included: 0.5 μL Oligo-dT, 0.5 μL hexamers, 4 μL purified Total RNA, 1 μL dNTP (2.5 mM each dATP, dGTP, dCTP and dTTP), quantum satis ( qs ) 13 μL RNase/DNase-free water. Samples were incubated at 65 °C for 5 min, and then placed on ice for at least 1 min. The following was added to each reaction: 4 μL 5× First-Strand Buffer, 1 μL 0.1 M DTT, 1 μL Ribolock RNase Inhibitor, 1 μL of SuperScript™ III RT (200 units/μL), mixed by gentle pipetting. Samples were incubated at 25 °C for 5 min, 50 °C for 60 min, 70 °C for 15 min and then stored at 4 °C. TaqMan-based RT-qPCR detection A Real-Time Fluorescent RT-PCR kit from ‘BGI’ was used according to the manufacturer's instructions (Cat no. MFG030010, BGI Genomics Co. Ltd. Shenzhen, China). Experiments were carried out in a 10 μL reaction volume in 384-well plates, using 3 μL of sample (LTRI patient samples or Twist RNA), and were analyzed using a Bio-Rad CFX384 detection system (Supplementary Data 3 , 5 , 6 ). Real-time Fluorescent RT-PCR results from the ‘Seegene’ assay were provided by the Department of Microbiology diagnostic lab at Mount Sinai Hospital (Supplementary Data 4 – 6 ) (AllplexTM 2019-nCoV Assay, version 2.0, Cat no. RP10250X/RP10252W, Seegene). C19-SPAR-Seq primer design and optimization Optimized multiplex PCR primers for SARS-CoV-2 ( N , S , E and RdRP ) and human genes ( PPIB and ACTB/G ) were designed using the SPAR-Seq pipeline 8 , with amplicon size >100 bases (see Supplementary Table 1 and Supplementary Data 1 ). For the S gene, two regions were monitored: the S receptor-binding domain ( Srbd ) and the S polybasic cleavage site ( Spbs ).
The universal adapter sequences used for sequencing were F: 5′-acactctttccctacacgacgctcttccgatct and R: 5′-gtgactggagttcagacgtgtgctcttccgatct. Primers were optimized to avoid primer–dimer and non-specific multiplex amplification. To assess primer sensitivity and specificity, we performed qPCR (SYBR green master mix, BioApplied) on cDNA prepared from patient samples. Each primer was used at 0.1 μM in qPCR reactions run on 384-well plates using the Bio-Rad CFX384 detection system. The thermal cycling conditions were as follows: one cycle at 95 °C for 2 min, and then 40 cycles of 95 °C for 15 s, 60 °C for 15 s, 72 °C for 20 s, followed by a final melting curve step. Multiplexing PCR A step-by-step protocol describing the multiplex PCR protocol can be found at Protocol Exchange 23 . The multiplex PCR reaction was carried out using Phusion polymerase (ThermoFisher). The manufacturer's recommended protocol was followed with the following primer concentrations: all primers ( N , Spbs , Srbd , E , RdRP , and PPIB ) were at 0.1 μM for the PoC cohort (Supplementary Data 3 ); SARS-CoV-2 primers ( N , Spbs , Srbd , E and RdRP ) were at 0.05 μM and the PPIB primer at 0.1 μM for the test and pilot cohorts (Supplementary Data 4 and 5 ); all primers ( Spbs , Srbd , RdRP and ACTB/G ) were at 0.05 μM for the extended cohort (Supplementary Data 6 ). For each reaction: 5 μL 5× Phusion buffer, 0.5 μL dNTP (2.5 mM each dATP, dGTP, dCTP, and dTTP), 0.25 μL of each human primer (10 μM), 0.125 μL of each SARS-CoV-2 primer (10 μM), 2 μL of cDNA, 0.25 μL Phusion Hot Start polymerase, qs 25 μL RNase/DNase-free water.
The thermal cycling conditions were as follows: for the PoC and extended cohorts (Supplementary Data 3 , 6 ), one cycle at 98 °C for 2 min, 30 cycles of 98 °C for 15 s, 60 °C for 15 s, 72 °C for 20 s, and a final extension step at 72 °C for 5 min, then storage at 4 °C; for the test and pilot cohorts (Supplementary Data 4 and 5 ), one cycle at 98 °C for 2 min, 35 cycles of 98 °C for 15 s, 60 °C for 15 s, 72 °C for 20 s, and a final extension step at 72 °C for 5 min, then storage at 4 °C. Barcoding PCR A step-by-step protocol describing the barcoding PCR protocol can be found at Protocol Exchange 23 . For multiplex barcode sequencing, dual-index barcodes were used 8 . The second PCR reaction on multiplex PCR was performed using Phusion polymerase (ThermoFisher). For each reaction: 4 μL 5× Phusion buffer, 0.4 μL dNTP (2.5 mM each dATP, dGTP, dCTP, and dTTP), 2 μL Barcoding primers F + R (pre-mix), 4 μL of multiplex PCR reaction, 0.2 μL Phusion polymerase, qs 20 μL RNase/DNase-free water. The thermal cycling conditions were as follows: one cycle at 98 °C for 30 s, and 15 cycles of 98 °C for 10 s, 65 °C for 30 s, 72 °C for 30 s, and a final extension step at 72 °C for 5 min and stored at 4 °C. Library preparation and sequencing A step-by-step protocol describing the library preparation and sequencing protocol can be found at Protocol Exchange 23 . For all libraries, each sample was pooled (7 μL/sample) and library PCR products were purified with SPRIselect beads (A66514, Beckman Coulter). The PoC, test, and pilot cohorts were purified at a ratio of 0.8:1 (beads:library), and the extended cohort at 1:1 (beads:library). Due to NSA products in the fragment analyzer profile (Supplementary Fig. 3c ) in the test and pilot cohorts, we performed size selection purification (220–350 bp) using the Pippin Prep system (Pippin HT, Sage Science).
Library quality was assessed with the 5200 Agilent Fragment Analyzer (ThermoFisher) and Qubit 2.0 Fluorometer (ThermoFisher). All libraries were sequenced with MiSeq or NextSeq 500 (Illumina) using 75 bp paired-end sequencing. COVID-19 (C19-)SPAR-Seq platform A step-by-step protocol describing the COVID-19 (C19-)SPAR-Seq platform protocol can be found at Protocol Exchange 23 . Our Systematic Parallel Analysis of Endogenous RNA Regulation Coupled to Barcode Sequencing (SPAR-Seq) system 8 was modified to simultaneously monitor COVID-19 viral targets and additional controls by multiplex PCR assays. For barcode sequencing, unique, dual-index C19-SPAR-Seq barcodes were used. Unique reverse 8-nucleotide barcodes were used for each sample, while forward 8-nucleotide barcodes were used to mark each half (48) of the samples in a 96-well plate to provide additional redundancy. These two sets of barcodes were incorporated into forward and reverse primers, respectively, after the universal adaptor sequences and were added to the amplicons in the second PCR reaction. The C19-SPAR-Seq analysis pipeline with the algorithms used is explained in detail in Supplementary Fig. 7 with additional analytical tools described in Supplementary Fig. 8 and below in the “Methods” sections. Computational requirements for the demultiplexing step are 32 GB RAM and a minimum 1 GB network infrastructure, with a Linux operating system. Demultiplexing and mapping Illumina MiSeq sequencing data were demultiplexed based on perfect matches to unique combinations of the forward and reverse 8-nucleotide barcodes. Full-length forward and reverse reads were separately aligned to dedicated libraries of expected amplicon sequences using bowtie 24 with parameters --best -v 3 -k 1 -m 1. Read counts per amplicon were represented as reads per million or absolute read counts. The scripts for these steps are available at 25 .
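The perfect-match dual-barcode demultiplexing described above can be sketched in a few lines of Python. This is a minimal illustration with made-up barcodes and reads, not the published scripts: reads are assigned only when both 8-nt barcodes exactly match a known forward × reverse combination, and anything else is dropped.

```python
# Minimal sketch of dual-barcode demultiplexing (illustrative only):
# assignment requires a perfect match to a known (forward, reverse) pair.

def demultiplex(reads, barcode_pairs):
    """reads: list of (fwd_barcode, rev_barcode, sequence) tuples.
    barcode_pairs: dict mapping (fwd, rev) -> sample name.
    Returns {sample: [sequences]}; unmatched reads are discarded."""
    assigned = {sample: [] for sample in barcode_pairs.values()}
    for fwd, rev, seq in reads:
        sample = barcode_pairs.get((fwd, rev))  # perfect match only
        if sample is not None:
            assigned[sample].append(seq)
    return assigned

# Hypothetical barcodes and reads
pairs = {("AAAATTTT", "CCCCGGGG"): "sample_01"}
reads = [
    ("AAAATTTT", "CCCCGGGG", "ACGT"),  # perfect match -> assigned
    ("AAAATTTA", "CCCCGGGG", "TTTT"),  # 1-nt barcode mismatch -> dropped
]
print(demultiplex(reads, pairs))  # prints {'sample_01': ['ACGT']}
```

Requiring exact matches on both barcodes trades a small loss of reads for very low cross-sample misassignment, which matters when thousands of patient samples share one sequencing run.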
Filtering of low-input samples To remove samples with low amplified product (likely reflecting low input due to inefficient sample collection or degradation) before attempting classification, we computed precision-recall curves for classifying control samples into ‘low amplification’ and ‘high amplification’ based on reads mapped to RNA amplicons but ignoring mapping to genomic sequence, if applicable. The former group comprised all controls in which individual steps were omitted (H2O controls) and the latter comprised HEK293T as well as synthetic SARS-CoV-2 RNA controls. For each of the PoC, test, pilot, and extended runs, we obtained the total mapped read threshold (including reads mapping to both human and viral amplicons) associated with the highest F1 score, representing the point with optimal balance of precision and recall. Samples with reads lower than this threshold were removed from subsequent steps. Scripts for this step are available at 26 . SARS-CoV-2-positive sample classification To assign positive and negative samples, we used negative (H2O and HEK293T) and positive (synthetic SARS-CoV-2 RNA dilutions) internal controls for each run and calculated optimum cut-offs for viral reads (total reads mapping to all three viral amplicons) by PROC, which defines the threshold for optimum positive predictive value (PPV) and negative predictive value (NPV) for diagnostic tests. Thus, a sample was labeled positive if it had viral reads above the viral read threshold; negative if it had viral reads below the viral read threshold and human reads above the mapped read threshold; and inconclusive if it had both viral and human reads below the respective thresholds. Sample classification by heatmap clustering Heatmap and hierarchical clustering of viral and control amplicons, log 10 (mapped reads + 1), was used to analyze and classify all samples. Samples with a total mapped read count lower than the RNA QC threshold were labeled as inconclusive and removed before the analysis.
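The two thresholding steps above, choosing a QC read cutoff that maximizes the F1 score on embedded controls and then calling each sample positive, negative, or inconclusive, can be sketched as follows. This is a minimal illustration of the logic only, not the published scripts; the control read counts and cutoffs below are hypothetical.

```python
# Illustrative sketch of coPR-style thresholding and sample classification.

def f1_optimal_threshold(controls):
    """controls: list of (total_mapped_reads, is_good) pairs, where is_good
    marks high-amplification controls (e.g., HEK293T, synthetic viral RNA)
    versus omitted-step H2O controls. Returns the read threshold whose
    'good if reads >= threshold' rule maximizes the F1 score."""
    best_t, best_f1 = None, -1.0
    for t in sorted({reads for reads, _ in controls}):
        tp = sum(1 for r, good in controls if r >= t and good)
        fp = sum(1 for r, good in controls if r >= t and not good)
        fn = sum(1 for r, good in controls if r < t and good)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

def classify(viral_reads, human_reads, viral_cutoff, qc_cutoff):
    """Positive: viral reads above the viral cutoff. Negative: viral reads
    below the cutoff but human (QC) reads above the mapped-read threshold.
    Inconclusive: both below their thresholds."""
    if viral_reads > viral_cutoff:
        return "positive"
    if human_reads > qc_cutoff:
        return "negative"
    return "inconclusive"

# Hypothetical controls: (total mapped reads, is high-amplification control)
controls = [(5, False), (8, False), (500, True), (800, True), (1200, True)]
qc = f1_optimal_threshold(controls)
print(qc)                          # prints 500
print(classify(40, 9000, 16, qc))  # prints positive
print(classify(3, 9000, 16, qc))   # prints negative
print(classify(3, 100, 16, qc))    # prints inconclusive
```

The key design point is that a sample is never called negative on viral reads alone: without sufficient human (QC) reads, a viral-negative result is indistinguishable from a failed extraction, hence the third "inconclusive" class.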
Known positive (high, medium, and low) and negative control samples were used as references to distinguish different clusters. In addition, dilutions of synthetic SARS-CoV-2 RNA were also included as controls and analyzed across different PCR cycles and primer pool conditions. Viral mutation assessment To remove PCR and sequencing errors for the assessment of viral sequence variations, we determined the top enriched amplicon sequence. For this, paired-end reads were first stitched together to evaluate full-length amplicons. The last 12 nucleotides of the read1 sequence were used to join the reverse complement of the read2 sequence. No mismatches were allowed in the stitching criteria. The number of full-length reads per unique sequence variation was counted for each amplicon per sample by matching the 10 nucleotides from the 3′ and 5′ ends of the sequence with gene-specific primers (scripts are available at 27 and 28 ). The top enriched sequence variant from each sample was used for multiple alignment analysis using CLUSTALW V2.1. Non-specific amplicon assessment Single-end reads that contained the first 10 nucleotides of the Illumina adaptor sequence were counted and binned into relevant forward and reverse gene-specific primer pools by matching the first 10 nt of the reads with primer sequences. Relative abundance of the non-specific amplicons was quantified as the percentage of reads corresponding to non-specific amplicons per forward or reverse primer (scripts are available at 28 ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Data that support the findings of this study have been deposited in the Gene Expression Omnibus (GEO) at NCBI with the accession code GSE160036 . Figure 1 raw data, PoC cohort: GEO accession number GSE160031, Fig. 2 and Supplementary Fig. 2 raw data, Test cohort: GEO accession number GSE160032, Fig. 3 and Supplementary Figs.
3, 4 raw data, Pilot cohort: GEO accession number GSE160033, Fig. 4 and Supplementary Figs. 5 and 6 raw data, Extended cohort: GEO accession number GSE160034. Severe acute respiratory syndrome coronavirus 2 isolate Wuhan-Hu-1, complete genome: NCBI sequence ID: NC_045512 was used as reference for primers design and sequence analysis. Source data are provided with this paper. Code availability We provided the code for demultiplexing and mapping at 25 , quality filtering at 26 , viral mutation assessment and non-specific amplicon assessment at 27 and 28 .
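The exact-match read stitching described under "Viral mutation assessment" (joining read1 to the reverse complement of read2 on the last 12 nucleotides of read1, with no mismatches allowed) can be sketched in Python. This is an illustrative sketch with made-up sequences, not the published scripts.

```python
# Minimal sketch of exact-match paired-end stitching (illustrative only):
# the last 12 nt of read1 must occur, mismatch-free, in the reverse
# complement of read2; the two reads are then joined into one amplicon.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of an upper-case DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def stitch(read1, read2):
    """Return the stitched full-length amplicon, or None when the 12-nt
    anchor from read1 is absent from the reverse complement of read2."""
    rc2 = revcomp(read2)
    anchor = read1[-12:]
    pos = rc2.find(anchor)
    if pos == -1:
        return None  # no perfect-match join point
    return read1 + rc2[pos + len(anchor):]

# Made-up amplicon split into two overlapping reads
amplicon = "ACGTACGTACGTTTTTGGGGCCCCAAAA"
read1 = amplicon[:16]              # forward read
read2 = revcomp(amplicon[4:])      # reverse read (opposite strand)
print(stitch(read1, read2) == amplicon)  # prints True
print(stitch(read1, "GGGGGGGGGGGG"))     # prints None (no exact overlap)
```

Demanding a perfect 12-nt join is a simple way to suppress chimeric joins from PCR and sequencing errors before calling sequence variants, at the cost of discarding read pairs with errors in the overlap.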
A robotics platform designed by Toronto researchers to screen thousands of COVID-19 samples at once has the potential to revolutionize how labs track the spread of viruses and other pathogens, according to new findings. The study, out Wednesday in Nature Communications, found that the next-generation, ultra-high-throughput sequencing platform, called C19-SPAR-Seq, designed by researchers from the Lunenfeld-Tanenbaum Research Institute (LTRI) at Sinai Health, has a sensitivity rate greater than 95 percent in positive cases during peak onset. "Identifying positive samples quickly and accurately is critical in beating this pandemic," said Dr. Jeff Wrana, senior investigator at the LTRI and professor in the Department of Molecular Genetics at the University of Toronto. "With new and potentially dangerous variants now circulating, this is a platform that is scalable, automated and capable of analyzing thousands of COVID-19 patient samples in a single instrument run." Wrana and fellow LTRI senior investigator Dr. Laurence Pelletier, in collaboration with University of Toronto professor Dr. Ben Blencowe, credit a strong team of eager trainees who shifted from other areas of research to help develop and validate the platform, allowing the team to go from concept to published paper in under 12 months. "The co-operation of the Mount Sinai Hospital clinical diagnostic lab was the other key ingredient to our success," said Pelletier. "To date the shared microbiology lab, headed by Dr. Tony Mazzulli, has provided access to thousands of samples." In late 2020, the team pivoted again to use the robotics platform to screen thousands of positive samples for variants by rapidly sequencing fingerprint regions of the viral genome to look for key mutations. "It has been an absolute pleasure to work with Dr. Jeff Wrana and his team at the LTRI," said Dr. Mazzulli, microbiologist-in-chief for Sinai Health and University Health Network (UHN).
"His novel SPAR-Seq System is cutting-edge technology and his team's ability to sequence COVID-19 samples in real time has tremendous potential for impacting our understanding of the epidemiology and spread of novel mutants in the province." The platform is also cost-effective. The study notes it only costs about $8 USD per test when running thousands of samples at once, as the cost per sample decreases due to economies of scale. "It's extremely reliable and readily adaptable," said Javier Hernandez, a junior researcher in the Wrana lab who co-led the study with Drs. Marie-Ming Aynaud and Seda Barutcu. "The turnaround is approximately 24 hours. It's very simple as we've automated practically every step in the process. For me, it's been a very exciting thing to see my work make a difference."
10.1038/s41467-021-21653-y
Medicine
New study into a rare type of cancer in abdomen lining shows possible immunotherapy treatment
Sally Hallam et al. The transition from primary colorectal cancer to isolated peritoneal malignancy is associated with an increased tumour mutational burden, Scientific Reports (2020). DOI: 10.1038/s41598-020-75844-6 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-020-75844-6
https://medicalxpress.com/news/2020-11-rare-cancer-abdomen-lining-immunotherapy.html
Abstract Colorectal peritoneal metastases (CPM) develop in 15% of colorectal cancers. Cytoreductive surgery and heated intraperitoneal chemotherapy (CRS & HIPEC) is the current standard of care in selected patients with limited resectable CPM. Despite selection using known prognostic factors, survival varies and morbidity and mortality are relatively high. There is a need to improve patient selection, and there is a paucity of research concerning the biology of isolated CPM. We aimed to determine the biology associated with the transition from primary CRC to CPM and with failure to respond to treatment with CRS & HIPEC, in order to identify patients suitable for treatment with CRS & HIPEC and targets for existing, repurposed or novel treatment strategies. A cohort of patients with CPM treated with CRS & HIPEC was recruited and divided according to prognosis. Molecular profiling of the transcriptome (n = 25), epigenome (n = 24) and genome (n = 21) of CPM and matched primary CRC was performed. CPM were characterised by frequent Wnt/ β catenin negative regulator mutations, TET2 mutations, mismatch repair mutations and high tumour mutational burden. Here we show the molecular features associated with CPM development and with failure to respond to CRS & HIPEC. Potential applications include improving patient selection for treatment with CRS & HIPEC and informing future research into novel and personalised treatments targeting the molecular features identified here. Background Little is known about the biology of isolated colorectal peritoneal metastasis (CPM), which, although relatively rare, carries a high mortality rate 1 . Understanding tumour biology may identify which patients with primary colorectal cancer (CRC) are at risk of developing CPM, and which are suitable for treatment with cytoreductive surgery and heated intra-peritoneal chemotherapy (CRS & HIPEC).
CRS & HIPEC (usually using an agent such as mitomycin C or, more recently, oxaliplatin) aims to achieve macroscopic tumour resection with multiple visceral and peritoneal resections and ablation of microscopic disease. Five-year survival, however, varies widely, and morbidity and mortality are relatively high 2 . There is a need therefore to improve patient selection, allowing alternative existing or novel treatment strategies to be used for patients unlikely to respond. Primary CRC research has identified markers of response to specific treatments, for example KRAS mutation in selection for anti-EGFR mAb therapy 3 . Gene expression signatures have been developed and are in clinical use for prognostication and therapeutic stratification in breast cancer 4 , 5 , 6 , 7 . Gene expression profiling in primary CRC has identified signatures associated with the development of metastasis 6 . One small study combining a small number of CPM with a larger cohort of appendix adenocarcinoma identified a signature predictive of reduced overall survival (OS) following CRS & HIPEC; these are, however, two biologically distinct tumours, appendix adenocarcinoma having a significantly better prognosis 7 . The dysregulation of methylation is a key step in tumorigenesis. CpG island promoter methylation (CIMP) appears to be stable between matched primary CRC and hepatic metastases, suggesting an epigenetic methylation programme is established prior to the development of metastasis 8 , 9 , 10 . Hypermethylation of KRAS, Wnt modulators, tumour suppressor genes, CIMP and hypomethylation of oncogenes are associated with an unfavourable response to chemotherapy and anti-EGFR antibodies as well as tumour recurrence and reduced OS in primary and metastatic CRC 11 , 12 , 13 , 14 , 15 , 16 . Chromosomal instability is ubiquitous in cancer; increased copy number alteration, indicative of chromosomal instability, is found in metastatic CRC 17 , 18 . Lopez-Garcia et al.
19 demonstrated that the evolution of chromosomal instability depends on cellular tolerance, either via dysregulation of TP53 or via alternative escape mechanisms such as dysfunction of BCL9L-regulated caspase signalling. CRC metastatic drivers are less clearly defined, apart from TP53, which is well characterised as being present in metastatic cancer 20 . Some studies have found mutations exclusive to metastatic sites 21 , 22 , whereas others found similar patterns of mutation between primary and metastasis 23 . Studies have examined the somatic mutations in CPM and their prognostic implications. These studies are limited to individual or small panels of mutations routinely tested for in clinical practice, with limited evidence to suggest which genes should be included in panel sequencing in CPM. Schneider et al. examined the KRAS and BRAF mutation status of patients with CPM who underwent CRS & HIPEC 24 . They found mutations of RAS/RAF were associated with reduced OS independent of the use of targeted anti-EGFR treatment 24 . Sasaki et al. examined the KRAS, BRAF and PIK3CA mutation status of patients with metastatic CRC, with or without CPM 25 . They found the incidence of BRAF mutation was significantly associated with the presence of CPM but not with prognosis 25 . The landscape of metastatic colorectal cancer was studied by the MSK-IMPACT 20 group, which undertook panel-based sequencing of 1134 metastatic colorectal cancers. Of these, 39 patients were defined as having “peritoneal” malignancy; it is unclear whether these were isolated peritoneal metastases. Only 14 of these patients had metastasectomy, and 7 of these had peritonectomy, suggesting isolated disease suitable for resection. These tumours were also not studied with matched primary tumours of origin. There is a need to improve the outcomes for patients with CPM and significant variation in survival despite patient selection for treatment using known prognostic factors.
There is a paucity of knowledge concerning CPM tumour biology. Understanding tumour biology will identify patients with primary CRC at risk of developing CPM, and those suitable for treatment with CRS & HIPEC or alternative existing and novel treatment strategies. This study aims to determine the landscape of gene expression, methylation, and somatic mutation profile associated with the transition from primary CRC to isolated CPM and determine the association between these and prognosis following CRS & HIPEC in order to identify therapeutic targets. Methods Patient cohorts This study obtained ethical approval from the North West Haydock Research Ethics Committee (15/NW/0079), project ID 17/283. Participants gave informed consent. All experiments were performed in accordance with relevant guidelines and regulations. Consecutive retrospective patients were recruited from an internally held database of all patients undergoing CRS & HIPEC at Good Hope hospital from 2011 to 2017. Patients with CPM (adenocarcinoma), no extra-abdominal metastasis, a complete resection (CC0) and a peritoneal carcinomatosis index (PCI) of < 12 were eligible for inclusion. The completeness of cytoreduction score describes the degree of macroscopic tumour remaining after CRS and the likelihood of benefit from intraperitoneal chemotherapy 26 . Patients with no residual tumour score CC0, those with residual tumour < 0.25 cm score CC1, and those with residual tumour 0.25–2.5 cm score CC2. The extent of peritoneal metastasis is described by the PCI score. A PCI of ≥ 12 is a poor prognostic factor for patients undergoing CRS & HIPEC 27 . Patients were divided into two groups.
CRS & HIPEC is a long operation associated with a protracted inpatient and high-dependency (HDU) or intensive care (ITU) stay, a mortality of 1–12%, a morbidity of 7–63%, and a prolonged post-operative recovery 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 . With palliative chemotherapy DFS is 11–13 months, and therefore patients with disease-free survival (DFS) < 12 months after treatment with CRS & HIPEC were defined as “non-responders” 38 . Patients undergoing therapy with DFS > 12 months were defined as “responders”. Patients were imaged with CT, reported by an experienced CPM radiologist. Diagnostic laparoscopy was not used: not all patients with recurrence are suitable for iterative CRS & HIPEC, so it is not a standard procedure in their follow-up. Adhesions following primary excision and CRS & HIPEC may also preclude accurate assessment of peritoneal recurrence in all areas with laparoscopy. Disease recurrence was determined when confirmed by CT and MDT review. Demographic, tumour and treatment details were compared between the prognostic cohorts. For continuous variables, Student's t-test was applied to normally distributed data and the Mann–Whitney U test to non-normally distributed data. Categorical variables were compared with the chi-squared test or Fisher's exact test. A p value of < 0.05 was considered statistically significant. DFS between the responders and non-responders was compared using the Kaplan–Meier method. Statistical analysis was performed in IBM SPSS Statistics for Windows, Version 24.0 39 . Nucleic acid extraction DNA and RNA were extracted from histologically confirmed formalin-fixed, paraffin-embedded (FFPE) scrolls using the Covaris E220 evolution focused-ultrasonicator and the truTRAC FFPE total NA Kit. All peritoneal metastases samples were taken at the commencement of surgery. Nucleic acid concentration was quantified using the Qubit 3.0 Fluorometer and Qubit RNA/DNA HS (high sensitivity) assay kit.
Nucleic acid quality was measured by electrophoresis using the Agilent 2200 TapeStation Nucleic Acid System, Agilent 2200 TapeStation Software A.01.05 and the Agilent High Sensitivity RNA/DNA ScreenTape and reagents. RNA library preparation, sequencing and bioinformatics RNA library preparation was performed using the Lexogen Quant Seq 3′ mRNA-Seq Library Prep kit. RNA libraries were denatured, diluted, loaded onto a 75-cycle High output flow cell and sequenced using the NextSeq500 at 2.5–5 million reads 40 . Quality control, trimming and alignment to the reference genome (NCBI build 37, hg19) were performed with the Partek Flow genomics suite software package (Partek, St Louis, MI, USA). The gene expression profiles of primary and CPM and responders and non-responders were compared using Gene-Specific Analysis (GSA) modelling in Partek Flow with a false discovery rate (FDR) of < 0.1. Gene set enrichment analysis (GSEA) and gene expression pathway analysis were performed using Partek Flow; a p value of ≤ 0.05 was considered statistically significant. CMS and CRIS classifications were performed using ‘CMScaller’ (v0.99.1) in the R package, version 2.10.2 38 , 41 , 42 . Fisher's exact test was used to compare contingency between primary and CPM and responders and non-responders in IBM SPSS Statistics for Windows, Version 24.0 39 . A p value of < 0.05 was considered significant. Methylation array and bioinformatics DNA was treated with sodium bisulphite using the Zymo EZ-DNA methylation kit, according to the manufacturer's instructions. Degraded FFPE DNA was restored prior to methylation array with the Infinium HD FFPE restore kit, according to the manufacturer's instructions. Methylation array was performed according to the Infinium MethylationEPIC BeadChip Kit manufacturer's instructions. BeadChips were imaged using the Illumina iScan system. Initial data quality was checked using GenomeStudio Methylation Module Software.
Raw data were loaded into R (version 3.5.0) via RStudio using the minfi package. Bioinformatics analysis was performed using the Chip Analysis Methylation Pipeline (ChAMP) R package, version 2.10.2 43,44. Probes with signals from fewer than three functional beads, probes of low confidence (detection p value > 0.01), probes covering SNPs, non-CpG probes and those located on the X and Y chromosomes were filtered. Beta-mixture quantile normalisation (BMIQ) was applied and a singular value decomposition (SVD) performed to identify batch effects. The association between methylation and prognosis was determined using the Bioconductor R packages limma and bumphunter. Copy number alteration calling was performed using the ChAMP CNA function with a significance threshold of p < 1 × 10⁻¹⁰. Exome capture, high-throughput sequencing and bioinformatics DNA was sheared using the Covaris E220 evolution focused-ultrasonicator to produce a uniform 150 bp fragment size. Libraries were prepared using the TruSeq Exome Kit, then denatured, diluted, loaded onto a 150-cycle High Output flow cell and sequenced using the NextSeq500. Sequencing reads were assessed using FastQC. Sequences with a Phred score of < 30 were removed, giving a base call accuracy of at least 99.9%. Sequence reads were aligned to the human reference genome (hg19) using the Burrows–Wheeler Aligner (BWA) package 45. SAMtools was used to generate chromosomal coordinate-sorted BAM files and Picard was used to remove PCR duplicates 46. Somatic variants were called from matched tumour-normal samples using Strelka2 in tumour/normal mode 47. Somatic variants were viewed, filtered and annotated in Genomics Workbench 48. Mutations with a MAF of > 1% in known variant databases (dbSNP and 100,000 Genomes) were filtered out. Mutations were annotated with information from known variant databases (dbSNP and 100,000 Genomes), PhastCons scores and functional consequences.
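Two of the filters above are simple numeric thresholds: reads below Phred Q30 are discarded (Q30 corresponds to a 1-in-1000 base-call error probability, i.e. 99.9% accuracy), and variants with a population minor-allele frequency above 1% are removed. A minimal sketch; the `maf` record field and the variant list are hypothetical, for illustration only:

```python
def phred_to_error(q):
    """Phred quality Q = -10*log10(P_error), so P_error = 10**(-Q/10)."""
    return 10 ** (-q / 10)

def filter_common_variants(variants, maf_cutoff=0.01):
    """Drop variants whose population minor-allele frequency exceeds the
    cutoff (> 1% here), mirroring the dbSNP / 100,000 Genomes filter.
    'maf' is a hypothetical record field for this sketch; variants with
    no database entry (no 'maf') are kept as potentially novel."""
    return [v for v in variants if v.get("maf", 0.0) <= maf_cutoff]

# Q30 corresponds to a 0.001 error probability, i.e. 99.9% base-call accuracy
accuracy_q30 = 1 - phred_to_error(30)
```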
The prognostic groups were compared using Fisher's exact test to identify potential candidate driver mutations for non-responders. Somatic mutations were entered into the IntOGen platform for further analysis 49. The IntOGen-mutation platform incorporates a number of pipelines to identify cancer driver mutations and activated pathways 49. The OncodriveFM pipeline identifies mutations with a high functional impact using three scoring methods (Sorting Intolerant From Tolerant (SIFT) 50, PolyPhen2 51 and Mutation Assessor scores) 49,52, and assesses the likelihood that such mutations are cancer drivers. The OncodriveCLUST pipeline assesses the clustering of mutations to identify relevant activated pathways 49. MSI assessment was carried out using MSI_classifier_v3 ( ). Ethics approval and consent to participate North West Haydock Research Ethics Committee (15/NW/0079), project ID 17/283. Results Patient cohort From 2011 to 2017 a total of n = 161 patients underwent CRS & HIPEC at University Hospitals Birmingham, n = 88 of them for metachronous CPM. Patients were excluded for the following reasons: other primary tumour (appendix, pseudomyxoma peritonei, ovarian) n = 49; synchronous colorectal cancer n = 26; no primary tumour available n = 53; CC2 resection n = 8 26; PCI of ≥ 12 n = 20; follow-up period of ≤ 12 months n = 27; leaving n = 28 patients. Complete information regarding the primary CRC pathology and treatment was available for n = 26 patients, who form the basis of this study. Each patient had matched normal, primary CRC and CPM samples. Thirteen patients had a median DFS of 24 months (range 15–72) following CRS & HIPEC and formed the 'responders' cohort; thirteen patients had a median DFS of 6 months (range 2–11) and formed the 'non-responders'. There were no significant differences between the cohorts in demographics, primary CRC or CPM tumour, treatment or follow-up (Table 1). No patients had neoadjuvant therapy for their primary tumour.
Three patients (all in the responders group) had poorly differentiated mucinous adenocarcinoma, one (in the non-responders group) had signet ring adenocarcinoma, and all the others had moderately differentiated adenocarcinoma. Table 1 Comparison of responders and non-responders to CRS & HIPEC. Full size table Following nucleic acid extraction all patients had adequate CPM RNA for RNAseq (n = 13 responders, n = 13 non-responders); n = 25 had matched primary CRC samples. For the methylation array n = 24 patients (n = 12 responders, n = 12 non-responders) had adequate DNA. As the Infinium methylation array comprises a 32-sample kit, n = 4 responders and n = 4 non-responders primary tumours were matched to these. For exome sequencing n = 24 patients (n = 12 responders, n = 12 non-responders) had adequate DNA from both the primary and CPM samples; extraction of DNA from normal tissue resulted in n = 21 samples (n = 9 responders, n = 12 non-responders). Exome sequencing Across all six sequencing runs, we obtained a median coverage of 60× (range 42–166) with a median uniformity of 88% (range 71–89). Somatic mutations identified in the primary and matched CPM cohort In the matched CPM cohort, a total of n = 244,531 somatic SNVs were identified (CPM-primary subtraction), significantly more than found in the matched primary cohort (n = 112,420). Nine CPM samples (9/24, 56%) had a high tumour mutational burden (TMB ≥ 10 mut/Mb) 53 compared with 7/24 (30%) samples in the matched primary cohort. Mutations were identified in n = 69 of n = 95 known CRC driver genes; n = 51 were shared between the primary and CPM and n = 13 were novel (supplementary table S1) 54. Of the somatic variants identified in CPM, n = 58,958 (29%) were present in the primary CRC; n = 205,552 variants occurred exclusively in the CPM, suggesting a significant accumulation of mutations in the transition to CPM (Fig. 1).
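The TMB ≥ 10 mut/Mb threshold used above to call hypermutation normalises the somatic mutation count by the amount of sequenced territory. A minimal sketch; the 450-mutation count and 30 Mb exome size below are assumptions for illustration, not figures from the study:

```python
def tumour_mutational_burden(n_somatic_mutations, megabases_covered):
    """TMB expressed as somatic mutations per megabase of sequenced territory."""
    return n_somatic_mutations / megabases_covered

def is_hypermutated(tmb, cutoff=10.0):
    """TMB >= 10 mut/Mb is the hypermutation threshold cited in the text."""
    return tmb >= cutoff

# Hypothetical example: 450 somatic mutations over a 30 Mb exome -> 15 mut/Mb
example_tmb = tumour_mutational_burden(450, 30)
```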
OncodriveFM identified n = 265 potential driver genes with high levels of functional mutation (Q-value < 0.05) in the CPM cohort, including FLNB, SPTB, PPL, TP53, PDE4DIP, RIOK2, CDC16, NUP98 and SVEP1 (supplementary table S2); however, these results must be treated with caution due to the bias of the hypermutator phenotype. KEGG pathway analysis of mutations demonstrated enrichment in pathways concerning the immune system, signalling, metabolism and cancer (supplementary table S1). In the CPM group KRAS or BRAF status was not significantly associated with prognosis (χ² p = 1.00). Figure 1 Venn diagrams depicting the frequency of mutations exclusive to and shared between primary CRC and matched CPM, and responders and non-responders. Full size image Clonality analysis with SuperFreq showed significant (Wilcoxon rank p = 0.007) differences between the responders and non-responders, with a median of 2 clones in the responders' primary tumours (range 1–4) and 3 clones in the non-responders' (range 2–7). In the peritoneal metastases there was a median of 3 clones in both the responders (range 1–4) and non-responders (range 2–5). Of note, in the non-responders during clonal expansion, the dominant clone in the peritoneal metastasis arose de novo rather than being a prior clone that existed in the primary tumour (Supplementary Fig. 1, S1e). Of the primary tumours, 9/21 were MSI (47.4%) and 10/21 were MSS (52.6%), whereas in the isolated peritoneal metastasis group 4/21 (19.0%) were MSS and 17/21 (81.0%) were MSI, demonstrating a significantly higher rate of MSI in the isolated peritoneal metastasis group (p < 0.05, χ²). Non-responders had a higher frequency of somatic mutations: 60% of all mutations in the CPM cohort vs. 40%. Non-responders more commonly had a high tumour mutational burden (TMB ≥ 10 mut/Mb) 53, 56% vs. 44%.
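The MSI/MSS comparison above rests on a chi-squared test of a 2×2 table. For one degree of freedom the p-value has a closed form via the complementary error function, so the test can be sketched in pure Python (the counts in the test are toy values, not the study's data):

```python
import math

def chi2_2x2(a, b, c, d, yates=False):
    """Chi-squared test of independence for the 2x2 table [[a, b], [c, d]].

    Uses the shortcut chi2 = n*(ad - bc)^2 / (row and column margin product);
    with one degree of freedom, p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    diff = abs(a * d - b * c)
    if yates:
        diff = max(0.0, diff - n / 2)  # Yates continuity correction
    chi2 = n * diff ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

For small tables like these, Fisher's exact test (also used in the study) is the usual alternative when expected counts fall below five.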
Of the somatic mutations identified in non-responders, n = 35,461 (30%) were present in responders; n = 145,089 variants occurred exclusively in non-responders, suggesting that a high tumour mutational burden was associated with non-response to CRS & HIPEC (Fig. 1). Mutational signature analysis of the MSI tumours demonstrated a predominance of signature 5 (associated with mutational "clock" effects), signature 26 (associated with defective mismatch repair) and signature 20 (associated with defective mismatch repair). Comparison of somatic mutations in responders and non-responders identified two potential candidate genes to identify non-responders, FAM13A and PIEZO2 (Fisher's exact p < 0.05, FDR = 0.53) (Table 2). Table 2 Potential candidate variants, non-responders to CRS & HIPEC. Full size table Differential gene expression Differential gene expression between primary CRC and matched CPM Primary CRC and matched CPM showed differential expression of n = 65 genes with an FDR < 0.1 (Fig. 2). Sixteen genes showed significantly decreased expression in CPM compared with primary CRC (Table 3). Forty-nine genes showed significantly increased expression in CPM compared with primary CRC (Table 3). A KEGG pathway analysis was performed to identify the enriched biological functions among the differentially expressed genes (Supplementary Table 1). The expression of FABP6, an intercellular bile acid transporter, was decreased 34.30-fold in CPM. OLFM4 is a target of the Wnt/β-catenin pathway; its expression was reduced 3.77-fold in CPM. DCN and PTEN are able to initiate a number of signalling pathways, including ERK and EGFR, leading to growth suppression; their expression was increased 3.3-fold and 3.25-fold respectively in CPM, which was unexpected and in contrast to the literature 55. NF-κBIA expression was increased 3.24-fold in CPM; its upregulation may reflect increased NF-κB activity in the development of CPM 56.
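The fold changes reported above (e.g. the 34.30-fold decrease in FABP6) compare mean expression between groups; on the log2 scale a pseudocount is commonly used to guard against zero counts. A minimal, hypothetical sketch (the pseudocount convention is an assumption for illustration, not a detail stated in the text):

```python
import math

def log2_fold_change(mean_case, mean_control, pseudocount=1.0):
    """log2 fold change between group means; the pseudocount guards
    against division by zero for genes with no counts in one group."""
    return math.log2((mean_case + pseudocount) / (mean_control + pseudocount))

def direction(l2fc):
    """Sign of the log2 fold change: 'up' in cases, otherwise 'down'."""
    return "up" if l2fc > 0 else "down"
```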
Figure 2 Heatmap of differential gene expression for the 100 genes with the highest variance between primary CRC (P, red) and colorectal peritoneal metastasis (CRS, blue). Sample type is indicated on the X axis of the heatmap with individual genes on the Y axis. Individual IDs of each patient are below the indicators of primary or CRS sample. Gene expression, as indicated by the Z-score, is displayed as colour ranging from green to black to red as shown in the legend. Created in Partek Flow. Full size image Table 3 The top 10 genes with significantly altered expression (FDR < 0.1) in CPM samples compared with primary CRC samples. Full size table Gene specific enrichment analysis (GSEA) results are presented in supplementary table 5. We identified 848 upregulated gene ontology categories in CPM and 14 upregulated gene pathways which may contribute to the pathogenesis of CPM: the mTOR pathway as well as immune pathways including the intestinal immune network for IgA production, leukocyte transendothelial migration and the actin cytoskeleton pathway. Differential gene expression between non-responders and responders to CRS & HIPEC One hundred and forty-nine genes showed increased expression in non-responders (Fig. 3). Five genes showed decreased expression in non-responders; however, none had a fold change ≥ 1.5, suggesting minimal difference in expression between the responders and non-responders (Supplementary Table 2). KEGG pathway analysis demonstrated enrichment in endocytosis, metabolism, phagocytosis, cell movement and architecture, bacterial and viral cell infection, transcription and the expression of genes controlling apoptosis, cell cycle, oxidative stress resistance and longevity (Table 3). The expression of CEACAM1, a member of the carcinoembryonic antigen (CEA) immunoglobulin family, was increased 8.27-fold in non-responders 57.
Figure 3 Heatmap of differential gene expression for the top 100 genes as ranked by variance between responders (blue) and non-responders (red). Sample type is indicated at the transverse border of the heatmap with individual genes on the longitudinal border. Gene expression, as indicated by the Z-score, is displayed as colour ranging from green to black to red as shown in the legend. Created in Partek Flow. Full size image AXIN1 encodes a cytoplasmic protein which forms part of the β-catenin destruction complex, a negative regulator of the WNT signalling pathway 58. AXIN1 expression was increased 5.42-fold in non-responders 59. Gene specific enrichment analysis (GSEA) results are presented in supplementary table 6. We identified 591 upregulated gene ontology categories and 15 upregulated gene pathways which may contribute to the pathogenesis of CPM: endocytosis, the adherens junction pathway and immune pathways such as those regulating the bacterial invasion of epithelial cells. Amongst the n = 51 primary CRC and CPM samples, n = 29 were representative of a CMS subtype; the remaining n = 22 samples did not have a consistent pattern (Fig. 4). Comparison of the CMS subtypes between primary and CPM and between prognostic groups revealed an apparent transition from primary CRC to CPM. No primary CRC samples were classified as CMS4 (the mesenchymal subtype characterised by prominent transforming growth factor activation, stromal invasion and angiogenesis) compared to 31% of CPM (p = 0.085). Secondly, non-responders were more commonly CMS4, 46% vs. 15% (p = 0.005, Table 4). Figure 4 Sankey diagram depicting the transition in consensus molecular subtypes (CMS) from primary to CPM. CMS classifications were performed using ‘CMScaller’ (v0.99.1) in the R/Bioconductor statistics package. Classifications include CMS1 to CMS4; non-consensus samples do not have a consistent pattern of subtype label association.
Primary CRC samples, classification and number are shown on the left of the diagram, with CPM samples, classification and number on the right. Fisher's exact p value 0.085; values in parentheses are percentages. Full size image Table 4 CMS classification, responders vs. non-responders to CRS & HIPEC. Full size table Methylation Differential methylation between primary CRC and matched CPM Thirty-two samples in total were hybridised successfully to the Illumina HumanMethylation EPIC microarrays. DMPs were called between the primary CRC and CPM. The top-ranked differentially methylated probe was cg04146982, BF 34.5, adjusted p value 5.67 × 10⁻¹⁶ (chr8:144,943,810–144,943,810, hg19 coordinates), which tags a CpG dinucleotide 3651 bp upstream of the transcription start site of the gene Epiplakin 1 (EPPK1) 60. EPPK1 is part of the plakin family, an important component of the cell cytoskeleton 61. The other DMP was cg12209861, BF 7.1, adjusted p value 0.059 (chr4:37,459,078–37,459,078, hg19 coordinates), 3526 bp upstream of the transcription start site of the gene Chromosome 4 Open Reading Frame 19 (C4orf19). DMRs were called between primary CRC and CPM via the dmrLasso function of the ChAMP pipeline (Supplementary Table 3). The top 10 DMRs were in the regions of IGF2, ZNF461, RASGRF1, CELF4, ZSCAN18, EDNRB, ZBED9, VTRNA2-1, ZNF256 and EGFLAM. KEGG pathway analysis did not reveal any significantly enriched pathways. Comparison of CNA between primary and CPM via methylation arrays did not identify any significant differences at a stringent threshold of p < 1 × 10⁻¹⁰; however, a number of CNAs were identified at a lower significance threshold (p = 2.78 × 10⁻⁷) (Supplementary Table 4). Genes showing CNA gains of known significance in patients with CPM included TRIM3, 5, 6, 21 and 22, and MT1A, 2A, 3 and 4, which encode proteins of the metallothionein family.
Differential methylation between non-responders and responders to CRS & HIPEC The top-ranked differentially methylated probe was cg07951355, BF = 6 (chr1:40,123,717), which tags an intergenic region 1076 bp before the gene NT5C1A. Cg25909064, BF 4, adjusted p value 0.47 (chr11:120,081,487–120,082,345), tags an intron of the gene OAF, and cg12977942, BF 4, adjusted p value 0.47 (chr5:92,839,309–92,839,309), tags an intron of the gene NR2F1-AS1 60. Six significant DMRs (Supplementary Table 3) were identified in the regions of NKX6-2, CHFR, GATA3, IRX5, HCK and BC019904. KEGG pathway analysis did not reveal any significantly enriched pathways. Comparison of CNA between the CPM prognostic groups identified recurrent gene losses at chromosomes 3, 4, 14, 15, 17 and 19 (Supplementary Table 4). CNA losses clustered in the RAS-MAPK-ERK signalling pathway, suggesting dysregulation in non-responders. Comparison of CNA between the CPM prognostic groups also identified n = 19 gene gains at chromosomes 9, 10 and 11. Genes showing CNA gains in non-responders included SIT1, RNF38, MELK, PAX5, SHB, ZEB1, DEAF1, ANTXR, EPS8L2 and PIDD1. Discussion This study determined the gene expression, CNA, methylation and somatic mutation profiles of primary CRC and matched isolated CPM to establish whether there were changes associated with the development of CPM or predictive of prognosis for patients with CPM. To our knowledge, this is the first such analysis in a cohort of patients with isolated CPM suitable for treatment with CRS & HIPEC. The MSKCC cohort of metastatic cancer 20 covered a diverse range of metastatic cancers, none of which overlapped with the type we have studied: isolated colorectal peritoneal metastasis, with matched primary samples, suitable for cytoreduction. Within this study responders and non-responders to CRS & HIPEC were well matched by demographics, tumour stage, treatment and follow-up.
PCI varied between groups, with responders having a median PCI of 5 (range 3–12) and non-responders a median PCI of 8 (range 2–12). A PCI of greater than 12 is associated with reduced survival following CRS & HIPEC; no significant difference is consistently found at PCI levels below this 27. Comparison of patients with primary CRC and metachronous CPM identified biological changes associated with the transition from primary CRC to CPM. Hypermethylation, CNA and hypermutation resulted in the inactivation of tumour suppressors and oncogene activation in CPM (TP53, VTRNA2-1, TRIM proteins). These changes suggest a rapid rate of tumour growth unchecked by tumour suppressor or apoptotic mechanisms. Increased MAPK and Wnt/β-catenin pathway activation was noted in CPM. Gene expression of negative regulators of the Wnt pathway was reduced (OLFM4, DEFA6), negative Wnt regulators contained somatic mutations (APC, RNF43, FAM123B and TSC1), and the MAPK pathway gene RASGRF1 was hypermethylated, suggesting persistent activation of the MAPK and Wnt pathways. Multiple mutations of negative Wnt signalling regulators make this an attractive therapeutic target. Porcupine inhibitors block the palmitoylation of Wnt ligands, preventing Wnt signalling. The porcupine inhibitor LGK974 is active against tumours with mutation of the upstream negative Wnt regulator RNF43 and is a potential therapeutic in CPM 62. CPM contained a high proportion of MSH6 somatic mutations, suggesting deficiency in the mismatch repair pathway and MSI. MSH6 mutations are commonly found in isolated peritoneal metastasis 59. As expected for tumours with mismatch repair deficiency, both the primary CRC and CPM cohorts had a high tumour mutational burden; crucially, this suggests they may respond well to treatment with immune checkpoint inhibitors such as pembrolizumab 63, a new therapeutic avenue for these difficult-to-treat patients.
The frequency of hypermutation seen in our study (48%) was considerably higher than that observed for both the MSKCC metastatic disease cohort (5%) and the TCGA colorectal 64 cohort (10%). The expression of genes regulating innate immunity, however, was downregulated (DEFA6, DMBT1, MUC2) or altered via somatic mutations (HLA-A antigen), suggesting immune evasion in the transition to CPM, which may reduce the likelihood of successful PD-1 therapy. The expression of genes suppressing invasion, migration and EMT was downregulated or hypermethylated (MUC2, MMP26, ILK, FLNB, SPTB, PPL and SVEP1) and that of genes triggering these processes upregulated (CYR61, CXCL12, CTGF and CSTB). These changes suggest a mechanism by which CPM cells metastasise from the primary CRC. In keeping with the changes in EMT regulators, there appeared to be a transition in CMS subtypes towards CMS4 from primary CRC to CPM. The CMS4 subtype is an interesting therapeutic target: TGFβ signalling inhibitors and targeted immunotherapies have been trialled with success in pre-clinical models to block cross-talk between the tumour and its microenvironment and halt disease progression of stromal-rich CMS4 CRC 65,66. Methylation appeared to be dysregulated in CPM, with a bias towards a hypermethylator phenotype caused by somatic mutation of the TET2 tumour suppressor and the CHD7 chromatin regulator. Active DNA demethylation by TET enzymes is an important tumour suppressor mechanism in a variety of cancers 67,68,69. Downregulation of CES2, a gene known to activate the prodrug irinotecan (a chemotherapy used as part of the FOLFIRI regimen in the UK in the adjuvant treatment of primary CRC and CPM), was seen in this cohort. Resistance to the treatment of primary CRC may in part explain the development of CPM. CEACAM1 expression correlates with metastasis and reduced survival in CRC and was upregulated in this cohort of patients 70.
Novel therapies in the form of CEA-TCB IgG-based T-cell bispecific antibodies (cibisatamab) may therefore be of benefit 71. Additionally, in non-responders to CRS & HIPEC there was downregulation of the gene expression of negative regulators of the Wnt pathway (AXIN1), somatic mutation of key Wnt regulators (FAM13A) and hypermethylation of MAPK and TGF-β pathway markers (RAB8A, RAB34, FGF5 and BMP3), suggesting persistent activation of the MAPK, TGF-β and Wnt pathways. A recent randomised controlled trial has called into question the use of HIPEC in CPM: PRODIGE-7 treated patients with CPM with CRS & HIPEC or CRS alone, in addition to systemic chemotherapy, and suggested no added benefit from HIPEC. However, this trial was not powered to stratify the impact of HIPEC according to PCI score; on subgroup analysis, patients with a PCI of 11–15 had significantly improved median survival with the addition of HIPEC (41.6 vs. 32.7 months, p = 0.0209) 72. A relative weakness of our study is the small cohort of patients; the biological changes identified here form a starting point in identifying the tumour biology associated with the development of CPM and predicting non-responders to CRS & HIPEC. However, we have identified multiple potential targets for therapy, along with the important finding that CPM appears to be a hypermutated, hypermethylated, immune-evasive cancer, which means it may be targetable by emerging novel therapeutics. Our study findings have implications for the recent addition of oxaliplatin to HIPEC, as the FOXTROT study of neoadjuvant therapy in colorectal cancer showed that oxaliplatin has no effect in dMMR tumours. Conclusions Patients with colorectal peritoneal metastasis (CPM) secondary to colorectal cancer have limited survival with the best available treatments. Despite selection for treatment using known prognostic factors, survival varies widely and can be difficult to predict.
There is a paucity of knowledge concerning the biology of CPM; it is likely that there are additional biological markers of response to currently available as well as novel or re-purposed alternative treatments. Here we have comprehensively profiled a cohort of patients with isolated CPM and identified a number of therapeutically targetable alterations, including mutations in Wnt/β-catenin regulators (targetable via porcupine inhibitors), the mismatch repair pathway (via PD-1/CTLA-4 immunotherapy) and methylation regulators. We suggest that these are urgently investigated in a larger cohort, with the development of pre-clinical models, as the finding that these patients may be sensitive to immunotherapy in particular may radically change the therapy options available for this difficult-to-treat group of patients. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. Abbreviations CRC: Colorectal cancer CPM: Colorectal peritoneal metastasis CRS & HIPEC: Cytoreductive surgery and heated intraperitoneal chemotherapy DFS: Disease free survival DMR: Differentially methylated regions OS: Overall survival FFPE: Formalin fixed paraffin embedded
A new study from the University of Birmingham has found that 50% of patients with a rare type of cancer that has spread into the lining of their abdomen may be suitable for immunotherapy treatment. Unfortunately, for around 1% of bowel cancer patients, their cancer spreads to the lining of their abdomen (the peritoneal cavity), known as colorectal peritoneal metastasis (CPM). This type of spread in bowel cancer patients carries a very poor prognosis, and most patients do not survive beyond 12 months from diagnosis. Patients with CPM have a limited survival rate with the best available treatments. Conventional chemotherapy is ineffective, and current treatment consists of extensive surgery, which does not always work. This first-of-its-kind study, funded by Good Hope Hospital Charity, found that understanding the tumor biology may identify which patients with bowel cancer are at risk of developing CPM. Results published in Scientific Reports show that these patients carry a specific pattern of mutation that may make them sensitive to immunotherapy. Lead author Professor Andrew Beggs, from the University of Birmingham's Institute of Cancer and Genomic Sciences, said: "We have found that approximately 50% of patients with CPM have a type of genetic change, called hypermutation. This means they may be sensitive to immunotherapy, as this type of treatment has good results in other patient groups with hypermutations. "We also found potential sensitivity to a drug called a Porcupine inhibitor, based on another genetic marker identified in these patients. "This is the first study of its kind in the world for patients with CPM, and our results have shown this could provide a potentially curative option for patients, given the responses we have seen to immunotherapy in other cancers." Researchers will now look to set up an international clinical trial to examine the use of immunotherapy for patients with CPM.
10.1038/s41598-020-75844-6
Other
Reptilian root canal: Study reveals infection in jaw of ancient fossil
Reisz R R et al (2011). Osteomyelitis in a Paleozoic reptile: ancient evidence for bacterial infection and its evolutionary significance. Naturwissenschaften – The Nature of Science. DOI 10.1007/s00114-011-0792-1
http://dx.doi.org/10.1007/s00114-011-0792-1
https://phys.org/news/2011-04-pain-evolution-big-toothache-reptiles.html
Abstract We report on dental and mandibular pathology in Labidosaurus hamatus, a 275-million-year-old terrestrial reptile from North America, and associate it with bacterial infection in an organism that is characterized by reduced tooth replacement. Analysis of the surface and internal mandibular structure using mechanical and CT-scanning techniques permits the reconstruction of events that led to the pathology and the possible death of the individual. The infection probably occurred as a result of prolonged exposure of the dental pulp cavity to oral bacteria, and this exposure was caused by injury to the tooth in an animal that is characterized by reduced tooth replacement cycles. In these early reptiles, the reduction in tooth replacement is an evolutionary innovation associated with strong implantation and increased oral processing. The dental abscess observed in L. hamatus, the oldest known infection in a terrestrial vertebrate, provides clear evidence of the ancient association between terrestrial vertebrates and their oral bacteria. Introduction The rich fossil record of amniotes (extant reptiles, birds, mammals, and their extinct relatives) extends over the last 315 million years and spans three eras (Reisz 1997). Whereas Mesozoic dinosaurs and Cenozoic mammals often show evidence of pathology (Lucas and Schoch 1987; Rothschild 1997; Tanke and Rothschild 2002; Witzmann et al. 2008), including bite marks, healed scars, infections, and tumors, such features are poorly documented in Paleozoic amniotes (Reisz 1980; Johnson 1988; Reisz and Tsuji 2006; Huttenlocker et al. 2010), the first vertebrates to diversify extensively on land. The pathology reported here was discovered in the anterior part of the lower jaw (Fig.
1 ) in the largest and presumably oldest known individual of Labidosaurus hamatus , a member of the late Paleozoic group Captorhinidae (Modesto et al. 2007 ). Captorhinids were the first reptiles to diversify rapidly and disperse globally during the Paleozoic (Müller et al. 2007 ). They range in size from 25 cm in total length in late Carboniferous (Müller and Reisz 2005 ) and Early Permian (Heaton and Reisz 1980 ) forms, and achieve total lengths up to 2.5 m in some of the Middle and Late Permian species (Dodick and Modesto 1995 ; O'Keefe et al. 2005 ). During the Early Permian, members of this clade were the most commonly occurring reptiles in the fossil record. Fig. 1 Evidence of dental and mandibular pathology in L. hamatus , a basal reptile from the Lower Permian of Oklahoma. a Skull reconstruction in right lateral view, modified from Ref. 4. Shaded area represents region of the lower jaw shown in b and c . b CMNH 76876, a right hemimandible in lateral, occlusal, and medial views. c Longitudinal CT scans of the mandible shown in b , illustrating the internal changes that occurred in the anterior region of the jaw as a consequence of the infection. Only one ( t2 ) of the three anterior teeth was functional at the time this individual died. Remnants of the first ( rt1 ) and third ( rt3 ) teeth are visible in the CT scan, and were encapsulated into the mandible by dentary bone, probably after they were broken. Tooth sockets at positions 1 ( tp1 ) and 3 ( tp3 ) have been filled with bone. The direction of infection extends posteriorly from the first tooth position to the fourth open tooth position ( ots4 ) and to the lingual and labial abscesses. It is at the level where the pulp cavity of the teeth would have been in the living organism. Scale bar = 10 mm Full size image The more derived captorhinids evolved dental and cranial specializations as part of their adaptation to omnivory and high-fiber herbivory (Reisz and Sues 2000 ). 
In particular, they modified their dentition by attaching the teeth very strongly to the jaws through ankylosis, and by dramatically changing the pattern of tooth replacement. The normal pattern of tooth replacement seen in most other Paleozoic tetrapods is characterized by teeth that are relatively loosely attached to the jaw bones, and continuous waves of new teeth erupting at specific tooth positions or sockets, with older teeth being partly resorbed and then shed as the new teeth erupt in the same socket (polyphyodonty). This pattern of tooth replacement is also present in extant tetrapods, including amphibians and most squamates (Edmund 1960). Thus, several teeth in any jaw in such extant and fossil tetrapods can always be seen in the process of being replaced, with two teeth present in a single tooth position: the crown of a partially resorbed older tooth from an earlier wave of replacement, and another small tooth from the next wave of replacement growing at the base and slightly lingual to the older tooth. With continued resorption, the tooth of the previous wave of replacement is eventually shed and the younger tooth grows into full function in that tooth position (Edmund 1960). However, in the clade that includes captorhinids like Captorhinus, Labidosaurus, and Moradisaurinae (Fig. 2), the change in the pattern of dental development resulted in a dramatic decrease in tooth replacement waves, with older teeth being removed only occasionally, and by erosion, while new teeth did not erupt in the same tooth position as the older teeth. This highly modified pattern can be best seen in Captorhinus aguti, a species that developed multiple tooth rows (Bolt and DeMar 1975). The development of multiple tooth rows occurred by the eruption of a new series of teeth lingual to the older tooth row, with the wave of eruption extending mesially along the jaw. The older tooth row was not replaced; instead, an additional row was added.
Only the oldest tooth of an earlier series is occasionally replaced, and only when it lies in the way of the new wave (de Ricqles and Bolt 1983 ). Fig. 2 Phylogeny of Captorhinidae modified from Müller et al. ( 2006 ) and Modesto et al. ( 2007 ). Previous studies (Bolt and DeMar 1975 ; de Ricqles and Bolt 1983 ; Modesto 1996 ) show that reduced cycles of tooth replacement ( RTR ) evolved in the ancestor of Captorhinus , moradisaurines ( Labidosaurikos , Moradisaurus , Rothianiscus ), and Labidosaurus . Skull reconstructions of Romeria texana , C. aguti , Labidosaurikos meachami , and L. hamatus from Heaton ( 1979 ), this study, Dodick and Modesto ( 1995 ), and Modesto et al. ( 2007 ) Full size image The overall result, in all derived captorhinids, is a dramatic reduction in the replacement of old teeth with new ones. This can be seen even in single-tooth-rowed forms like Labidosaurus and Captorhinus magnus , where there is usually no gap in the tooth row, and rarely is there any evidence of a tooth in the process of being replaced, as seen in the more basal members of the clade (Modesto 1996 ). The deep implantation and strong attachment (ankylosis) of the teeth into the jaw were clearly advantageous in these derived captorhinids. In addition, the reduction and changes in tooth replacement also allowed for the development of multiple-tooth-rowed forms through the addition of rows of teeth (Reisz and Sues 2000 ), a design that is ideally suited for increased oral processing in omnivorous and herbivorous animals like C. aguti and moradisaurine captorhinids. Careful preparation of several exquisitely preserved specimens while completing a thorough, detailed description of the cranial anatomy of the captorhinid reptile L. hamatus (Modesto et al. 2007 ) revealed a remarkable pathology in one jaw. 
Since several complete skulls were prepared as part of that analysis, we are confident in our interpretation that the unusual features of this specimen can be clearly attributed to modifications and damage that occurred during the lifetime of the individual, rather than to postmortem, taphonomic, or preparatory effects. We employed traditional paleontological techniques and modern computerized tomographic scanning imagery to examine dental pathology in the Lower Permian captorhinid L. hamatus . Methods The study specimen, CMNH (Carnegie Museum of Natural History, Pittsburgh, Pennsylvania) 76876, is an isolated, partial right hemimandible from the Lower Permian “ Labidosaurus pocket” locality near Coffee Creek, Baylor County, TX (Modesto et al. 2007 ). CMNH 76876 was prepared manually using pneumatic airscribe equipment and pin vises. The specimen was then CT scanned using a Philips MX 8000 QuadCT scanner at Thunder Bay Regional Health Sciences Centre, ON, at 800-μm slice thickness, rendering 24 longitudinal, 44 coronal, and 359 transverse slices. Results Examination of CMNH 76876 shows that the teeth in the first and third positions were clearly damaged but not replaced in the normal reptilian fashion, in which new teeth emerge from the lingual side of each empty socket. Instead, the tooth sockets were plugged with bone, with the result that fragments of the roots became encapsulated (Fig. 1c ), an unusual feature that could only occur while the organism was alive. Farther distally, three open tooth sockets were carefully prepared, and they show partly damaged interdental and strongly damaged lingual and labial walls in an otherwise perfectly preserved region of the mandible. Here again, we were able to determine that the damage developed during the lifetime of the organism, but in this case the trabecular bone exposed in the enlarged tooth sockets and in the damaged areas around them indicates that the damage was caused by infection. 
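The three slice stacks named in the Methods (24 longitudinal, 44 coronal, and 359 transverse slices of one scan) are simply orthogonal resamplings of a single 3D intensity volume. As a minimal, hypothetical sketch (random placeholder intensities, not the actual CMNH 76876 scan data), a volume of shape (24, 44, 359) yields exactly those three stacks by slicing along each array axis:

```python
import numpy as np

# Stand-in CT volume; axis lengths mirror the slice counts reported in
# Methods (24 longitudinal, 44 coronal, 359 transverse). The intensities
# are random placeholders, not real scan data.
volume = np.random.rand(24, 44, 359)

def orthogonal_slice(vol, plane, index):
    """Return one reformatted slice from a CT volume.

    plane: 'longitudinal' (axis 0), 'coronal' (axis 1) or 'transverse' (axis 2).
    """
    axis = {"longitudinal": 0, "coronal": 1, "transverse": 2}[plane]
    return np.take(vol, index, axis=axis)

# The middle transverse slice is a 24 x 44 image spanning the other two planes.
mid_transverse = orthogonal_slice(volume, "transverse", 359 // 2)
print(mid_transverse.shape)  # (24, 44)
```

Each longitudinal image like those in Fig. 1c corresponds to one axis-0 slice of such a volume.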
Similarly, the lateral side of the mandible shows bone destruction in the form of a deep groove that runs posteroventrally from the tooth-bearing jaw margin at the level of the damaged interdental wall between tooth sockets 5 and 6, and extends deeply below the cortical layers into the trabecular part of the bone. An internal line visible in the CT scans (Fig. 1c ) extends from tooth positions 1 to 4; it represents internal loss of bone through infection directly beneath the tooth row and clearly demonstrates the direction of infection, extending posteriorly from the first tooth position. Discussion Our extensive knowledge of the osteology and patterns of dental replacement in captorhinids, developed over several decades of study of these ancient reptiles, allows us to reconstruct the sequence of events that occurred in this individual. First, there was an initial loss of anterior mandibular teeth, possibly from trauma, followed by a relatively slow, bony encapsulation that covered the open pulp cavity of the damaged tooth, trapping oral bacteria inside the jaw. The surrounding tissues became involved in the inflammatory reaction through the spread of pyogenic organisms, the acute localized periapical abscess slowly transforming into chronic osteomyelitis (White and Pharoah 2000 ). The inflammatory reaction extended posteriorly to the level of tooth positions 4–7. There, the osteomyelitis produced a radiolucent area, and quite possibly bony sequestra, resulting in fistula formation and allowing pus to drain extraorally. As a consequence of the infection, teeth 4–6 (but not the tooth in position 7) were prematurely exfoliated and the bone of the jaw was irreversibly damaged by osteomyelitis. This interpretation is based on comparisons between the patterns seen in this specimen and those of extant organisms. 
It is not possible to determine whether this infection caused the death of the individual, but it may have been a major contributing factor, because it appears to have been an active pathology at the time of death and, in some extant lizards, oral osteomyelitis poses a serious health threat (Mehler and Bennett 2003 ). The dental abscess identified here in the Early Permian L. hamatus predates the previous record for dental pathology in a terrestrial vertebrate, reported for Late Cretaceous hadrosaurid dinosaurs (Moodie 1930 ), by nearly 200 million years. The presence of dental pathology in a reptile that had greatly reduced its tooth replacement pattern is particularly interesting. Among Paleozoic terrestrial vertebrates, lifelong cycles of tooth replacement represent the normal, primitive condition (Edmund 1960 ). This pattern extends to early amniotes, organisms that include the distant ancestors of most higher vertebrates, such as extant mammals, birds, and reptiles, as well as dinosaurs and marine and flying reptiles. This ancient, primitive tooth replacement pattern was modified in various groups, either by greatly reducing or eliminating replacement cycles (mammals and some reptiles, like the tuatara, respectively) or by disposing of dentition entirely (turtles and most birds). This evolutionary innovation also occurred within Captorhinidae, the oldest known such example in the fossil record of terrestrial vertebrates. Our knowledge of this group of ancient reptiles, one of the best-known clades of early terrestrial vertebrates, allows us to place this innovation within a broader evolutionary context. The generally accepted phylogenetic relationships among Captorhinidae (Fig. 2 ) indicate that the reduction in tooth replacement cycles occurred within this family. Early basal members of the clade are small insectivorous and carnivorous predators and have the normal patterns of continual tooth replacement (Modesto 1996 ; Müller et al. 
2006 , 2007 ), whereas the more derived omnivorous and herbivorous members of the clade (Sues and Reisz 1998 ) have modified and reduced the replacement cycles as part of an evolutionary strategy of developing deeply implanted teeth that are strongly ankylosed to the mandibles (Dodick and Modesto 1995 ; Jalil and Dutuit 1996 ). Multiple tooth rows subsequently appear to have evolved at least twice within this group, independently of each other (Dodick and Modesto 1995 ). Clearly, the multiple tooth rows in the upper and lower jaws occluding against each other created a system of oral processing that was superior to that employed by other organisms that used single rows of teeth for occlusion and oral processing (Sues and Reisz 1998 ; Reisz and Sues 2000 ; Reisz 2008 ). Interestingly, an independently evolved reduction in cycles of tooth replacement and dental occlusion for oral processing occurred in synapsids, in the line towards mammals (Rybczynski and Reisz 2001 ). However, the reduction in synapsids appears to be coupled not only with herbivory but also with the evolution of precise dental occlusion in small carnivorous and insectivorous forms, with deeply implanted teeth and deep, multiple roots (Reisz and Sues 2000 ). The obvious success of captorhinids, the first reptiles to diversify extensively and expand globally, suggests that the deep implantation and strong attachment (ankylosis) of the teeth into the jaw probably represented a significant evolutionary advantage. The reduction in tooth replacement also allowed for the evolution of multiple-tooth-rowed forms through the addition of rows of teeth without any replacement (Bolt and DeMar 1975 ), the first such occurrence in terrestrial vertebrates. However, if dental damage occurred in large, adult individuals, there was no readily available mechanism to replace the tooth, as there was in the great majority of other Paleozoic amniotes that had continuous replacement cycles. 
Thus, the opportunity for mandibular infection from prolonged exposure to oral bacteria was much greater in this reptile than in other Paleozoic amniotes. This allows us to speculate that our own human system of partial diphyodonty, although of obvious advantage because of its precise dental occlusion and extensive oral processing, is more susceptible to infection than that of our distant ancestors that had a continuous cycle of tooth replacement. Finally, the discovery of dental and mandibular infection from bacteria in a 275-million-year-old reptile indicates that interactions between terrestrial amniotes and their microbiota have a very long history, a feature of vertebrate evolution that has begun to attract the attention of the broad scientific and medical community relatively recently (Ley et al. 2006 ; Dethlefsen et al. 2006 , 2007 ).
A reptile that lived 275 million years ago in what is now Oklahoma is giving paleontologists a glimpse of the oldest known toothache. Led by Professor Robert Reisz, the chair of the Department of Biology at the University of Toronto Mississauga, scientists found evidence of bone damage due to oral infection in Paleozoic reptiles as they adapted to living on land. Their findings, published online in the journal Naturwissenschaften – The Science of Nature, predate the previous record for oral and dental disease in a terrestrial vertebrate by nearly 200 million years. "Not only does this fossil extend our understanding of dental disease, it reveals the advantages and disadvantages that certain creatures faced as their teeth evolved to feed on both meat and plants," says Reisz. "In this case, as with humans, it may have increased their susceptibility to oral infections." The researchers investigated the jaws of several well-preserved specimens of Labidosaurus hamatus, a 275-million-year-old terrestrial reptile from North America. One specimen stood out because of missing teeth and associated erosion of the jaw bone. With the aid of CT-scanning, Reisz and colleagues found evidence of a massive infection. This resulted in the loss of several teeth, as well as bone destruction in the jaw in the form of an abscess and internal loss of bone tissue. As the ancestors of advanced reptiles adapted to life on land, many evolved dental and cranial specializations to feed more efficiently on other animals and to incorporate high-fiber plant leaves and stems into their diet. The primitive dental pattern, in which teeth were loosely attached to the jaws and continuously replaced, changed in some animals. Teeth became strongly attached to the jaw, with little or no tooth replacement. This was clearly advantageous to some early reptiles, allowing them to chew their food and thus improve nutrient absorption. 
The abundance and global distribution of Labidosaurus and its kin suggest that it was an evolutionary success. However, Reisz and his colleagues suggest that as this reptile lost the ability to replace teeth, the likelihood of infections of the jaw, resulting from damage to the teeth, increased substantially. This is because prolonged exposure of the dental pulp cavity of heavily worn or damaged teeth to oral bacteria was much greater than in other animals that quickly replaced their teeth. Reisz notes that human susceptibility to oral infection has some parallels to those of ancient reptiles that evolved to eat a diet incorporating plants in addition to meat. "Our findings suggest that our own human system of having just two sets of teeth, baby and permanent, although of obvious advantage because of its ability to chew and process many different types of food, is more susceptible to infection than that of our distant ancestors that had a continuous cycle of tooth replacement."
DOI 10.1007/s00114-011-0792-1
Nano
Reducing pesticide use with nanoparticles
Mohamed El-Shetehy et al. Silica nanoparticles enhance disease resistance in Arabidopsis plants, Nature Nanotechnology (2020). DOI: 10.1038/s41565-020-00812-0 Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/s41565-020-00812-0
https://phys.org/news/2020-12-pesticide-nanoparticles.html
Abstract In plants, pathogen attack can induce an immune response known as systemic acquired resistance that protects against a broad spectrum of pathogens. In the search for safer agrochemicals, silica nanoparticles (SiO 2 NPs; food additive E551) have recently been proposed as a new tool. However, initial results are controversial, and the molecular mechanisms of SiO 2 NP-induced disease resistance are unknown. Here we show that SiO 2 NPs, as well as soluble Si(OH) 4 , can induce systemic acquired resistance in a dose-dependent manner, which involves the defence hormone salicylic acid. Nanoparticle uptake and action occurred exclusively through the stomata (leaf pores facilitating gas exchange) and involved extracellular adsorption in the air spaces in the spongy mesophyll of the leaf. In contrast to the treatment with SiO 2 NPs, the induction of systemic acquired resistance by Si(OH) 4 was problematic since high Si(OH) 4 concentrations caused stress. We conclude that SiO 2 NPs have the potential to serve as an inexpensive, highly efficient, safe and sustainable alternative for plant disease protection. Main Nanoagrochemicals are a promising tool to improve crop yield and thus global food security 1 . Silica nanoparticles (SiO 2 NPs) have been proposed for the controlled nanodelivery of silicon (Si) and other active ingredients to plants, but they have never been systematically tested for this purpose. Si from orthosilicic acid (Si(OH) 4 , also known as monosilicic acid)—the hydrolytic degradation product of SiO 2 NPs—is the only known form of Si bioavailable for plants, and it is ubiquitous in soil pore water 2 , 3 , 4 . Si(OH) 4 can promote plant growth and plant resistance against biotic and abiotic stresses 3 , 5 , thereby protecting plants against pathogen attacks or agricultural damages related to severe climate conditions 3 , 6 , 7 . 
The uptake and movement of SiO 2 NPs as well as other engineered nanomaterials in plants have been intensively studied in the past decade 7 , 8 , 9 , 10 . However, it is uncertain how the nanoparticles interact with leaves at the subcellular level. Direct evidence by nanometre-resolution imaging for the entrance of intact nanoparticles into leaves, or the intercellular movement of SiO 2 NPs within leaves, is mostly missing 10 . It is also not known whether SiO 2 NPs can induce resistance in plants, whether their performance differs from dissolved Si species and which molecular pathways they may induce. To fend off potential pathogens, plants have evolved disease resistance mechanisms that share mechanistic principles with the innate immunity of animals 11 . An especially interesting form of plant disease resistance is the so-called induced resistance in which the disease resistance of the plant can be enhanced by previous exposure to beneficial rhizosphere microorganisms, avirulent and virulent pathogens, or specific resistance-inducing chemical compounds 12 , 13 , 14 . A hallmark of induced resistance is its activity against a broad spectrum of pathogens. While the induction of plant disease resistance using chemical compounds is relatively well understood 12 , the benefit of using slow nano-enabled delivery systems for the same purpose has not been investigated via systematic experiments 1 , 7 . A special form of induced resistance is systemic acquired resistance (SAR) that is characterized by the spread of locally induced disease resistance to the whole plant 15 , 16 . SAR is induced in all plant parts after locally challenging the plant with a pathogen or by the local application of so-called resistance-inducing compounds. Both these treatments induce signal transduction pathways that lead to the production of signals moving to distant tissues 14 . 
A key signalling compound that contributes to SAR is the plant hormone salicylic acid (SA) that is responsible for the activation of pathogenesis-related (PR) genes 16 , 17 . Other factors include, for example, nitric oxide and reactive oxygen species 18 , 19 . The fact that SAR can be activated by the application of resistance-inducing compounds 12 , 13 makes SAR an attractive alternative strategy for controlling crop pests without the need for using irreversible genetic modifications or environmentally problematic pesticides. SAR-inducing compounds such as benzothiadiazole successfully enhance disease resistance, but also reduce crop yields 20 , 21 . Interestingly, Si-based compounds also seem to have the capacity to induce disease resistance via a broad range of different and partially still unknown mechanisms, including the mechanical reinforcement of defensive structures of the plant architecture, most notably the cell wall 3 , 22 , but also the activation of biochemical defences 3 , 23 . For example, biochemically, root-applied Si led to a broad-spectrum resistance against powdery mildew pathogen by increasing the activity of defence-related enzymes in leaves 24 . It is important to note that the protective effect of Si seems to have—in contrast to other biostimulants such as benzothiadiazole—no negative effects on the growth and yield of plants 3 , 25 . All this makes Si an attractive candidate to strengthen plant stress tolerance. Initial studies found that SiO 2 NPs may induce stress tolerance similar to conventional Si products, but a clear mechanistic understanding of the underlying processes is still lacking 7 , 8 , 26 , 27 . In this Article, we demonstrate the potential of SiO 2 NPs in inducing local and systemic disease resistance in the widely used model plant Arabidopsis thaliana against the bacterial pathogen Pseudomonas syringae . 
Silicic acid was assessed in parallel to disentangle the potential differences in the mode of action of dissolved Si species compared with SiO 2 NPs. We assessed the role of SA and reactive-oxygen-species defence-related genes, established the therapeutic concentration range of SiO 2 NPs to induce the desired beneficial effects in plants, compared the laboratory setup (infiltration of selected leaves) with the more realistic spray application and visualized the nanoparticle–leaf interactions using transmission electron microscopy (TEM), with important implications for future strategies to apply nanoscale active ingredients for slow release in leaves. SiO 2 NPs and subcellular distribution within the leaf The SiO 2 NP suspensions used for the dosing of plants (Fig. 1 ) were well dispersed with a hydrodynamic particle size of 76.7 ± 0.8 nm (average ± standard deviation) and a polydispersity index of 0.07. The primary particle size, as determined by TEM, was 54 ± 7 nm (average ± standard deviation). The interaction of the nanoparticles with the plant was assessed by TEM (Fig. 2 ) 2 d after the application of SiO 2 NPs. Preliminary experiments showed that at this time point, the SiO 2 NP-exposed plants had already developed resistance. The size range of ~50–70 nm of the nanoparticles allowed them to enter the leaf exclusively through the stomata and distribute within the large extracellular air spaces of the spongy mesophyll without penetrating any cell walls (Fig. 2 and Supplementary Fig. 1 ). The SiO 2 NPs remained within the air spaces of the leaf during the 2 d between their application and the time point of TEM observation. At the same time, the size of the nanoparticles prevented (undesirable) nanoparticle uptake into the cytoplasm as well as cell-to-cell translocation through the plasmodesmata (Fig. 2b ). 
This is in line with previous studies in the literature based on the nanometre-resolution imaging of nanoparticles in plants, suggesting that the cutoff for root–shoot nanoparticle translocation is at approximately <36 nm and it is <15–40 nm (basal size exclusion limits of ~3–4 nm) for cell-to-cell plasmodesmata transport 10 . Compared with the fully closed stomata in the control plants (samples were kept in the dark for fixation), the nanoparticle-treated plants showed incompletely closed stomata as the nanoparticles were stuck in between the guard cells (Fig. 2b ). Fig. 1: SiO 2 NPs under investigation. a , TEM image of the particles. b , Particle size distribution based on the TEM image analysis. c . DLS measurements of the SiO 2 NPs. The hydrodynamic radius is consistent with the primary particle size shown in a and b . PDI, polydispersity index. Averages ± standard deviations. For the DLS measurements, number of measurements N = 10. Full size image Fig. 2: TEM of SiO 2 NP distribution and physiological effects in Arabidopsis leaves. Red arrows and dots, nanoparticles. Comparison between the spray application used in the field and for local defence assays and the infiltration application used in laboratory studies. Images obtained when the plants had already developed resistance 2 d after exposure to SiO 2 NPs. a , Control leaves only treated with the buffer solution. b , TEM overview image and zoomed-in views of the stoma and cell–air space interface. False colours: red, cell wall (apoplast); green, cytoplasm (symplast); blue, spaces filled with air. SiO 2 NP-sprayed leaf at a higher resolution shows that the stomata are not tightly closed anymore due to nanoparticle uptake and clogging. Nanoparticles entered through the stomata into the air spaces of the leaf, and they were also found to be extracellularly adsorbed on the outer edge of the cell walls in the air gaps of the spongy mesophyll; they were absent in the cytoplasm (intracellular space). 
A higher-resolution TEM image is shown in Supplementary Fig. 1 . Full size image Exogenous application of SiO 2 NPs confers SAR The local defence responses to virulent P. syringae of Arabidopsis sprayed with SiO 2 NPs or a control treatment were quantified via bacterial growth on leaves (Fig. 3a ). Due to the lack of the avrRpt2 gene in the virulent P. syringae that is needed by the Resistance to Pseudomonas syringae protein 2 ( RPS2 ) resistance gene in Arabidopsis to induce a strong plant defence against P. syringae 28 , 29 , a severe infection would be expected. However, a pronounced infection occurred only in the control treatment. Plants sprayed with SiO 2 NPs showed an eightfold improvement in basal resistance compared with the 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES)-buffer-treated control plants (Fig. 3a ), demonstrating that the SiO 2 NPs induced local defence in the plant within 24 h (the nanoparticles were applied 24 h before inoculation with the virulent P. syringae ). Fig. 3: Enhanced local and systemic disease resistance in wild-type Col-0 Arabidopsis to P. syringae induced by SiO 2 NPs or Si(OH) 4 . The bacteria in the leaves were quantified 0 and 3 dpi. a , Growth of virulent P. syringae in leaves. The plants were sprayed with different treatments, and virulent P. syringae was inoculated 24 h later. b , SAR in distal leaves. Plants locally infiltrated with different treatments. 48 h later, virulent P. syringae was inoculated on untreated systemic leaves. c , SAR in distal leaves; repetition of the experimental setup in b with an additional Si(OH) 4 treatment. d , No effect of SiO 2 NPs and Si(OH) 4 on the in vitro growth of virulent P. syringae bacteria in the absence of the plant. e , Phenotype of the Arabidopsis plants. Plants pretreated with the HEPES buffer (control), SiO 2 NPs or Si(OH) 4 (1,000 mg SiO 2 l –1 each). 
Note that the yellow leaves in the plant exposed to Si(OH) 4 coincide with the upregulated expression of the oxidative stress marker gene shown in Fig. 4c . In a – d , all the experiments were performed twice with comparable results. Bars and whiskers are averages and standard deviations; N = 3; one-way analysis of variance (ANOVA); post hoc least significant difference; P < 0.01. Full size image The systemic responses of wild-type Arabidopsis plants to SiO 2 NPs and dissolved Si species are reflected in the inhibited bacterial growth, as shown in Fig. 3b,c . The positive control showed that plants previously infiltrated with the avirulent P. syringae , which is known to induce SAR, expectedly contained tenfold less virulent P. syringae compared with magnesium chloride (MgCl 2 )- or HEPES-preinfiltrated plants (Fig. 3b,c ). Remarkably, treating local leaves with SiO 2 NPs led to comparable systemic protection against virulent P. syringae as observed in the positive avirulent P. syringae control (Fig. 3a ), which is equal to >90% bacterial inhibition. It is highly unlikely that a local response to SiO 2 NPs or Si(OH) 4 in the distal tissue has caused this resistance because of the observed distribution and very slow dissolution of SiO 2 NPs (Fig. 2 and Supplementary Fig. 1 ) and the passive transport 30 and high reactivity of Si(OH) 4 . This shows that treating Arabidopsis with SiO 2 NPs induced local and systemic resistance to P. syringae . It is well known that Si(OH) 4 improves plant defences against different plant pathogens such as fungi, bacteria and viruses 5 , 7 . We, therefore, also tested SAR in response to Si(OH) 4 (Fig. 3c ) and found that treatment with Si(OH) 4 was able to induce SAR. These results suggest that Si(OH) 4 released from SiO 2 NPs is at least partially responsible for the SAR-inducing ability of SiO 2 NPs and that the SiO 2 NPs can act as a slow-release source for Si(OH) 4 . 
Measuring the exact amount of free Si(OH) 4 directly in planta is challenging due to the low concentrations and fragile equilibrium of the dissolved Si(OH) 4 and Si oligomers and the solid SiO 2 species 2 , 4 , 31 ( Supplementary Information , ‘Details on Si(OH) 4 analytics’). We, therefore, resorted to direct TEM imaging of the nanoparticles in the plants; at high resolution, abundant intact SiO 2 NPs were observed in the stomata 2 d after the SiO 2 NP treatments (Fig. 2 ). This demonstrates that the plants had not degraded the nanoparticles by the time of inoculation with virulent P. syringae (in all the assays, the nanoparticles were applied at least 24 h before inoculation). The slow nanoparticle dissolution is in line with the slow dissolution kinetics of the SiO 2 NPs measured previously in water (half-life of ~66 d at pH 7) 32 . To test whether SiO 2 NPs and Si(OH) 4 have a direct toxic effect on bacterial growth, virulent P. syringae was cultivated in vitro in the presence or absence of SiO 2 NPs or Si(OH) 4 at the lowest fully effective dose of SiO 2 NPs of 100 mg l −1 . At these concentrations, which induced strong defence in plants, neither SiO 2 NPs nor Si(OH) 4 alone harmed the growth of the virulent P. syringae bacteria (Fig. 3d ), demonstrating that SiO 2 NPs induce resistance by activating plant defence responses and not by directly inhibiting bacterial growth. Dose dependence of SAR response The SAR was further tested in response to different concentrations of SiO 2 NPs or Si(OH) 4 ; for additional validation, a second bacterial growth quantification method 33 based on bacterial DNA was used (Fig. 4a,b and Supplementary Table 1 ). 
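The dose–response analysis in this section fits a standard log-logistic model to percent bacterial inhibition and reads off the EC50. A minimal sketch with SciPy, using invented concentration–inhibition pairs (not the measured Arabidopsis data); the final line shows the Si-to-SiO2 unit conversion (molar mass of SiO2 ≈ 60.08 g mol−1) that links an EC50 in mM Si to the mg SiO2 l−1 units used in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-parameter log-logistic (Hill-type) dose-response model. The data
# points below are synthetic, for illustration only; they are NOT the
# measured Arabidopsis / P. syringae values.
def log_logistic(conc_mM, ec50, hill):
    """Percent bacterial inhibition as a function of Si concentration (mM)."""
    return 100.0 / (1.0 + (ec50 / conc_mM) ** hill)

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])       # mM Si (synthetic)
inhib = np.array([5.0, 12.0, 30.0, 52.0, 78.0, 92.0])  # % inhibition (synthetic)

(ec50, hill), _ = curve_fit(log_logistic, conc, inhib,
                            p0=[0.4, 1.0], bounds=(0, np.inf))

# 1 mmol Si per litre corresponds to 1 mmol SiO2 = 60.08 mg, so e.g.
# 0.4 mM Si maps to ~24 mg SiO2 per litre, as in the text.
ec50_mg_sio2_per_l = ec50 * 60.08
print(f"EC50 = {ec50:.2f} mM Si = {ec50_mg_sio2_per_l:.0f} mg SiO2 per litre")
```

Above the fitted dynamic range the measured response deviates from this monotonic curve (infection increases again at high doses), which is why the model is only applied within the 25–100 mg SiO2 l−1 window.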
Treatment with SiO 2 NPs at a concentration of 25 mg SiO 2 l –1 already resulted in a partial (29%) reduction of bacterial growth in systemic leaves, and treatment with 100 mg SiO 2 l –1 resulted in maximum protection (>90%) compared with the positive control plants preinfiltrated with avirulent P. syringae (Fig. 4a ). As the concentration series in Fig. 4a shows, concentrations of SiO 2 NPs exceeding 1,600 mg SiO 2 l –1 led to increased bacterial infection and were thus less effective in activating SAR. Pretreatment with 5 mg SiO 2 l –1 of Si(OH) 4 (concentration normalized to SiO 2 l −1 for the sake of comparability) led to a reduction of 81% in the bacterial numbers compared with the positive control. Maximum protection, with a reduction similar to the control plants preinfiltrated with avirulent P. syringae , was achieved at concentrations between 20 and 320 mg SiO 2 l –1 . A higher concentration of 640 mg SiO 2 l –1 was less effective, and a concentration of 2,560 mg SiO 2 l –1 was ineffective in inducing SAR, demonstrating a detrimental effect of higher concentrations of Si(OH) 4 on SAR induction. Fig. 4: SiO 2 NPs confer SAR in a dose-dependent manner. Distal leaves of wild-type Col-0 Arabidopsis treated with the control, SiO 2 NPs or Si(OH) 4 . a , SAR in plants locally infiltrated with different treatments. After 48 h, virulent P. syringae was inoculated on untreated systemic leaves. The bacteria in the leaves were quantified 0 and 3 dpi. b , qPCR transcript levels of the oprF gene from virulent P. syringae using DNA templates extracted from the inoculated leaves. c , RT–qPCR transcript levels of the oxidative stress marker gene AtHSP17.4C1 in response to different treatments. Plants were locally infiltrated with different treatments. Leaves sampled 48 h after treatments. Reference gene, At4g26410 ( expG ). Bars and whiskers are averages and standard deviations; N = 3; one-way ANOVA; post hoc least significant difference; P < 0.05. 
All the experiments in a – c were performed twice with comparable results. Full size image The data in Fig. 4 served to establish a dose–response relationship between SAR and the SiO 2 NP concentration (Fig. 5a ). Using a standard log-logistic dose–response model, the dynamic range and the effective concentration at 50% bacterial inhibition (EC50) was determined as 0.4 ± 0.04 mM Si (average ± standard deviation) for SiO 2 NPs (that is, 24 mg SiO 2 l –1 ; Supplementary Fig. 2 shows the residual analysis and Supplementary Table 2 lists the fitting parameters) in a range of 25–100 mg SiO 2 l –1 . For spraying, the EC50 may be similar to the injected SiO 2 NPs, as both the local (sprayed) and the systemic (injected) assays at 100 mg SiO 2 l –1 resulted in disease resistance (Fig. 3 and Fig. 6a,b ). Fig. 5: Dynamic range for SAR induced in distal leaves by SiO 2 NPs in A. thaliana , and model summarizing the observed plant-defence-enhancing actions of SiO 2 NPs and Si(OH) 4 . a , Data from Fig. 4a . SiO 2 NP-triggered dose-dependent bacterial inhibition 3 d after infection of wild-type A. thaliana with virulent P. syringae . The EC50 value was 0.40 ± 0.04 mM Si (average ± standard deviation) for SiO 2 NPs (that is, 24 mg SiO 2 l –1 ). Above the dynamic range, the bacterial infection can increase again (Fig. 4a ). Six data points at 0 mM Si are not shown due to the nature of the log axis, but they are apparent in the detailed residual analysis shown in Supplementary Fig. 2 and Supplementary Table 2 . C Si , Si concentration in mM. b , SiO 2 NPs act by (1) slowly releasing Si(OH) 4 into cells, triggering SA, and thus local defence and SAR; (2) clogging stomata, triggering SA and subsequent defences. Absence of intracellular nanoparticles confirmed by electron microscopy (Fig. 2 and Supplementary Fig. 1 ). c , Si(OH) 4 instantly diffuses into cells, triggering SA and subsequent local defence and SAR. 
However, the instant uptake causes overdose, stress and compromised defences. Both mechanisms are shown after treatment with the same amount of SiO 2 equivalents (1,000 mg SiO 2 l –1 ). SA, plant hormone regulating SAR and PR-1 / 5 gene expression; PR-1 / 5 , genes encoding PR proteins 1 and 5; HSP17.4C1 , heat shock protein and oxidative stress marker gene. Fig. 6: SiO 2 NPs induce disease resistance based on the SA-dependent pathway. Experiments in Arabidopsis wild-type Col-0 and sid2 . The bacteria in the leaves were quantified 0 and 3 dpi. a , A. thaliana wild-type Col-0 and sid2 were locally infiltrated with different treatments. After 24 h of these treatments, virulent P. syringae was inoculated. b , SAR in the distal leaves of wild-type Col-0 and mutant sid2 . Plants were locally infiltrated with different treatments. After 48 h of these treatments, virulent P. syringae was inoculated. c , d , RT–qPCR analysis of the gene expression of the SA-regulated genes AtPR-1 ( c ) and AtPR-5 ( d ) in response to different local treatments of wild-type Arabidopsis . Leaves were sampled 48 h after treatments. Reference gene, At4g26410 ( expG ). Bars and whiskers are averages and standard deviations; N = 3; one-way ANOVA; post hoc least significant difference; P < 0.02. All the experiments in a – d were performed twice with comparable results. The results based on counting bacterial colonies were confirmed by estimating the bacterial biomass based on a quantitative PCR (qPCR) analysis of the bacterial outer membrane protein gene oprF (Fig. 4b ). The bacterial DNA levels were in good agreement with the bacterial colony counting results shown in Fig. 4a , in line with previous research that compared the two techniques 33 . In contrast to SiO 2 NPs, higher concentrations of Si(OH) 4 adversely affected the phenotype of the treated plants (Fig. 3e ). 
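The log-logistic dose–response fitting described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual Levenberg–Marquardt pipeline: a coarse grid search stands in for the nonlinear fit, and the data points are synthetic values generated from the reported parameters (EC50 = 0.40 mM Si, ~90% maximal inhibition).

```python
def log_logistic(c, top, ec50, hill):
    """Fraction of maximal bacterial inhibition at Si concentration c (mM),
    following a standard log-logistic dose-response curve."""
    if c == 0:
        return 0.0
    return top / (1.0 + (ec50 / c) ** hill)

# Synthetic observations generated from the reported parameters
# (EC50 = 0.40 mM Si, ~90% maximal inhibition); illustrative only.
true_top, true_hill = 0.9, 2.0
doses = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2]  # mM Si
obs = [log_logistic(c, true_top, 0.40, true_hill) for c in doses]

def sse(ec50_candidate):
    """Sum of squared errors of the model against the observations."""
    return sum((log_logistic(c, true_top, ec50_candidate, true_hill) - y) ** 2
               for c, y in zip(doses, obs))

# Coarse grid search standing in for the Levenberg-Marquardt fit.
ec50_fit = min((i / 1000 for i in range(100, 1000)), key=sse)

print(f"fitted EC50 = {ec50_fit:.2f} mM Si")          # -> 0.40 mM Si
print(f"= {ec50_fit * 60.08:.0f} mg SiO2 per litre")  # molar mass of SiO2 ~ 60.08 g/mol -> 24
```

The conversion in the last line shows why 0.40 mM Si corresponds to the 24 mg SiO 2 l –1 quoted in the text.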
At a concentration of 1,000 mg SiO 2 l –1 , the leaves of the plants treated with Si(OH) 4 showed signs of chlorosis (yellowing), whereas the leaves of the plants treated with SiO 2 NPs looked healthy (Fig. 3e ). This different behaviour at higher concentrations prompted us to further investigate the negative effect of higher concentrations of SiO 2 NPs and Si(OH) 4 . The expression level of the heat shock protein AtHSP17.4C1 , a molecular marker for oxidative stress 34 , was analysed by qPCR with reverse transcription (RT–qPCR). The HSP17.4C1 transcript levels were determined in response to avirulent P. syringae , SiO 2 NPs or Si(OH) 4 (100 and 1,000 mg SiO 2 l –1 ; Fig. 4c ) 2 d after the treatments. Treatment with avirulent P. syringae caused a minor increase (2.7-fold) in AtHSP17.4C1 expression compared with the control. Similarly, treatment with SiO 2 NPs led to a 1.6-fold (100 mg SiO 2 l –1 ) and twofold (1,000 mg SiO 2 l –1 ) increase in transcript abundance relative to the control treatment that was not statistically significant. However, treatment with higher concentrations of Si(OH) 4 caused stress, as the transcript levels of the oxidative stress marker gene HSP17.4C1 were induced ninefold at a concentration of 100 mg SiO 2 l –1 and 18-fold at 1,000 mg SiO 2 l –1 . SiO 2 NP-mediated SAR depends on SA The plant hormone SA plays a core regulatory role in plant immunity 35 . Thus, to determine whether SiO 2 NPs confer SAR via the SA-dependent defence pathway, we tested the ability of SiO 2 NPs to induce local disease resistance and SAR in an Arabidopsis mutant defective in SA biosynthesis (SA induction–deficient 2 ( sid2 ) 36 ). Notably, neither Si(OH) 4 nor SiO 2 NPs induced local disease resistance or SAR in sid2 mutant plants, while they induced basal disease resistance and SAR in wild-type plants (Fig. 6a,b ), demonstrating that SA-dependent defence signalling is essential for Si(OH) 4 - and SiO 2 NP-induced disease resistance. 
To further support this result, we next quantified the expression of the SA-responsive marker genes PR protein 1 ( PR-1 , gene AtPR-1 ) and PR-5 (gene AtPR-5 ) in wild-type plants (Fig. 6c,d ). Similar to treatment with avirulent P. syringae , treatment with Si(OH) 4 and SiO 2 NPs resulted in an up to 30-fold and 6-fold increase in the transcript abundance of AtPR-1 (Fig. 6c ) and AtPR-5 (Fig. 6d ), respectively, compared with the control treatments. Hence, both Si(OH) 4 and SiO 2 NPs activated SA-dependent defence reactions. Although SiO 2 NPs triggered lower AtPR-1 and AtPR-5 expression levels in comparison with avirulent P. syringae –infiltrated plants and Si(OH) 4 -treated plants, the inducing effect was sufficient to confer SAR. Implications for the mode of action of leaf-applied SiO 2 NPs The pathosystem involving Arabidopsis and the hemibiotrophic bacterial pathogen P. syringae offers an ideal model to investigate the effect of SiO 2 NPs and Si(OH) 4 on plant defence. Our results (summarized in the model in Fig. 5b,c ) show that the protective effect of SiO 2 NPs and Si(OH) 4 is based on the ability to induce basal resistance and SAR (Fig. 3a–c ) and not on direct toxic effects, as neither SiO 2 NPs nor Si(OH) 4 inhibited bacterial growth (Fig. 3d ). Our data are in line with earlier results suggesting that Si(OH) 4 and sometimes SiO 2 NPs can protect plants from different plant pathogens 7 , 26 , 27 , 37 ; however, here we show that the mechanism had no toxic effect on the pathogen but rather induced the defences of the plant. Both SiO 2 NPs and Si(OH) 4 induce SAR in a dose-dependent manner, leading to bacterial inhibition of >90% compared with the control plants treated only with the HEPES buffer or MgCl 2 . These results are consistent with previous results suggesting that SiO 2 NPs and Si(OH) 4 function in a dose-dependent manner in plants and animals 26 , 27 , 38 . 
However, instead of the previously proposed pesticidal action of SiO 2 NPs, we show here that the nanoparticles caused an increase in the plant defence. Our data suggest that the SiO 2 NPs used in the present study can be used to slowly release Si(OH) 4 to the plant from within the spongy mesophyll (Fig. 2 ), in close direct interaction with the diffusion layer on the plant cell walls, which is at least partially responsible for the SAR-inducing ability of SiO 2 NPs. Water (vapour) secreted from the plant cell wall or the plant-induced dissolution of SiO 2 NPs linked to increased secretory activity 10 (exudates) may have promoted the further dissolution of Si(OH) 4 . Based on the release rates of SiO 2 NPs that were determined earlier under conditions optimized for dissolution in a continuously depleted ultrapure water system (half-life of ~66 d at pH 7) 32 , a maximum of ~13% of the particles could have dissolved within 48 h of SiO 2 NP exposure. Si-containing reaction byproducts of the nanoparticle synthesis were ruled out as playing a notable role in the induction of defence ( Supplementary Information , ‘Si reaction byproducts’). The maximum Si(OH) 4 released from SiO 2 NPs could, therefore, explain the bacterial inhibition; however, it cannot fully explain the lack of oxidative stress responses and the higher bacterial DNA levels for SiO 2 NPs in the plants (Fig. 4b ). The absence of peak Si(OH) 4 concentrations probably resulted in lower Si(OH) 4 toxicity for both bacteria and plants. Other effects, such as modulated evapotranspiration due to the blockage and incomplete closure of the stomata by the nanoparticles (Fig. 2 ), which can cause SA-related responses similar to drought stress 39 , and the close interaction of the nanoparticles with cells in the spongy mesophyll, may play an important role; this is in line with earlier research on stomata as ports of entry for pollutants and nanoparticles 40 , 41 . 
The exact relative contribution of each effect remains to be elucidated in follow-up studies. It is important to note that the cell walls in the mesophyll air spaces have very thin cuticular waxes, or lack them entirely 10 ; therefore, in contrast to the external leaf surface, the nanoparticles can interact directly with the cell wall and thus with the apoplast transport system, including the xylem. Irrespective of the detailed mechanism of the nanoparticles, this is of importance for any nanoagrochemical application aiming at the slow release of active ingredients, because nanoparticles in the extracellular spongy mesophyll air spaces (Fig. 2 and Supplementary Fig. 1 ) can interact with the leaf for extended periods without being washed away by rain. High concentrations of Si(OH) 4 caused the chlorosis of leaves indicative of stress (Fig. 3e ). An increased expression of the oxidative stress marker gene AtHSP17.4C1 (ref. 34 ) confirmed stress in the Si(OH) 4 treatment at 100 and 1,000 mg SiO 2 l –1 , as the transcript levels of AtHSP17.4C1 were more strongly induced compared with avirulent P. syringae or SiO 2 NP treatments (Fig. 4c ). Together, these data show that Si(OH) 4 was more toxic to plants than SiO 2 NPs. Hence, impaired SAR in plants treated with higher concentrations of Si(OH) 4 (Fig. 4a ) might be linked to enhanced oxidative stress, consistent with the fact that higher levels of nitric oxide and reactive oxygen species were shown to impair the induction of SAR 19 , 23 . For SiO 2 NPs, no substantial increase in the oxidative stress marker gene was found (Fig. 4c ). Impaired SAR for SiO 2 NPs occurred only at very high concentrations in the gram per litre range (Fig. 4a ), probably due to the excess release of Si(OH) 4 causing oxidative stress or the highly intense clogging of the stomata (Fig. 2 ) that disrupted evapotranspiration. While the low polydispersity index measured by dynamic light scattering (DLS) (Fig. 
1c ) indicates well-dispersed SiO 2 NP suspensions even at higher concentrations, heteroaggregation with mucilage in the stomata (upon contact with the leaf) and probably homoaggregation (at higher nanoparticle concentrations) appeared to promote the clogging of the stomata (Fig. 2a , red arrows). These results are in line with ref. 8 , according to which SiO 2 NP concentrations up to 1,000 mg SiO 2 l –1 were not phytotoxic despite the uptake of SiO 2 NPs into the root system of A. thaliana . Our results are also consistent with initial studies 42 , 43 that found stronger effects of SiO 2 NPs on plant growth than conventional silica fertilizers. In conclusion, the application of SiO 2 NPs can reduce the risk of overdosage. Our data demonstrate that SiO 2 NP- and Si(OH) 4 -mediated SAR acts via the activation of the SA-dependent defence pathway, which is a key component of basal disease resistance and SAR 44 , 45 . Neither SiO 2 NPs nor Si(OH) 4 induced resistance in sid2 , which is defective in SA biosynthesis (Fig. 6a,b ). The induction of resistance by SiO 2 NPs was comparable to the effect of Si(OH) 4 at intermediate concentrations, although the soluble fraction of Si(OH) 4 in this treatment was far lower, as the particles dissolved only partially in the plant, if at all (Fig. 2 ), suggesting that SiO 2 NPs can induce SA-dependent defence pathways as intact particles. Furthermore, the expression levels of two SA-responsive marker genes, namely, AtPR-1 and AtPR-5 , encoding PR-1 and PR-5 , respectively, were induced in response to SiO 2 NPs and Si(OH) 4 (Fig. 6c,d ). These results are in line with ref. 46 , which reported that the exogenous application of Si(OH) 4 induced SA biosynthesis in leaves exposed to the fungal pathogen Erysiphe cichoracearum . In addition, Si-primed tomato plants were protected against Ralstonia solanacearum via the upregulation of SA-controlled defence gene expression 47 . 
Although SiO 2 NPs triggered lower AtPR-1 and AtPR-5 expression levels than the plants infiltrated with avirulent P. syringae and the Si(OH) 4 -treated plants, the achieved level of expression was sufficient to confer a full SAR response. Conclusions The present results show that low concentrations of SiO 2 NPs efficiently protect the widely used model plant Arabidopsis from infection by the bacterial pathogen Pseudomonas , and they reveal the mode of action of SiO 2 NPs compared with the dissolved counterpart, Si(OH) 4 . The protective effect of SiO 2 NPs is mediated by the activation of SA-dependent plant immunity responses and is based partially on the slow release of Si(OH) 4 from nanoparticles entering through the stomata and distributing within the spongy mesophyll, and probably partially on intact-nanoparticle-induced SA-dependent responses. Compared with direct Si(OH) 4 application, SiO 2 NPs proved to be safer for the plant. They did not cause phytotoxicity even at concentrations tenfold higher than the minimal dose needed for plant protection and therefore have a broader therapeutic range than Si(OH) 4 . The lowest fully effective dose (100 mg SiO 2 l –1 ) is promising because it corresponds to an extrapolated field dose of only 3 kg SiO 2 ha –1 , corresponding to more than 1,000-fold material savings compared with the solid bulk SiO 2 treatments. This calculation assumes a typical 300 l ha –1 application (conventional aqueous spray volumes for pesticide application equipment 48 ) and an uncertainty factor of 100 for the concentration. Contrary to previous assumptions about the ability of nanoparticles to penetrate the cuticle, SiO 2 NP intake was clearly restricted to the stomata and the extracellular spongy mesophyll, confirming our hypothesis that the leaf cuticle represents an impermeable barrier to nanoparticles 10 , which is in line with earlier fundamental research 49 . 
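The field-dose extrapolation above is simple unit arithmetic; the sketch below reproduces the stated 3 kg SiO 2 ha –1 figure from the assumptions given in the text (300 l ha –1 spray volume, 100-fold uncertainty factor on the concentration).

```python
# Field-dose extrapolation from the assumptions stated in the text.
conc_mg_per_l = 100        # lowest fully effective dose (mg SiO2 per litre)
spray_l_per_ha = 300       # conventional aqueous spray volume (l per ha)
uncertainty_factor = 100   # safety margin applied to the concentration

# mg/l * l/ha * factor = mg/ha; divide by 1e6 to convert mg to kg.
dose_kg_per_ha = conc_mg_per_l * spray_l_per_ha * uncertainty_factor / 1e6
print(dose_kg_per_ha)      # -> 3.0 (kg SiO2 per hectare)
```

Without the 100-fold uncertainty factor, the nominal dose would be only 0.03 kg ha –1, which illustrates the size of the material savings over bulk SiO 2 treatments.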
The spongy mesophyll is an attractive target for the long-term deposition of slow-release nanoagrochemicals. Future research should extend the investigations to a broader spectrum of defence-related genes with other plant pathogens and to the biomechanical quantification of the physical effects of nanoparticles that affect leaf permeability and may trigger the SA-related responses. To further advance SiO 2 NPs as nanobiostimulants and fertilizers, the long-term effects of SiO 2 NPs on occupationally exposed agricultural workers and non-target organisms, such as beneficial soil microorganisms or bees, must be thoroughly analysed before broad commercial application, as should be the case with every material or organism used in agriculture. The potential risks of nanoagrochemicals and possible strategies for risk mitigation have been thoroughly reviewed previously 1 , 50 , 51 . Amorphous SiO 2 NPs have already been approved by the Food and Drug Administration as they are generally regarded as safe, and they are in use as dietary additives (E551) 52 in a broad range of foodstuffs such as table salt. The daily intake of nanoscale silica from food is estimated to be 1.8 mg kg –1 (ref. 53 ). Our own initial experiments with Caenorhabditis elegans nematodes used as model non-target organisms (Supplementary Fig. 3 ) have shown an ~36-fold lower ecotoxicity of SiO 2 NPs compared with liquid Si(OH) 4 preparations that have been in use for plant nutrition for decades. Thus, compared with currently used treatments, the present SiO 2 NPs, alone or in combination with other active ingredients, promise to offer a cost-effective, consumer-safe, tracelessly degradable and sustainable alternative to protect plants against pathogens via the controlled induction of SAR, without the negative effects on yield or non-target organisms associated with the action of previously described plant biostimulants or pesticides. Methods Plant growth conditions A. 
thaliana seeds were grown on Jiffy soil substrates (powered by Tref, Jiffy Products International). Two A. thaliana strains were grown: wild-type Columbia (Col-0) plants that carry an RPS2 locus responsible for the recognition of P. syringae strains expressing the avirulent gene avrRpt2 (refs. 28 , 29 ) and an A. thaliana mutant defective in SA biosynthesis ( sid2 (ref. 36 )). The seeds sown on the soil were kept at 4 °C for 2 d and then transferred to the growth chamber (RMC Tableaux SA). The plants were grown in a 12 h photoperiod with 60% relative humidity, with a day temperature of 22 °C and a night temperature of 18 °C (photon flux density, 100 µmol m –2 s –1 ). The transplanted seedlings were covered with transparent plastic domes for 2–3 d to allow the seedlings to adapt to the new soil. Four- to five-week-old plants were used in the experiments, because previous experiments had shown that under the abovementioned growth conditions, this is the optimal age of the plant to induce SAR 54 . Culture of P. syringae pv. tomato P. syringae pv. tomato bacteria were prepared by inoculating a single colony in 10 ml King’s B medium (1.5 g K 2 HPO 4 , 1.5 g MgSO 4 ·7H 2 O, 20 g tryptone and 10 ml glycerol per litre of water; Sigma-Aldrich; purity ≥99%) containing the appropriate antibiotics. A virulent and an avirulent strain of P. syringae were grown: P . syringae DC3000 (virulent P. syringae ) and P. syringae DC3000 expressing the avirulent gene avrRpt2 recognized by the A. thaliana RPS2 locus and inducing SAR (avirulent P. syringae ). The virulent P. syringae strain served to induce a strong infection with P. syringae in the plants. The avirulent P. syringae strain served as a positive control to induce SAR and thus actively suppressed bacterial growth in the A. thaliana plants via recognition of the bacterial avrRpt2 gene by the plant’s RPS2 gene (refer to ref. 29 for a detailed description of the pathosystem). The virulent P. 
syringae was grown with rifampicin (25 μg ml –1 ) and the avirulent P. syringae was grown with kanamycin (50 μg ml –1 ) and rifampicin (25 μg ml –1 ). After overnight incubation in a shaker at 28 °C in the dark (Kuhner LT-W Lab Therm Table Top Incubator Shaker, Adolf Kühner AG), the cells were centrifuged at 3,000 r.p.m. for 10 min, and the pellet was suspended in 10 mM MgCl 2 . The cell density was calculated by measuring the light absorption of the liquid culture using a spectrophotometer (BioPhotometer, Eppendorf) at the absorption wavelength of 600 nm and by counting the colonies plated on King’s B agar (raw data are publicly available 55 ). Inoculation procedures for local disease resistance For a local disease resistance assay, three leaves per A. thaliana plant were inoculated with the virulent P. syringae bacteria, and the plants were incubated under the standard A. thaliana growth conditions described above. The inoculation with the virulent P. syringae bacteria was operationally defined as 0 d post inoculation (dpi). After inoculation, leaf discs (4 mm) were collected from the inoculated leaves at 0 and 3 dpi using a cork borer (three leaf discs from different plant leaves per sample). The leaf discs were ground and homogenized with pestles in 10 mM MgCl 2 and the undiluted (0 dpi) or the 1,000-fold diluted (3 dpi) homogenates were plated on King’s B agar plates (King’s B medium as above with 15 g l –1 agar). The plates were incubated at 28 °C in the dark for 48 h. Then the bacterial colonies were counted (raw data are publicly available 55 ). Inoculation procedures for SAR assays For an SAR assay, three leaves of four- to five-week-old wild-type Col-0 plants were infiltrated with 10 mM MgCl 2 (negative control) or the avirulent P. syringae bacteria at 10 6 colony-forming units (CFU) per millilitre in 10 mM MgCl 2 (positive control). After 48 h, the distal leaves were inoculated with the virulent P. syringae bacteria (10 5 CFU ml –1 ). 
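The plate counts described above translate into CFU titres by scaling the colony count for the dilution and plating volumes. The sketch below is illustrative only: the homogenate and plated volumes are hypothetical defaults, as the Methods do not state them.

```python
def cfu_per_disc(colonies, dilution_factor, homogenate_ul=200, plated_ul=10):
    """Back-calculate CFU per leaf-disc homogenate from a plate count.
    The two volume defaults are hypothetical, chosen for illustration;
    they are not values stated in the Methods."""
    return colonies * dilution_factor * homogenate_ul / plated_ul

# 3 dpi homogenates were plated after a 1,000-fold dilution:
print(cfu_per_disc(42, 1000))   # -> 840000.0 CFU per disc
# 0 dpi homogenates were plated undiluted:
print(cfu_per_disc(42, 1))      # -> 840.0 CFU per disc
```

The 1,000-fold dilution at 3 dpi keeps heavily infected samples within a countable colony range, which is the point of the two-step plating scheme.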
The inoculation with the virulent P. syringae bacteria was operationally defined as 0 dpi. Leaf discs (4 mm) were collected from the distal leaves at 0 and 3 dpi using a cork borer (three leaf discs from different plant leaves were analysed three times for each treatment). The leaf discs were ground in 10 mM MgCl 2 , and the undiluted (0 dpi) or 1,000-fold diluted (3 dpi) homogenates were plated on King’s B agar and incubated at 28 °C for 48 h in the dark (SalvisLab incubator). Then the bacterial colonies were counted (raw data are publicly available 55 ). For details about this procedure, refer to ref. 18 . Plant treatments The SiO 2 NPs (25, 100, 400 and 1,600 mg SiO 2 l –1 at pH 7) and Si(OH) 4 (5, 20, 80, 100, 320, 640 and 2,560 mg SiO 2 l –1 at pH 7 from an aqueous potassium silicate stock solution; K 2 O:SiO 2 of 1:2.60; SiO 2 content, 20.8 wt%; MonDroguiste) were prepared in sterile, distilled water in HEPES buffer (1 mM, pH 7, 99.5%; Sigma-Aldrich). The Si(OH) 4 concentrations were expressed in mg SiO 2 l –1 to facilitate a direct comparison of the effects of dissolved Si(OH) 4 and solid SiO 2 NPs without having to take into account the different molecular weights. For the local disease resistance assay, the plants were sprayed with these chemicals 24 h before inoculation with virulent P. syringae . For the SAR assays, all these chemicals were injected abaxially (from the bottom of the leaf) into Arabidopsis plant leaves 2 d before inoculation using 1 ml needleless sterile disposable syringes. SiO 2 NPs and subcellular distribution within the leaf The SiO 2 NPs were synthesized and characterized according to a previously established procedure 31 , 32 adapted from an earlier work 56 . 
Briefly, one equivalent of tetraethyl orthosilicate (10 ml, >99%; Sigma-Aldrich) was added to an equilibrated reaction mixture at 70 °C containing two equivalents of ultrapure water (Milli-Q, 18.2 MΩ arium 611 DI, Sartorius Stedim Biotech), and absolute ethanol (81 ml) as a solvent under basic conditions (2.93 ml of 25% NH 3 ). The particles resulting after 3 h of hydrolysis and polycondensation of tetraethyl orthosilicate were washed by three steps of centrifugation (15,000 × g for 15 min, where g is the Earth’s gravitational acceleration) in ultrapure water and five or more steps of dialysis through a membrane with a 14 kDa molecular weight cutoff (regenerated cellulose, Carl Roth). Several batches of particles with hydrodynamic diameter in the range of 64.8–76.7 nm were prepared using an identical procedure to prevent artefacts due to suspension aging (size variability between batches, 5.2 nm). DLS was used to quantify the hydrodynamic particle size and surface charge of the diluted samples (1% v/v; NanoBrook Particle Size Analyzer 90Plus, Brookhaven; scattering angle, 90° at 1 min acquisition; raw data are publicly available 55 ). Inductively coupled plasma–optical emission spectroscopy and gravimetry served to quantify the SiO 2 concentration (methods described in ref. 31 ). For the particle characterization and to analyse the effects of SiO 2 NPs, Si(OH) 4 and control treatments in the leaves, we used TEM. The particle size distribution was established using the ImageJ software (version 1.52n) analysis of the TEM micrographs (raw data are publicly available 55 ). The plants were pre-fixed in 4% glutaraldehyde solution, gently stained in the dark with 1% OsO 4 solution that was centrifuged beforehand to remove potential precipitates, dehydrated using an ethanol series and embedded in polymer resin (AGAR Low Viscosity Kit, Plano) without further staining according to a procedure described in detail in ref. 57 . 
The correct position of the stomata to cut the cross-sections was identified by light microscopy examination of semi-thin resin sections before ultramicrotoming. The TEM images were taken on an FEI Tecnai Spirit instrument at an acceleration voltage of 120 kV (resolution, 2,048 × 2,048 pixels; Veleta CCD camera, Olympus). Besides the cropping and adjustment of brightness and contrast, the micrographs were not further processed; unprocessed raw data are publicly available 55 . DNA extraction The plant leaf samples (five leaf discs from different inoculated plant leaves per sample) were frozen in liquid nitrogen and homogenized using a ceramic mortar and pestle. The total DNA was extracted with a Plant DNA Mini Kit (peqlab, VWR). More information about the sample preparation is available in Supplementary Information , ‘Details on DNA extraction’. RNA extraction and complementary DNA synthesis The plant leaf samples (ten leaf discs taken from different infiltrated plant leaves per sample) were flash frozen in liquid nitrogen, and the total RNA was extracted with the Spectrum Plant Total RNA Kit (Sigma Life Science). One microgram of the total RNA was used for complementary DNA synthesis using the Omniscript Reverse Transcription Kit (Qiagen). More information about the sample preparation is available in the Supplementary Information , ‘Details on RNA extraction and complementary DNA synthesis’. qPCR To validate the SAR response based on the bacterial colony counts, the bacteria were also quantified via the outer membrane protein oprF gene of P. syringae in the inoculated leaves (raw data are publicly available 55 ) based on a previously established method 18 , 33 . For this bacterial DNA quantification, a reaction mixture for qPCR was prepared with 7.5 μl of 2× SensiMix SYBR Hi-ROX Mastermix (no. 
QT605-05, Bioline, Meridian Bioscience), 5 μl plant DNA and 0.5 μl of each primer (Supplementary Table 1 ) at a concentration of 10 μM in a final volume replenished with water to 15 μl in magnetic induction cycler (Mic) tubes (Bio Molecular Systems). The runs were performed on a Mic qPCR machine (Bio Molecular Systems). The conditions for the qPCR were as follows: initial denaturation for 10 min at 95 °C followed by 40 cycles (95 °C for 15 s, 62 °C for 1 min and 72 °C for 30 s). The final PCR products were analysed by a melting point analysis. The qPCR analysis software for the melting curve analysis and amplification efficiency calculation was micPCR v. 2.8.13 (Bio Molecular Systems). This software is designed to meet the minimum information for publication of quantitative real-time PCR experiments (MIQE) 58 specifications and automatically performs the qPCR analysis based on the real-time runs. Five leaf discs from different plant leaves were sampled for each replicate, frozen in liquid nitrogen and immediately processed for DNA extraction. The bacterial DNA levels of the bacterial oprF gene in Arabidopsis plants were calculated using At4g26410 ( expG ) as a reference gene 33 and the comparative cycle threshold method (2 (-ΔΔCt) ) 59 . For oxidative stress and SA-responsive plant transcript levels, leaf discs were flash frozen in liquid nitrogen and stored at –80 °C for <24 h before being processed for RNA extraction and complementary DNA synthesis. Three independent technical replicates (ten leaf discs taken from different plant leaves) were used per treatment. The reaction mixture for RT–qPCR contained 7.5 μl of 2× SensiMix SYBR Hi-ROX Mastermix (no. QT605-05, Bioline, Meridian Bioscience), 5 μl of complementary DNA (corresponding to 25 ng RNA) and 0.5 μl of each primer (Supplementary Table 1 ) at a concentration of 10 μM in a final volume replenished with water to 15 μl in Mic tubes (Bio Molecular Systems). 
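The relative quantification used throughout, the comparative cycle threshold (2^(–ΔΔCt)) method, can be sketched as below. The Cq values are invented for illustration; only the arithmetic follows the method cited in the text.

```python
def relative_level(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl):
    """2^(-ddCt) relative expression: the target Cq is first normalised to
    the reference gene (expG in this study), then to the control treatment.
    Assumes amplification efficiencies close to 2 for both genes."""
    d_ct_sample = cq_target - cq_ref            # normalise to reference gene
    d_ct_control = cq_target_ctrl - cq_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # normalise to control
    return 2.0 ** (-dd_ct)

# A target amplifying 3 cycles earlier (relative to the reference) than in
# the control corresponds to a 2^3 = 8-fold induction:
print(relative_level(24.0, 23.5, 27.0, 23.5))   # -> 8.0
```

Because every cycle ideally doubles the product, a shift of one Cq corresponds to a twofold change, which is why the efficiency check against the value 2 reported below matters for the method's validity.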
Runs were performed on a Mic qPCR machine (Bio Molecular Systems). The conditions for the qPCR and the analysis of the final PCR products by melting point analysis were analogous to the bacterial DNA quantification described above. The transcript levels of the oxidative stress marker (At3g46230; HSP17.4C1 ) 34 and the SA-responsive genes AtPR-1 and AtPR-5 in Arabidopsis plants were calculated with At4g26410 ( expG ) as the reference gene 60 and the comparative cycle threshold method (2 (–ΔΔCt) ) as mentioned above. The expG gene was selected because another study 60 specifically recommended expG as one of the top five reference genes to be used in biotic stress studies due to its high stability under such conditions. This high stability was confirmed in a previous work of our laboratory 61 and in another work 33 . In the present study, the stable expression of expG is reflected in the very small variation in the quantitation cycle ( C q ), the qPCR cycle at which fluorescence becomes detectable, for expG . For example, in the PR-1 expression experiments (Fig. 6c ), the average C q ranged from 23.19 to 23.93 for all the different testing conditions, with an average relative error of only 0.63% (ref. 55 ). All the amplification efficiencies were very close to two, with good comparability between the reference gene and the target gene. For example, in Fig. 6c , the average amplification efficiency of expG and AtPR-1 across all the different treatment conditions (1.949 ± 0.011 versus 1.962 ± 0.027, averages ± standard deviations) differed by only 0.7% (ref. 55 ). All the statistical tests hereinafter were performed using the IBM SPSS Statistics software (version 22). Ecotoxicity of SiO 2 NPs and Si(OH) 4 to C. elegans larvae The ecotoxicity assays were conducted on the larval stage one (L1) nematodes of the C. elegans wild-type (ancestral; N2) genotype. Synchronized C. 
elegans larvae were grown according to a previously established protocol 62 (raw data are publicly available 55 ). A known number of larvae (~70) per replicate were then exposed to 0, 25, 125, 250, 500, 750, 1,000, 1,500 or 2,000 mg SiO 2 l –1 of SiO 2 NPs or Si(OH) 4 in 96-well plates (Corning Costar no. 3596). A 0.1% NaN 3 solution served as the positive control. As a food source for the nematodes, the wells contained 10 µl of living Escherichia coli (strain OP50; final optical density at 600 nm of 1 a.u.; ~5 × 10 8 cells ml −1 ). The total volume per well was 100 µl, and the final pH of the phosphate-buffered saline test solutions was 7.4. After incubating the nematodes at 20 °C for 48 h in the dark, the surviving larvae were counted under a stereo microscope at ×20 magnification. The resulting number of mobile nematode larvae was subtracted from the initially incubated number of larvae to calculate the percentage of immobile nematodes. The EC50 values were calculated using a numerically fitted standard log-logistic dose–response model (Levenberg–Marquardt iteration algorithm, Origin 2016, build 9.3.2.903, OriginLab; Supplementary Fig. 3 ). The experiment comprised 12 biological replicates for each treatment and was repeated twice with comparable results. Data availability The datasets that support the findings of the current study are available in the Zenodo repository with the identifier . Additional data related to this study are available from the corresponding authors upon reasonable request.
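The immobility endpoint and the fold-difference in ecotoxicity reduce to two small calculations. In the sketch below, the larva counts are invented, and the two EC50 values are hypothetical numbers chosen only to reproduce the ~36-fold factor reported above; neither is taken from the paper's raw data.

```python
def percent_immobile(n_initial, n_mobile_after_48h):
    """Immobility endpoint of the C. elegans assay: immobile larvae as a
    percentage of the initially incubated larvae."""
    return 100.0 * (n_initial - n_mobile_after_48h) / n_initial

# ~70 larvae per replicate were incubated; counts here are illustrative.
print(percent_immobile(70, 35))   # -> 50.0 (% immobile)

# Relative ecotoxicity is the ratio of the two fitted EC50 values.
# Hypothetical EC50s (mg SiO2 per litre) chosen to reproduce the ~36-fold
# lower ecotoxicity of SiO2 NPs reported in the text:
ec50_nps_mg_l, ec50_silicic_mg_l = 1800.0, 50.0
print(round(ec50_nps_mg_l / ec50_silicic_mg_l))   # -> 36
```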
Researchers at the Adolphe Merkle Institute and the Department of Biology at the University of Fribourg have discovered how certain silica nanoparticles could act as a traceless, degradable, and highly efficient treatment against some plant pathogens. One of the biggest challenges facing agriculture today is the extensive use of fertilizers and pesticides. With an increasing number of products banned or considered dangerous for human and animal health, the need for substitutes is acute. One approach is to stimulate plants' own immune response to pathogen attacks. Silicic acid, which naturally occurs in soil, is known to provoke such responses in plants, and amorphous silica nanoparticles can release this substance in small amounts. These nanoparticles, which are also naturally present in many food crops such as cereals, are more common than most people think. They are part of food grade silica (SiO2), otherwise known as E551 on labels and packaging, and used for decades in a variety of products such as table salt, pills, or protein powders to avoid clumping. Increased resistance With this in mind, the Fribourg-based researchers aimed to create an environmentally safe nano-agrochemical for the targeted delivery of silicic acid and to stimulate plant defense. They synthesized silica nanoparticles with similar properties to those found in plants. To test their efficiency, they applied the nanoparticles on Arabidopsis thaliana (thale cress), a widely used plant model, infected with the bacterial pest Pseudomonas syringae, another model organism. The results showed that their nanoparticles can boost resistance against the bacteria in a dose-dependent manner by stimulating the plant's defense hormone, salicylic acid (which is also the active ingredient in aspirin). The researchers also investigated the interactions of the nanoparticles with plant leaves. 
They were able to show that nanoparticle uptake and action occurred exclusively through the leaf pores (stomata) that allow the plants to breathe. The nanoparticles did not distribute further in the plants, and the particles degrade without leaving a trace in the presence of water, an important consideration for environmental and food safety. Compared to free silicic acid, which is already used in crop protection, the silica nanoparticles caused less stress to the plants and to other soil microorganisms due to the slow release of the silicic acid. The study, published in the top-ranking journal Nature Nanotechnology, shows that silica nanoparticles could serve as an inexpensive, highly efficient, safe, and sustainable alternative for plant disease protection. According to the researchers, future work could extend the investigations to a broader spectrum of plant pathogens and pests, such as other bacteria, insects, or viruses. They emphasize, though, that before any broad application of nanoparticles as nano-biostimulants and -fertilizers, a thorough analysis is needed to assess the potential long-term fate of silica nanoparticles in the environment.
10.1038/s41565-020-00812-0
Medicine
Human health may be at risk from long-term exposure to air pollution below current air quality standards and guidelines
Long term exposure to low level air pollution and mortality in eight European cohorts within the ELAPSE project: pooled analysis, BMJ (2021). DOI: 10.1136/bmj.n1904 Journal information: British Medical Journal (BMJ)
http://dx.doi.org/10.1136/bmj.n1904
https://medicalxpress.com/news/2021-09-human-health-long-term-exposure-air.html
Abstract Objective To investigate the associations between air pollution and mortality, focusing on associations below current European Union, United States, and World Health Organization standards and guidelines. Design Pooled analysis of eight cohorts. Setting Multicentre project Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE) in six European countries. Participants 325 367 adults from the general population recruited mostly in the 1990s or 2000s with detailed lifestyle data. Stratified Cox proportional hazard models were used to analyse the associations between air pollution and mortality. Western Europe-wide land use regression models were used to characterise residential air pollution concentrations of ambient fine particulate matter (PM 2.5 ), nitrogen dioxide, ozone, and black carbon. Main outcome measures Deaths due to natural causes and cause specific mortality. Results Of 325 367 adults followed-up for an average of 19.5 years, 47 131 deaths were observed. Higher exposure to PM 2.5 , nitrogen dioxide, and black carbon was associated with significantly increased risk of almost all outcomes. An increase of 5 µg/m 3 in PM 2.5 was associated with 13% (95% confidence interval 10.6% to 15.5%) increase in natural deaths; the corresponding figure for a 10 µg/m 3 increase in nitrogen dioxide was 8.6% (7% to 10.2%). Associations with PM 2.5 , nitrogen dioxide, and black carbon remained significant at low concentrations. For participants with exposures below the US standard of 12 µg/m 3 an increase of 5 µg/m 3 in PM 2.5 was associated with 29.6% (14% to 47.4%) increase in natural deaths. Conclusions Our study contributes to the evidence that outdoor air pollution is associated with mortality even at low pollution levels below the current European and North American standards and WHO guideline values. 
These findings are therefore an important contribution to the debate about revision of air quality limits, guidelines, and standards, and future assessments by the Global Burden of Disease. Introduction Epidemiological cohort studies have consistently found associations between long term exposure to outdoor air pollution and a range of morbidity and mortality endpoints. Concentrations of health relevant regulated pollutants, including fine particles and nitrogen dioxide, have decreased in the past decades in developed countries. Recent evaluations by the World Health Organization and the Global Burden of Disease study have suggested that health effects might persist at these lower concentrations. 1 2 3 However, there is uncertainty about the shape of the concentration-response function at the low end of the air pollution concentration distribution, related to the scarcity of observations at the lowest concentrations. Associations with mortality at low pollution levels in large populations were primarily investigated in a few North American studies, specifically the Canadian census cohort, the Canadian Community Health survey, the US Medicare cohort, and the US National Health Interview Survey study. 4 5 6 7 8 9 10 All the studies found associations below the current annual average US standard of 12 µg/m 3 and WHO guideline value of 10 µg/m 3 for fine particles with an aerodynamic diameter of <2.5 µm (PM 2.5 ), but only two studies were able to adjust for detailed individual lifestyle factors. 7 9 Most of the studies suggested a steeper concentration response function at the lowest levels, but the National Health Interview Survey study 9 suggested little association below about 5 µg/m 3 . Most studies focused primarily on PM 2.5 , whereas increasing evidence shows that pollutants related to local combustion sources, including nitrogen dioxide and black carbon, might be relevant to health. 
Few studies have assessed the mortality effects of long term exposure to ozone. Within the project Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE) we assessed associations of low level air pollution concentrations with natural and cause specific mortality. Low level air pollution was defined as concentrations below current European Union limit values, US Environmental Protection Agency national ambient air quality standards, or the 2005 WHO air quality guidelines. We investigated PM 2.5 , nitrogen dioxide, ozone, and black carbon at a fine spatial resolution. To have sufficient statistical power to detect associations at low exposure levels, we pooled data from eight European cohorts with information on important individual risk factors, including smoking and body mass index. Methods Study population The eight cohorts were selected from six European countries (see supplementary figure): Sweden (Stockholm county), Denmark (Copenhagen and Aarhus, and nationwide), France (nationwide), the Netherlands (four cities), Germany (Ruhr and Augsburg areas), and Austria (Vorarlberg region). All the cohorts, except the Danish cohort, 11 were previously part of the European Study of Cohorts for Air Pollution Effects (ESCAPE). 12 Not all ESCAPE cohorts were included, either because of relatively high annual air pollution concentrations or because the data could not be pooled. Several of the included cohorts (ie, from Sweden, Denmark, the Netherlands, and Augsburg, Germany) combined multiple original cohorts, termed subcohorts. All cohorts and subcohorts included general population samples and specific subgroups, such as Danish nurses (DNC cohort). Most cohorts were from large cities and surrounding regions. Supplementary appendix section 1 describes the cohorts in more detail. Recruitment of most of the cohorts was in the 1990s or 2000s (supplementary table S1). 
To pool data, we used a common codebook to harmonise individual and area level covariates and outcome variables between cohorts. Information on covariates was only available at baseline. Assessment of exposure to air pollution We assessed air pollution concentrations at the baseline residential address of the study participants using land use regression models, described in detail elsewhere. 13 Briefly, we estimated 2010 annual mean PM 2.5 , nitrogen dioxide, black carbon, and (warm season) ozone concentrations using the European Environmental Agency AirBase routine monitoring data (PM 2.5 , nitrogen dioxide, and ozone) and ESCAPE monitoring data (black carbon). Predictors were satellite derived and chemical transport model air pollutant estimates at 10×10 km, and fine scale land use and road traffic data. Western Europe-wide models were developed on a 100×100 m grid and were assigned to the participants using their geocoded residential address. The PM 2.5 , nitrogen dioxide, black carbon, and ozone models generally explained a large fraction of measured spatial variation of the annual average concentration: 72%, 59%, 54%, and 69%, respectively. In the ELAPSE paper on exposures 13 we reported performance of the models by comparing with the external ESCAPE measurements by cohort (typically 20 sites for PM 2.5 and 40 sites for nitrogen dioxide). The root mean square error of this comparison was between 1.0 µg/m 3 and 1.7 µg/m 3 for PM 2.5 , except for the French cohort, where the value was 3.3 µg/m 3 . For nitrogen dioxide, the root mean square error was between 5 µg/m 3 and 7 µg/m 3 , except for the nationwide French cohort (12 µg/m 3 ). Differences were therefore modest, and some of the variability was probably related to the small number of sites in each ESCAPE area.
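The two validation statistics quoted above, the percentage of explained spatial variation and the root mean square error, can be computed as in this minimal sketch; the predicted and measured site concentrations are hypothetical inputs, not the study's data.

```python
import math

def rmse(predicted, measured):
    """Root mean square error between model predictions and site measurements."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

def r_squared(predicted, measured):
    """Fraction of measured spatial variation explained by the model."""
    n = len(measured)
    mean_m = sum(measured) / n
    ss_res = sum((m - p) ** 2 for p, m in zip(predicted, measured))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

# Hypothetical annual mean PM2.5 at a handful of monitoring sites (µg/m3)
measured = [8.0, 10.5, 12.0, 15.5, 18.0]
predicted = [8.5, 10.0, 13.0, 15.0, 17.0]
```

A hold-out comparison like the ESCAPE one above would apply these to sites not used in model fitting, which is why the reported errors are larger in areas with few sites.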
To enable time varying exposure analysis, we extrapolated concentrations to every year of follow-up using the estimated annual concentrations from the Danish eulerian hemispheric model, which models monthly average concentrations across Europe at 26×26 km spatial resolution back to 1990. 14 Mortality data Mortality was defined based on the underlying cause of death recorded on death certificates in mortality registries as ICD-9 and ICD-10 (international classification of diseases, ninth and 10th revisions, respectively) codes. We analysed mortality from natural causes (ICD-9: 001-779; ICD-10: A00-R99) and cause specific mortality for cardiovascular disease (ICD-9: 400-440; ICD-10: I10-I70), ischaemic heart disease (ICD-9: 410-414; ICD-10: I20-I25), cerebrovascular disease (ICD-9: 430-438; ICD-10: I60-I69), respiratory disease (ICD-9: 460-519; ICD-10: J00-J99), chronic obstructive pulmonary disease (ICD-9: 490-492, 494, 496; ICD-10: J40-J44, J47), diabetes (ICD-9: 249-250; ICD-10: E10-E14), and cardiometabolic diseases (cardiovascular disease or diabetes). The end of follow-up for mortality was until 2011-15, depending on the cohort (supplementary table S1). Statistical analysis We analysed the associations between air pollution and mortality using Cox proportional hazards models stratified by sex and cohort or subcohort, with age as underlying timescale. Censoring occurred at the time of the event of interest, death from other causes, emigration, loss to follow-up for other reasons, or end of follow-up, whichever came first. Strata for cohorts or subcohorts were applied because of concerns about differences not fully accounted for by the available covariates and departures from the proportional hazard assumption. 15 We specified three confounder models a priori, with an increasing level of adjustment for individual and area level variables. Model 1 included age (as the timescale), sex (strata), and year of enrolment. 
Model 2 further included smoking status, duration and intensity of smoking (linear and squared for intensity), body mass index, marital status, and employment status. Model 3 further expanded model 2 with neighbourhood or municipal level mean income in 2001. We determined models 2 and 3 based on the ESCAPE confounder models 12 and detailed sensitivity analyses, in which we balanced the need to adjust for a comprehensive set of covariates and the availability of these covariates for most participants. Model 3 was considered the main model. In addition to the main linear model, we assessed the shape of the concentration-response association between air pollution and mortality using both natural cubic splines with three degrees of freedom and the shape constrained health impact function (SCHIF). The SCHIF method assesses several different shapes of the association as variations of sigmoidal functions to produce biologically plausible concentration-response functions, resulting in an “optimal” shape and an “ensemble” of all fitted shapes. 16 The SCHIF shapes are smoother and less affected by sparse data than natural cubic splines. We also performed analyses with exposure grouped into quarters for natural, cardiovascular, and respiratory mortality. Furthermore, we performed several sensitivity analyses. The associations with linear models were analysed in subsets of concentrations by excluding observations above specific values. We evaluated cut-offs including current EU limit values (25 μg/m 3 PM 2.5 , 40 μg/m 3 nitrogen dioxide), US Environmental Protection Agency national ambient air quality standards (12 μg/m 3 PM 2.5 ), and WHO air quality guidelines (10 μg/m 3 PM 2.5 , 40 μg/m 3 nitrogen dioxide). To disentangle the effect of individual pollutants, we specified linear models of two pollutants for all combinations of the four pollutants. We did not specify three pollutant models because of the high correlations between pollutants within cohorts.
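The stratified Cox analysis described above can be illustrated with a toy computation of the Breslow log partial likelihood. This is a hedged sketch only: the study fitted its models in R with age as the underlying timescale and the full covariate sets of models 1–3, whereas this toy uses follow-up time, a single exposure covariate, and invented records.

```python
import math

def stratified_log_partial_likelihood(records, beta):
    """Breslow log partial likelihood of a stratified Cox model with one
    covariate. records: list of (time, event, exposure, stratum) tuples;
    each stratum (here standing in for a cohort or subcohort) keeps its
    own baseline hazard, as in the stratified models described above."""
    ll = 0.0
    for stratum in {r[3] for r in records}:
        sub = [r for r in records if r[3] == stratum]
        for t_i, event_i, x_i, _ in sub:
            if not event_i:
                continue  # censored subjects contribute only to risk sets
            # risk set: everyone in the stratum still under observation at t_i
            risk_set = [x for t, e, x, _ in sub if t >= t_i]
            ll += beta * x_i - math.log(sum(math.exp(beta * x) for x in risk_set))
    return ll

# Toy data: (follow-up time, death indicator, PM2.5 exposure, cohort)
records = [
    (1.0, 1, 12.0, "A"), (2.0, 1, 9.0, "A"), (3.0, 0, 10.0, "A"),
    (1.5, 1, 15.0, "B"), (2.5, 0, 11.0, "B"),
]
```

At β = 0 each event contributes −log(size of its risk set), so this toy evaluates to −log 12; a fitting routine would maximise the function over β, and exp(β) is then the hazard ratio per unit of exposure.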
As some potential confounders were not available in all cohorts, we tested the sensitivity of our findings by adjusting for additional variables such as education and performing a leave one out cohort analysis. We assessed effect modification by covariates available in all cohorts and subcohorts. Because we stratified for sex in our main model, we changed the formulation to include sex as a covariate in the model. We then added pollution as an interaction variable, as with the other effect modifiers (restoring strata for sex as in the main model). We assessed the sensitivity of our findings to using the exposure in 2010 by analysing the back extrapolated concentrations at each cohort’s baseline year and by time varying exposures from enrolment to end of follow-up. Residential history was incorporated in the time varying exposure analyses. In the time varying analysis, one and five year period strata were used in the Cox models to account for time trends in mortality and air pollution. Because air pollution and noise might be correlated, we conducted additional adjustment for road traffic noise. Details on assessment of noise exposure are provided elsewhere. 17 We used multiple imputation by chained equations 18 to fill in missing values for confounders, provided that a cohort had information for a variable for part of the cohort (supplementary appendix, section 2). Analyses were performed in R (version 3.4.0). 19 Supplementary appendix, section 3, lists the packages used in the analyses. Patient and public involvement As we used existing cohorts recruited more than a decade ago, we could not involve patients in the design of the study and the paper. We will prepare press releases and share the findings through publications, talks, and social media, addressing larger audiences that include members of the public, patients, health professionals, and stakeholders. 
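Multiple imputation by chained equations, mentioned above, cycles through the incomplete variables, regressing each on the others and filling in the missing entries. The sketch below is a deliberately simplified, deterministic two-variable version (real MICE draws imputations from a predictive distribution and repeats the process to create several imputed datasets); the data are hypothetical.

```python
def ols(xs, ys):
    """Least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sxx
    return mean_y - slope * mean_x, slope

def chained_imputation(a, b, n_iter=10):
    """Fill None entries of two variables by iterated regression on each
    other. Deterministic simplification of MICE: real chained equations
    add a random draw around each prediction and produce multiple
    completed datasets rather than a single one."""
    a, b = list(a), list(b)
    miss_a = [i for i, v in enumerate(a) if v is None]
    miss_b = [i for i, v in enumerate(b) if v is None]
    obs_a = [v for v in a if v is not None]
    obs_b = [v for v in b if v is not None]
    for i in miss_a:
        a[i] = sum(obs_a) / len(obs_a)  # start from the observed mean
    for i in miss_b:
        b[i] = sum(obs_b) / len(obs_b)
    for _ in range(n_iter):
        if miss_a:  # regress a on b using rows where a was observed
            ia, sa = ols([b[i] for i in range(len(a)) if i not in miss_a],
                         [a[i] for i in range(len(a)) if i not in miss_a])
            for i in miss_a:
                a[i] = ia + sa * b[i]
        if miss_b:  # then regress b on a using rows where b was observed
            ib, sb = ols([a[i] for i in range(len(b)) if i not in miss_b],
                         [b[i] for i in range(len(b)) if i not in miss_b])
            for i in miss_b:
                b[i] = ib + sb * a[i]
    return a, b

# Hypothetical data: b is exactly 2 * a, with two entries of b missing
a_obs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
b_obs = [2.0, 4.0, 6.0, None, 10.0, 12.0, None, 16.0, 18.0, 20.0]
a_imp, b_imp = chained_imputation(a_obs, b_obs)
```

As stated above, the study only imputed a covariate where a cohort had partial information on it; cohorts lacking a variable entirely were handled in the confounder models instead.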
Results Population and exposure characteristics Cohorts differed in all characteristics, supporting the analysis using strata for cohorts and subcohorts ( table 1 and supplementary tables S1 and S2). Observations were pooled from 381 036 participants. Owing to missing covariate data, 325 367 participants were included in the main analysis. The Austrian VHM&PP cohort contributed 45% of participants. Nearly all participants were exposed to PM 2.5 levels below the EU limit value (25 µg/m 3 ), more than 50 000 were exposed to levels below the US Environmental Protection Agency national ambient air quality standards (12 µg/m 3 ), and more than 25 000 were exposed to levels below the WHO air quality guidelines (10 µg/m 3 ). More than 310 000 participants were exposed to nitrogen dioxide levels below the EU limit values and WHO air quality guidelines (40 µg/m 3 ). Table 1 Characteristics of study populations from eight European cohorts. Values are numbers (percentages) unless stated otherwise. Large north to south gradients in exposure to air pollution were observed between cohorts ( figure 1 and table S3). Variations in black carbon and nitrogen dioxide within cohorts were especially substantial. Contrast for ozone was low within cohorts. Fig 1 Annual average exposure at participant (n=325 367) addresses. In boxes, the boundary closest to zero indicates 25th centile and furthest from zero indicates 75th centile. Lines in boxes represent the median and whiskers are 5th and 95th centiles. Dashed lines for fine particulate matter (PM 2.5 ) indicate World Health Organization air quality guidelines (10 µg/m 3 ), US Environmental Protection Agency national ambient air quality standards (12 µg/m 3 ), and EU limit value (25 µg/m 3 ). Dashed lines for nitrogen dioxide indicate WHO air quality guidelines (40 µg/m 3 ) and WHO health risks of air pollution in Europe (HRAPIE) health impact quantification threshold (20 µg/m 3 ). 
Cohorts were ordered from north (top) to south (bottom). CEANS cohorts are from Stockholm county, Sweden, DCH from Copenhagen and Aarhus, Denmark, DNC from Denmark nationwide, EPIC-NL from four cities in the Netherlands, HNR from the Ruhr area, Germany, E3N from France nationwide, KORA from the Augsburg area, Germany, and VHM&PP from the Vorarlberg region, Austria. PM 2.5 was moderately to highly correlated with black carbon and nitrogen dioxide within most cohorts (supplementary table S4). Black carbon and nitrogen dioxide were highly correlated in most cohorts. Ozone was negatively correlated with PM 2.5 and especially nitrogen dioxide and black carbon in all cohorts, with a particularly high negative correlation in the large Austrian cohort. The within cohort correlation is important since strata were used for cohorts and subcohorts in the epidemiological analysis. Associations with mortality Main analysis Associations between PM 2.5 , nitrogen dioxide, and black carbon and almost all outcomes were significantly positive in linear analysis ( table 2 ). Effect estimates for PM 2.5 were similar for deaths from natural causes and cardiovascular disease and lower for deaths from respiratory disease but similar for nitrogen dioxide and black carbon. The highest hazard ratios were found for deaths due to diabetes, with wider confidence intervals owing to a small number of deaths. Associations were significantly negative for ozone and all outcomes, related to the negative correlation between ozone and the other pollutants (participants with high exposure to ozone had low exposures to PM 2.5 , black carbon, and nitrogen dioxide). Table 2 Risk of death associated with exposure to air pollution in 325 367 participants from eight European cohorts. 
Values are hazard ratios (95% confidence intervals) unless stated otherwise. Figure 2 and supplementary figure S1 show the concentration-response functions for PM 2.5 , nitrogen dioxide, and deaths from natural causes using natural splines. Associations for natural deaths were observed over the full range of exposures. Associations tended to be steeper at low concentrations, levelling off at high concentrations. At the extremes of the distribution, patterns occurred that were difficult to interpret, related to large uncertainty about the shape of the curve as indicated by wide confidence intervals. For the association between PM 2.5 and deaths due to respiratory disease and chronic obstructive pulmonary disease, the pattern was difficult to interpret as a decreasing trend occurred associated with relatively frequently occurring exposures. Concentration-response functions for cause specific mortality were in general similar to those for natural deaths, indicating mostly supralinear curves, with associations remaining at low levels ( figure 2 and supplementary figures S2-S6). The analyses of exposure grouped into quarters confirmed the linear to supralinear curves in the splines, except for PM 2.5 and deaths due to respiratory disease (supplementary table S5). Fig 2 Natural cubic splines (three degrees of freedom) for associations between exposure to air pollution and deaths due to natural causes, cardiovascular disease, and respiratory disease. Purple shaded areas represent 95% confidence intervals. Histogram of exposure added to illustrate sparse data regions. Dashed lines for fine particulate matter (PM 2.5 ) indicate World Health Organization air quality guidelines (10 µg/m 3 ), the primary US Environmental Protection Agency standard (12 µg/m 3 ), and the secondary US EPA standard (15 µg/m 3 ), all as annual averages. 
For nitrogen dioxide the red lines indicate the WHO suggested limit for burden of disease quantification (20 µg/m 3 ) and EU limit value and WHO air quality guidelines (40 µg/m 3 ). Patterns at the extremes are difficult to interpret owing to wide confidence intervals. The SCHIF shapes were generally in agreement with the shapes of the natural splines, indicating that evidence still exists for an association at low levels and the positive associations between pollution and natural and cause specific mortality are generally steeper at the low end of the distribution for PM 2.5 , nitrogen dioxide, and black carbon (supplementary figures S7-S12). The SCHIF shapes for nitrogen dioxide and deaths due to respiratory disease and chronic obstructive pulmonary disease suggest a flatter slope at low than at high concentrations. Sensitivity analyses Table 3 shows the hazard ratios for natural deaths observed in subsets of successively lower air pollution concentrations. Associations remained positive and statistically significant for PM 2.5 even when all observations higher than 12 µg/m 3 were removed from the analysis. The hazard ratios for participants with exposures below 10 µg/m 3 were similar to those for all observations but with wider confidence intervals. For nitrogen dioxide, associations remained significantly positive below 20 µg/m 3 , well below current standards. Similar patterns were found for cause specific mortality ( table 3 ), with wider confidence intervals associated with smaller numbers of deaths. Table 3 Subset analysis of risk of death associated with exposure to air pollution. Associations for PM 2.5 and nitrogen dioxide were attenuated but remained significant after adjustment for each other and for ozone as well as for black carbon in models of two pollutants for deaths due to natural causes and cardiovascular disease (supplementary tables S6 and S7). 
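The hazard ratios in these analyses are expressed per fixed exposure increment (5 µg/m³ for PM 2.5, 10 µg/m³ for nitrogen dioxide). Under the log-linear Cox model they can be rescaled to any other increment, which is useful when comparing estimates across studies. A small sketch; the numerical example reuses the 13% per 5 µg/m³ PM 2.5 estimate from the abstract, and the rescaled values are derived, not reported, figures.

```python
import math

def rescale_hazard_ratio(hr, from_increment, to_increment):
    """Convert a hazard ratio per `from_increment` exposure units into a
    hazard ratio per `to_increment` units, assuming log-linearity:
    HR_new = exp(log(HR) / from_increment * to_increment)."""
    beta_per_unit = math.log(hr) / from_increment
    return math.exp(beta_per_unit * to_increment)

# 13% increase in natural deaths per 5 µg/m3 PM2.5 -> HR 1.13 per 5 µg/m3
hr_per_10 = rescale_hazard_ratio(1.13, 5, 10)  # equals 1.13**2
hr_per_1 = rescale_hazard_ratio(1.13, 5, 1)    # roughly 1.025 per 1 µg/m3
```

Doubling the increment squares the hazard ratio, which is why per-10 and per-5 estimates from different papers can look very different while describing the same slope.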
For deaths due to respiratory disease, only associations for nitrogen dioxide were robust to adjustment for other pollutants (supplementary table S8). The negative association with ozone attenuated towards unity but remained statistically significant. Air pollution concentrations have decreased substantially in Europe since the 1990s (supplementary figure S13). Exposures to PM 2.5 especially were substantially higher at baseline than in 2010 (supplementary figure S14); exposures to nitrogen dioxide and black carbon were moderately higher at baseline (supplementary appendix, section 7). When exposure to baseline air pollution was used instead of the 2010 exposure, hazard ratios especially for PM 2.5 were found to be smaller than in the main analysis, although still statistically significant (supplementary table S9). Hazard ratios in the time varying exposure analyses were similar to those of the analysis using the 2010 exposure (supplementary table S10). As time trends were different across Europe, different trends were specified for each cohort. Natural spline analysis conducted in time varying exposure analyses supported the findings that mortality associations remained at low levels and were not associated with the use of the 2010 exposure as an exposure estimate (supplementary figure S15). Further adjustment for education, diet, and occupational status did not affect effect estimates obtained with the main model (supplementary table S11). Effect estimates for PM 2.5 were unaffected, and those for nitrogen dioxide and black carbon only mildly affected, by exclusion of specific cohorts, such as the large Austrian VHM&PP cohort (supplementary figure S16). The hazard ratio for ozone was substantially closer to unity when excluding the Austrian cohort. Effect estimates for deaths due to natural causes and cardiovascular disease were only mildly attenuated by additional adjustment for road traffic noise (supplementary table S12). 
The negative ozone associations were attenuated to unity in models of two pollutants (adjusting the ozone association for one of the other pollutants, one at a time) without the Austrian cohort and with additional adjustment for road traffic noise (supplementary tables S13 and S14). Hazard ratios were unchanged after using multiple imputation to estimate missing covariate data in the full study population (supplementary table S15). Indications of effect modification were found for sex (higher hazard ratio in men; supplementary figure S17), smoking status for PM 2.5 (higher hazard ratio in current smokers, but also significant associations in never smokers), age for nitrogen dioxide (higher hazard ratio in those aged <65 years). Effect estimates remained significant in all strata of participants except those with a very low body mass index (<18.5). Discussion By performing targeted analyses within a large European pooled cohort with detailed data on individual lifestyle covariates, we found significant positive associations between residential exposure to PM 2.5 , nitrogen dioxide, and black carbon and deaths due to natural causes, cardiovascular disease, and respiratory disease. For these pollutants, we generally observed associations that were stronger at low exposure levels. Subset analyses documented that these associations remained even at levels for PM 2.5 and nitrogen dioxide below current EU limit values, US Environmental Protection Agency national ambient air quality standards, and WHO air quality guidelines. Comparison with other studies The estimated hazard ratio for mortality associated with PM 2.5 in our study is larger than the estimate from the ESCAPE study, 12 estimates from recent North American administrative cohorts 5 6 8 20 and a recent Danish study, 21 and estimates from meta-analyses 22 23 24 (supplementary table S16), but almost identical to the results of the Canadian community health survey study. 
7 The recent WHO systematic review documented heterogeneity in PM 2.5 effect estimates between studies, attributed to study location, level, and composition of particulate matter and to methodological differences. 24 In our cohort, similarly to the Canadian community health survey, individual lifestyle data were available, which are missing in large administrative cohorts. The sensitivity analysis using PM 2.5 estimates at baseline year of exposure showed clearly smaller effect estimates. These estimates are more in line with effect estimates reported in a recent systematic review, suggesting that the effect estimate using the 2010 concentrations as exposure variable might be overestimating the true effect estimate. Effect estimates from time varying exposure analyses were similar to those of our main analyses. Our effect estimates for nitrogen dioxide were also higher than those in recent meta-analyses (supplementary table S12). Our study contributes to the evidence that outdoor air pollution is associated with mortality even at levels below the current European and North American standards and WHO guideline values. When we applied two methods allowing non-linear concentration-response functions and linear analyses in subsets of exposure, we found no indication of a level below which no association was found. The finding of associations at low levels is consistent with that of several other recent cohort studies. 4 5 6 7 8 25 26 27 28 29 The steeper slope of the PM 2.5 mortality association at low levels is consistent with previous North American studies. 4 5 6 7 8 In some other cohort studies, the shape of the PM 2.5 function was sublinear 30 31 or linear. 25 26 27 In a comprehensive meta-analysis that combined evidence from a large number of cohorts, a supralinear association was observed for PM 2.5 . 32 Considerably less evidence is available about associations between low level nitrogen dioxide and mortality. 
In two large administrative cohorts in Canada and the Netherlands, associations with nitrogen dioxide were also found well below the WHO guideline value. 5 33 Our study found associations at levels twofold lower than the current WHO guideline values for long term exposure to nitrogen dioxide. Models of two pollutants showed that both PM 2.5 and nitrogen dioxide were associated with mortality. Whereas PM 2.5 primarily reflects pollution transported over large distances, nitrogen dioxide reflects local fossil fuel combustion sources, especially motorised traffic. Our results for ozone did not confirm previously reported positive associations with mortality. 34 This might be related to the very small range of ozone exposure within our study, rendering our study less informative for assessing health effects of ambient ozone. The negative associations we found in single pollutant models might reflect the high negative correlation with especially nitrogen dioxide and black carbon. Ozone and nitrogen dioxide are negatively correlated because when ozone is close to combustion sources (eg, major roads), it reacts with nitric oxide emitted from the combustion source to form oxygen and nitrogen dioxide. Ozone therefore tends to be low near roadways, whereas black carbon emitted by traffic is high. Nitrogen dioxide is in part directly emitted from traffic and in part formed by the atmospheric reactions, so it is also high near roadways. In models of two pollutants and models adjusting for noise, the negative associations with ozone were attenuated to unity, especially when excluding the large Austrian cohort and adjusting for nitrogen dioxide and noise together. The very high negative correlation between ozone and nitrogen dioxide in the Austrian cohort renders models of two pollutants difficult to interpret. 
Strengths and limitations of this study An important strength of ELAPSE is the pooling of data from multiple European cohorts with detailed information on individual covariates (eg, smoking, body mass index), which allowed for more statistical power and analysis of the shapes of concentration-response functions. A part of the pooling process was an extensive, highly standardised procedure of harmonisation of individual and small area level variables between all cohorts. Another strength of the study is that we used state-of-the-art models to enable a uniform assessment of exposure to air pollution at a fine, 100×100 m scale for all four pollutants. Compared with the ESCAPE study, 12 we had longer follow-up time. A limitation of our study is the use of the 2010 exposure in our main analyses. The rationale for using the 2010 exposure was that in earlier years we did not have enough monitoring stations in Europe to develop the fine spatial scale models for PM 2.5 . The 2010 exposure represents exposure towards the end of follow-up for most cohorts. Given the downward trends in air pollution, a concern is that 2010 exposure might not correctly reflect the long term exposure leading to increased mortality. Previous studies, however, have documented that spatial contrasts in nitrogen dioxide and black carbon remain constant for at least a decade, 35 36 37 38 supporting the use of 2010 exposures in the analysis. Our sensitivity analyses with time varying exposure resulted in similar findings to the main model. For PM 2.5 mainly, northern European cohorts contributed to the effect estimates in the lowest exposure range and we therefore could not distinguish between the characteristics of the particulate matter mixture or the population characteristics affecting the steeper slope at low levels. More overlap was found in exposure to nitrogen dioxide and black carbon between cohorts and we observed steeper slopes at low levels as well. 
The difficulty in interpreting the non-linear function observed for deaths due to respiratory disease could be related to competing risks of death that we did not account for in our study. Bias from exposure misclassification and residual confounding cannot be excluded. As exposure status was determined fully independently from the outcome, any misclassification is likely non-differential and thus biases estimates towards the null. We adjusted for several commonly used potential confounders, and adjustment for socioeconomic status might reduce confounding by other risk factors. The coding of the causes of death in the current study was brought in line with the previous ESCAPE analyses. These codes differ slightly from those used by the Global Burden of Disease, but we do not expect this to result in major differences. Conclusions Our study contributes to the evidence that outdoor air pollution is associated with mortality even at levels below the current European and North American standards and WHO guideline values. These findings are therefore an important contribution to the debate about revision of air quality limits, guidelines and standards, and future assessments by the Global Burden of Disease. 
What is already known on this topic
In the framework of the update of the World Health Organization air quality guidelines, systematic reviews of studies of the effect of long term exposure to major outdoor air pollutants (fine particles, nitrogen dioxide, and ozone) have been done
Findings showed that long term exposure to ambient air pollution was significantly associated with natural and cause specific mortality, but associations at concentrations below current limit values were not well understood
What this study adds
Long term exposure to outdoor air pollution was positively associated with mortality even at levels well below the EU limit values, US Environmental Protection Agency national ambient air quality standards, and WHO air quality guidelines for fine particles and nitrogen dioxide
This new evidence supports reconsideration of existing guideline values and standards
The finding of associations at low levels of air pollution and mortality also supports policies to reduce air pollution below current legal limit values
Ethics statements
Ethical approval All included cohort studies were approved by the medical ethics committees in their respective countries.
Data availability statement No additional data available.
Acknowledgments We thank Marjan Tewis for compiling the pooled cohort and Richard Burnett for supplying the code for the shape constrained health impact function and commenting on its application.
Footnotes
Contributors: MS performed the statistical analysis and wrote the original draft of the manuscript. GH wrote and reviewed the manuscript and edited the original version. SR, KK, and ES created the statistical analyses strategy and scripts for the statistical analyses. JC performed the statistical analysis and exposure assessments. KdH performed the exposure assessments. MS, GW, SR, KK, BB, GH, and ES conceived and designed the study. 
BB and GH are principal investigators of the Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE) project. All authors have read and revised the manuscript for important intellectual content and contributed to the interpretation of the results. All authors have approved the final draft of the manuscript. GH is the study guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. GH and ES contributed equally to the manuscript. Funding: This work was supported by Health Effects Institute (HEI) research agreement (grant No 4954-RFA14-3/16-5-3). Research described in this article was conducted under contract to the HEI, an organisation jointly funded by the US Environmental Protection Agency (EPA) (assistance award No R-82811201) and certain motor vehicle and engine manufacturers. The contents of this article do not necessarily reflect the views of HEI, or its sponsors, nor do they necessarily reflect the views and policies of the EPA or motor vehicle and engine manufacturers. Competing interests: All authors have completed the ICMJE uniform disclosure form at and declare: support from the Health Effects Institute for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years, no other relationships or activities that could appear to have influenced the submitted work. The corresponding author affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained. Dissemination to participants and related patient and public communities: We plan to prepare press releases and share the findings through publications, talks, and social media. 
We will address larger audiences that include members of the public, patients, health professionals, and stakeholders. Provenance and peer review: Not commissioned; externally peer reviewed. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: .
Long-term exposure to air pollution appears to still be linked to higher mortality despite the existence of air quality standards that restrict pollution levels, suggests a study published online in The BMJ today. Researchers found evidence of higher death rates among people who had been exposed to more air pollution, even though the levels were permitted under current official standards. Previous studies have found an association between long term exposure to outdoor air pollutants, such as fine particles (known as particulate matter or PM2.5) and nitrogen dioxide (NO2), and poor health or death. Air pollution concentrations have fallen substantially in Europe since the 1990s, but it is unclear whether there is still a link between pollution and ill health or death at concentrations below current permitted limits. Therefore, an international team of researchers led by the Institute for Risk Assessment Sciences at Utrecht University in the Netherlands set out to investigate whether there was an association between low air pollution concentrations and natural and cause specific deaths. Low level air pollution was defined as concentrations below current limit values as set by the European Union, US Environmental Protection Agency, and the World Health Organization (WHO) air quality guidelines. The researchers analysed data on eight groups of people within six European countries (Sweden, Denmark, France, the Netherlands, Germany, and Austria), totalling 325,367 adults. Their study, known as the Effects of Low-Level Air Pollution: A Study in Europe (ELAPSE), recruited participants in the 1990s or 2000s. Of the 325,367 participants who were followed up over an almost 20-year period, around 14.5% (47,131 people) died. Analysis of the results showed that people who had higher exposure to particulate matter (PM2.5), nitrogen dioxide, and black carbon were more likely to die. 
An increase of 5 µg/m3 in PM2.5 (a measure of pollutant concentration) was associated with a 13% increase in natural deaths, while the corresponding figure for a 10 µg/m3 increase in nitrogen dioxide was 8.6%. Associations with PM2.5 and nitrogen dioxide were largely independent of each other. Moreover, associations with PM2.5, nitrogen dioxide, and black carbon remained significant at low to very low concentrations. For people exposed to pollution levels below the US standard of 12 µg/m3, an increase of 5 µg/m3 in PM2.5 was associated with a 29.6% increase in natural deaths. For people exposed to nitrogen dioxide at less than half the current EU standard of 40 µg/m3, a 10 µg/m3 increase in nitrogen dioxide was associated with a 9.9% increase in natural deaths. This is an observational study, and as such, can't establish cause. The study also has some limitations, say the authors, such as the fact that it focused on exposure in 2010, which was towards the end of the follow-up period for most participants; given the downward trend in air pollution, this measure might not exactly reflect the concentrations experienced during follow-up. However, this was a large study of multiple European groups of people with detailed individual information. As such, the authors conclude: "Our study contributes to the evidence that outdoor air pollution is associated with mortality even at levels below the current European and North American standards and WHO guideline values. "These findings are therefore an important contribution to the debate about revision of air quality limits, guidelines and standards, and future assessments by the Global Burden of Disease [study]."
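The percentage increases quoted above come from hazard ratios estimated per fixed exposure increment. Assuming the usual log-linear (Cox proportional hazards) exposure-response form (the study's exact model specification is not reproduced here), the arithmetic for reading off and rescaling such estimates can be sketched as:

```python
def percent_increase(hr: float) -> float:
    """Percent increase in deaths implied by a hazard ratio (HR)."""
    return (hr - 1.0) * 100.0

def rescale_hr(hr: float, from_delta: float, to_delta: float) -> float:
    """Rescale an HR from one exposure increment to another.

    Under a log-linear model, log(HR) is proportional to the increment,
    so the HR per to_delta equals the HR per from_delta raised to the
    power (to_delta / from_delta).
    """
    return hr ** (to_delta / from_delta)

# An HR of 1.13 per 5 ug/m3 PM2.5 corresponds to a 13% increase in natural deaths.
hr_pm25 = 1.13
print(round(percent_increase(hr_pm25), 1))       # 13.0
# The same association re-expressed per 10 ug/m3 (illustrative only):
print(round(rescale_hr(hr_pm25, 5.0, 10.0), 3))  # 1.277
```

The power-law rescaling is what makes estimates reported per 5 µg/m³ and per 10 µg/m³ comparable across studies.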
10.1136/bmj.n1904
Medicine
Study reveals mechanism by which a circadian clock molecule leads to lung fibrosis
Qixin Wang et al, Circadian clock molecule REV-ERBα regulates lung fibrotic progression through collagen stabilization, Nature Communications (2023). DOI: 10.1038/s41467-023-36896-0 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-36896-0
https://medicalxpress.com/news/2023-04-reveals-mechanism-circadian-clock-molecule.html
Abstract Molecular clock REV-ERBα is central to regulating lung injuries, and decreased REV-ERBα abundance mediates sensitivity to pro-fibrotic insults and exacerbates fibrotic progression. In this study, we determine the role of REV-ERBα in fibrogenesis induced by bleomycin and Influenza A virus (IAV). Bleomycin exposure decreases the abundance of REV-ERBα, and mice dosed with bleomycin at night display exacerbated lung fibrogenesis. Rev-erbα agonist (SR9009) treatment prevents bleomycin-induced collagen overexpression in mice. Rev-erbα global heterozygous (Rev-erbα Het) mice infected with IAV showed augmented levels of collagens and lysyl oxidases compared with infected WT mice. Furthermore, Rev-erbα agonist (GSK4112) prevents collagen and lysyl oxidase overexpression induced by TGFβ in human lung fibroblasts, whereas the Rev-erbα antagonist exacerbates it. Overall, these results indicate that loss of REV-ERBα exacerbates fibrotic responses by promoting collagen and lysyl oxidase expression, whereas Rev-erbα agonists prevent it. This study highlights the potential of Rev-erbα agonists in the treatment of pulmonary fibrosis. Introduction Idiopathic pulmonary fibrosis (IPF) is a chronic interstitial lung disease characterized by progressive lung scar tissue formation that is typically accompanied by impaired lung function and difficulty breathing 1 . The onset of pulmonary fibrosis is usually initiated by the dysregulation of tissue repair mechanisms, which can be induced by various causes, such as air pollution (asbestos), antineoplastic drugs, and respiratory viral infections such as influenza A virus (IAV) and even coronavirus (SARS-CoV-2) infection 2 , 3 . In previous decades, rigorous basic studies have improved our understanding of pro-fibrotic pathogenesis and developed many candidates for anti-fibrotic therapy. However, there are no effective therapeutics for IPF, and the detailed molecular mechanism of fibrogenesis is still poorly understood 4 , 5 , 6 . 
Currently, nintedanib and pirfenidone are the only Food and Drug Administration (FDA)-approved drugs for pulmonary fibrosis, and they only serve to slow its progression 7 . Investigating promising new molecular pathways involved in fibrogenic responses is urgently needed, and Rev-erbα has become a promising candidate 8 , 9 . REV-ERBα is a transcriptional repressor that regulates mRNA transcription involved in circadian rhythms, metabolism, and inflammatory responses 10 , 11 , 12 , 13 . Oscillations in circadian rhythm are controlled by the competition of two nuclear receptors, REV-ERBα and retinoic acid-like orphan receptor alpha (RORα) 14 . REV-ERBα inhibits the transcription and translation of circadian locomotor output cycles kaput (CLOCK)/brain and muscle ARNT-like 1 ( BMAL1 , also known as ARNTL ), which form a heterodimer that binds to the E-box and promotes the transcription/translation of core clock molecules and downstream targets 15 . In regulating BMAL1 and CLOCK expression, RORα competes with REV-ERBα for binding to ROR response elements (ROREs) to activate the transcription of BMAL1 and CLOCK 15 , forming an auto-feedback system with REV-ERBα that provides stability and precision to molecular clock regulation. Interestingly, the downstream gene targets of the E-box include various fibrotic markers such as α-smooth muscle actin (αSMA) and vimentin (VIM) 16 . Moreover, the removal of REV-ERBα has been associated with increased risks of lung inflammation and premature senescence, as confirmed by our and others’ previous studies 17 , 18 , 19 . Circadian clock molecules have been identified as essential mediators of pulmonary injuries of various causes, such as cigarette smoke (CS) and IAV 20 , 21 , 22 , 23 . 
Previous studies have described the importance of circadian molecules in key cell subtypes, including club cells, alveolar macrophages, and fibroblasts, in the lung microenvironment in response to injury and inflammatory mediators 8 , 19 , 24 , 25 . Previous findings showed that CS exposure and IAV infection-induced lung injuries are associated with disruption of the circadian clock and impaired lung function, survival rate, and daily ambulatory activity 26 , 27 . Various studies to date demonstrate the fundamental interactions of core clock molecules, such as REV-ERBα or BMAL1, with lung inflammatory responses and the development of chronic obstructive pulmonary disease (COPD) caused by CS exposure 23 . Currently, only one study has shown that REV-ERBα deficiency in lung fibroblasts exaggerates bleomycin-induced lung fibrogenesis 8 . However, the mechanism and role of REV-ERBα in lung fibrogenesis via collagen synthesis, and its regulation during IAV infection, are not known. Stabilization of collagen fibers is regulated by lysyl oxidase, a copper-dependent amine oxidase, via crosslinking of the extracellular matrix proteins (collagen and elastin), thereby preventing collagen degradation 28 . Our previous study identified the potential of REV-ERBα in regulating epithelial-mesenchymal transition (EMT) and fibroblast differentiation induced by CS and TGFβ 27 . We therefore hypothesize that REV-ERBα is important in regulating fibrotic progression in the lungs by targeting collagen synthesis and its stabilization pathways. Here we show that the abundance of REV-ERBα is decreased during fibrogenesis, and loss of REV-ERBα augments the fibrotic responses caused by IAV infection. Furthermore, enhanced REV-ERBα activity/abundance reduces abnormal collagen accumulation by inhibiting the expression of lysyl oxidases during myofibroblast differentiation. 
Results Dysregulated protein abundances of REV-ERBα, COL1A1, and LOX were observed in IPF patients compared with healthy controls It is well known that excessive extracellular matrix (ECM) protein production occurs during fibrosis and is deposited within the lesion areas. Human lung sections with verified pathology were purchased from Origene Inc. All the healthy controls were within normal limits with 100% normal area and at least 80% alveoli area, while the IPF samples were composed of at least 40% lesion area (Supplementary Table 1 ). As shown in Fig. 1a , we observed high expression of type 1 collagen (COL1A1) over the injured tissue area and elevated lysyl oxidase (LOX) protein in IPF patients compared with healthy controls. Both COL1A1 and LOX were highly expressed in the ECM of the lesion tissues in IPF samples, whereas limited COL1A1 and LOX were expressed in healthy controls. Consistent with previous data, we observed diminished protein abundance and distribution of REV-ERBα in the fibrotic lesions from IPF samples, whereas REV-ERBα was highly expressed in the nuclei of healthy controls, with limited protein abundance observed in the cytoplasmic area. Similar results were observed when comparing the healthy and lesion areas within IPF samples (Fig. 1b ). A decreasing trend of REV-ERBα was found in the lesion area compared to the healthy sections, and upregulation of COL1A1 and LOX was found in the lesion area compared to the healthy area. Compared with the control group, REV-ERBα protein abundance was decreased in the healthy areas of IPF samples (control: 21.294% vs. healthy area from IPF: 11.296%), and slightly increased protein levels of COL1A1 (control: 37.074% vs. healthy area from IPF: 52.604%) and LOX (control: 30.439% vs. healthy area from IPF: 40.836%) were observed in the healthy areas of IPF samples (Fig. 1 ). 
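The percentages above are "positive stained area" readouts computed in ImageJ. For readers unfamiliar with that metric, a minimal sketch of the underlying calculation is an intensity threshold over the image array; the toy values and threshold below are illustrative, not the authors' actual ImageJ pipeline:

```python
import numpy as np

def percent_positive_area(image: np.ndarray, threshold: float) -> float:
    """Percentage of pixels whose stain intensity exceeds a threshold.

    Mimics an ImageJ-style 'positive stained area' readout; the real
    analysis (color deconvolution, threshold choice) is not shown here.
    """
    mask = image > threshold
    return 100.0 * mask.sum() / mask.size

# Toy 4x4 "stain intensity" image; 6 of 16 pixels exceed the threshold.
img = np.array([
    [0.1, 0.9, 0.2, 0.8],
    [0.7, 0.1, 0.1, 0.6],
    [0.1, 0.2, 0.9, 0.1],
    [0.1, 0.8, 0.1, 0.2],
])
print(percent_positive_area(img, 0.5))  # 37.5
```

Group means of this per-section percentage are what the bar graphs in Fig. 1 compare between control and IPF samples.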
A previous study has identified that REV-ERBα is fundamental in IPF progression 8 , and we determined how Rev-erbα affects the development of pulmonary fibrosis. Fig. 1: Decreased REV-ERBα protein abundance and increased protein levels of COL1A1 and LOX in IPF lungs compared to healthy control. Healthy control and IPF formalin fixed-paraffin embedded (FFPE) lung samples were purchased from Origene Inc. Healthy controls contained 100% normal lung architecture with 85% alveoli surface area. IPF patient samples contained at least 50% lesion surface area. The protein abundance of REV-ERBα, COL1A1, and LOX were visualized and determined by IHC. a The comparisons of protein distribution and abundance were performed between healthy control and IPF patient ( n = 10 per group), b or between the healthy area and lesion area from the same IPF patient. The images were taken, and the positive stained area was calculated by ImageJ ( n = 5 per group). Data were shown as mean ± SEM, unpaired t -test was used for a and b . Bar size: 50 µm. (* p < 0.05, *** p < 0.001; scale bar: 50 μm). Full size image Circadian clock genes, including REV-ERBα were dysregulated in bleomycin-induced fibrosis To understand the expression of Rev-erbα and related circadian genes in the in vivo model of fibrosis, we treated C57BL/6J wild-type (WT) mice with bleomycin (1.5 units/kg) to induce fibrosis and determine the gene expression of circadian and fibrotic-related genes. According to previous studies, most of the fibrotic markers were dysregulated significantly at day 14, and there was no significant difference at day 14, 21, and 28 post-injury 29 , 30 , 31 . Another report also described that variable outcomes appeared after day 21 post-injury, and even recovered to baseline 32 . Hence, we have selected day 14 post-injury as our end-time point for bleomycin-induced lung injury. 
After 14 days of bleomycin-induced lung injury, we found decreased gene expression of REV-ERB α (Gene symbol: NR1D1 ), REV-ERB β (Gene symbol: NR1D2 ), RORα (Gene symbol: RORA ), CLOCK , CRY1/2 , PER1/2/3 , and DBP (Fig. 2 a, b and Supplementary Fig. 1 ). There was no change in the gene expression of BMAL1 (Gene symbol: ARNTL ) or the NFIL3 transcript level (Supplementary Fig. 1 ). As expected, gene expression of fibrotic markers, such as COL1A1 , COL1A2 , COL3A1 , COL5A2 , TGFB1 , TGFB2 , VIM1 , FN1 , and MMP2 , was increased at 14 days post bleomycin injury (Fig. 2 a, b and Supplementary Fig. 1 ). Decreased gene expression of OCLN , TJP1 , TJP3 , and CDH1 was also observed, while SMAD2 and TJP2 showed no significant changes in the bleomycin group compared with PBS control (Fig. 2 a, b and Supplementary Fig. 1 ). Similarly, we also observed decreased protein expression of REV-ERBα in the bleomycin-treated group, as well as increased protein levels of LOX (total and activated) and COL1A1 (Fig. 2c ). Fig. 2: Altered circadian and profibrotic mRNA and protein expressions were observed in bleomycin-induced fibrotic responses. Lungs from C57BL/6J WT mice (combined male and female ( n = 2–3 each) for analysis) dosed with bleomycin at day 14 were snap-frozen and used for RNA isolation. RNA isolated from lung homogenates was used to identify circadian and profibrotic related gene expression by our customized nanostring panel through the nCounter SPRINT Profiler. The transcript levels of RNA targets (Normalized Count) were normalized and visualized by nSolver software. a Dysregulated genes are shown as a heatmap with circadian genes on top and profibrotic genes on the bottom. b Selected gene expressions were shown as a bar graph ( n = 6 mice per group). c Proteins isolated from lung homogenates were used to detect the abundance of REV-ERBα, LOX, Activated LOX, and COL1A1. 
Representative blots are shown here, and protein expression fold change was calculated based on normalization to β-ACTIN ( n = 4–6 mice per group). Data were shown as mean ± SEM; unpaired two-sided t -test was used for b and c . (* p < 0.05, ** p < 0.01, *** p < 0.001 vs. PBS). Full size image Exacerbated fibrotic progression and lung injury induced by bleomycin dosed at night REV-ERBα expression follows a circadian oscillation: it starts to increase at 6 a.m. (ZT0) and starts to decrease at 6 p.m. (ZT12) 19 . Thus, we dosed the mice at the beginning of the day (lights on) and night (lights off) cycles to determine whether the oscillation of REV-ERBα expression affects the fibrotic progression induced by bleomycin injury (Fig. 3 ). Interestingly, we found that mice treated with bleomycin at 7 p.m. exhibited exacerbated body weight loss compared to those dosed at 7 a.m. from days 11–14 post-injury (Fig. 3a ). In addition, mice dosed at 7 a.m. had a 100% survival rate, whereas only 75% of the mice dosed at 7 p.m. survived (Fig. 3a ). We also noticed that mice dosed with bleomycin at both 7 a.m. and 7 p.m. showed dramatic lung injury, with mice dosed at 7 p.m. showing a larger injury area in lung sections compared to mice dosed at 7 a.m. (Fig. 3a and Supplementary Fig. 2a ). Fig. 3: The health status of mice, circadian genes, fibrotic genes, and protein expressions were affected by bleomycin injury at different time points (7 a.m. vs. 7 p.m.). C57BL/6J WT female mice were used for testing. a The body weights and survival rate were monitored until day 14 post-injury. ( n = 3–5 mice per group, * p < 0.05, ** p < 0.01 vs. bleomycin 7 a.m. group). Lungs were harvested, and H&E staining was performed to identify the injured area percentage. 
b RNA was isolated from lung homogenates, and gene expression analysis was conducted using a customized nanostring panel through the nCounter SPRINT Profiler, and transcript levels were normalized and visualized by nSolver software. The dysregulated genes were shown as a heatmap with circadian and profibrotic genes. Selected gene expressions were shown as bar graphs ( n = 3 mice per group). c Proteins isolated from lung homogenates were tested via western blot (REV-ERBα, LOX, Activated LOX, LOXL2, and COL1A1); representative blots are shown, and fold change was normalized to β-ACTIN ( n = 4–5 mice per group). d Lung sections were used for IHC, and the abundance and localization of COL1A1 and LOX were detected ( n = 3–4 mice per group). Data were shown as mean ± SEM; two-way ANOVA followed by Tukey’s multiple comparisons test was performed in a (Body weight change (%)) and one-way ANOVA followed by Šídák’s multiple comparisons test was used in a (injured area (%); and b – d ). Bar size: 1000 µm in a , and 25 µm in d . (* p < 0.05, ** p < 0.01, *** p < 0.001 between groups; ## p < 0.01 vs. Bleo 7 a.m. group; &&& p < 0.001 vs. PBS 7 a.m. group). Full size image We also investigated the gene expression from mouse lungs dosed with bleomycin at different times of day (Fig. 3b ). Interestingly, the gene expression of REV-ERB α/β ( NR1D1 / NR1D2 ) was decreased after bleomycin injury during the night (dusk), whereas no significant difference was observed when dosed during the day (dawn) (Fig. 3b ). The other circadian genes inhibited by REV-ERB α/β , such as BMAL1 ( ARNTL ) and CLOCK , were differentially decreased during the day and had no change during the night (Supplementary Fig. 2 b, c ). The REV-ERBα/β competitor RORA showed decreased expression levels after bleomycin injury at either daytime or nighttime. We also observed decreased gene expression levels of PER1/2 and CRY1/2 whether bleomycin was dosed during the daytime or nighttime (Supplementary Fig. 2 b, c ). 
The gene expression of fibrotic markers such as COL1A1 , COL5A2 , FN1 , and SERPINE1 was significantly upregulated when dosed at night (Fig. 3b and Supplementary Fig. 2 b, c ). Bleomycin injury increased VIM regardless of the time of dosing (Fig. 3b ). The gene expression of COL1A2 and COL3A1 was upregulated after bleomycin dosing, with no time-of-day difference between nighttime and daytime (Supplementary Fig. 2 b, c ). The tight junction genes TJP1 and TJP3 , responsible for cell-cell interaction, showed significant downregulation when bleomycin was dosed at either day or nighttime, and TJP3 showed a further decrease with nighttime dosing (Supplementary Fig. 2 b, c ). To further identify the expression level of target genes, we detected protein expression by western blot and IHC (Fig. 3 c, d ). Similarly, higher protein expression of REV-ERBα was observed in the PBS 7 p.m. group compared to the PBS 7 a.m. group, and bleomycin injury significantly downregulated the protein abundance of REV-ERBα, whereas no change was observed between the 7 a.m. groups (PBS vs. Bleo) (Fig. 3c ). We also observed an increasing trend in the protein level of total LOX after dosing bleomycin at either 7 a.m. or 7 p.m., while activated LOX only showed an increasing trend when bleomycin was dosed at 7 p.m. (Fig. 3c ). Significantly increased protein abundances of LOXL2 and COL1A1 were observed in mice dosed with bleomycin at 7 p.m., with only a non-significant increase when dosed at 7 a.m. (Fig. 3c ). In addition to western blot, we also detected the protein abundance and localization of LOX and COL1A1 via IHC (Fig. 3d ). Mice dosed with bleomycin at 7 p.m. showed higher protein levels of COL1A1 and LOX, especially in the injured area, compared to mice dosed at 7 a.m. (Fig. 3d ). 
Rev-erbα agonist attenuated the collagen overexpression during bleomycin-induced fibrogenesis Having observed a decreased abundance of REV-ERBα after bleomycin injury, we treated mice with a Rev-erbα agonist (SR9009, 100 mg/kg, intraperitoneally (i.p.)) for 14 days to determine its protective potential against fibrotic progression (Fig. 4 and Table 1 ). During the 14 days post bleomycin injury, there was a significant reduction in body weight starting from day 1, and there was no significant difference between the bleomycin and bleomycin + SR9009 groups (Fig. 4a ). Surprisingly, only a 60% survival rate was observed in mice that received bleomycin + SR9009, while there were no deaths in the bleomycin-treated group (Fig. 4a ). From the H&E stained sections, we identified that bleomycin induced significant lung injury, and SR9009 treatment helped alleviate the injury, though not significantly (Fig. 4b ). Since we were interested in how REV-ERBα is involved in pro-fibrotic progression, we measured the gene and protein expression of fibrotic markers in the lungs (Fig. 4 c– e and Supplementary Fig. 3 ). Although the ACTA2 gene level was not significantly increased after bleomycin injury, the bleomycin + SR9009 group showed significantly reduced expression of the ACTA2 gene (Fig. 4c ). We noticed significant upregulation of collagens ( COL1A1, COL1A2, COL3A1, COL4A1, COL4A2, COL5A1 , and COL5A3 ); SR9009 treatment reduced the levels of COL1A1 , COL1A2 , and COL5A1 without reaching significance, and the gene level of COL4A1 was significantly downregulated after SR9009 treatment (Fig. 4c ). Gene expression of lysyl oxidases ( LOX , LOXL1 , LOXL2 , and LOXL4 ) was significantly increased after bleomycin injury, but SR9009 treatment did not reduce their abundances (Fig. 4c and Supplementary Fig. 3 ). 
Other ECM proteins, such as ELN and FN1 , were upregulated after bleomycin injury and SR9009 treatment helped to lower the transcript level but without a significant difference (Fig. 4c and Supplementary Fig. 3 ). As potential regulators of ECM remodeling and dysregulated repair, there were upregulated gene levels of TGFB1 , TGFBR1 , and TGFBR2 after bleomycin injury, but no difference was observed between bleomycin and bleomycin+SR9009 treatment groups (Fig. 4c and Supplementary Fig. 3 ). Based on the gene expression results, we have performed a pathway analysis. Most significantly, ECM degradation, collagen biosynthesis and modification, and ECM synthesis were activated after bleomycin injury and slightly inhibited by SR9009 treatment (Table 1 ). Hence, we focused on how protein levels of collagen were affected. Fig. 4: Rev-erbα agonist (SR9009) treatment helped to reduce the collagen overexpression occurred in bleomycin induced lung fibrosis. C57BL/6J WT mice (equal number of male and female mice) were dosed with bleomycin for 14 days, and SR9009 was given via i.p. injection at a dose of (100 mg/kg) daily. a The body weights and the survival rate was monitored until day 14 post-injury ( n = 8–12 mice per group). b Lungs were harvested, and H&E staining was performed to identify the injured area percentage ( n = 8 mice per group). c RNA was isolated, and gene expression analysis was conducted using nCounter Fibrosis panel via nCounter SPRINT Profiler, and transcripts levels were normalized and visualized by nSolver. The dysregulated genes focused on collagen dynamics and ECM remodeling were shown as a heatmap and selected gene expressions were shown as bar graphs ( n = 8 mice per group). d Proteins isolated from lung homogenates were detected via western blot (COL1A1, COL4A1, LOXL2, and Activated LOX), represented blots were shown and fold change was normalized to β-ACTIN ( n = 8 mice per group). 
e Lung sections were stained for COL1A1 and COL4A1 via IHC, and the abundance and localization were determined by ImageJ ( n = 8 mice per group). Data were shown as mean ± SEM; multiple unpaired t -tests were used for a , and one-way ANOVA followed by Šídák’s multiple comparisons test was used in b – d . Bar size: 1000 µm in b and e , ×4 magnification, and 50 µm in e , ×20 magnification. (* p < 0.05, ** p < 0.01, *** p < 0.001 vs. PBS group; # p < 0.05, ### p < 0.001 vs. Bleo group). Full size image Table 1 Dysregulated pathways after bleomycin injury with or without SR9009 in C57 mice Full size table Since we observed decreased transcript levels of multiple collagens in the bleomycin + SR9009 group compared to the bleomycin group, we also tested the protein abundances of COL1A1, COL4A1, LOX, and LOXL2 (Fig. 4 d, e ). Similarly, we observed increased protein levels of COL1A1 and COL4A1 after bleomycin injury, with non-significant decreasing trends after SR9009 injection (Fig. 4 d, e ). Interestingly, we noticed a decreasing trend of activated LOX and a significantly decreased level of LOXL2 in the bleomycin + SR9009 group compared to the bleomycin group (Fig. 4d ). From IHC staining, we observed overexpression of COL1A1 and COL4A1 in the injured sections from both the bleomycin group and the bleomycin + SR9009 group. However, the positive distribution of COL1A1 and COL4A1 was reduced by SR9009 injection, and abundances of both collagens were slightly decreased in the bleomycin + SR9009 group compared to the bleomycin group (Fig. 4e ). There was no significant sex-dependent difference between the bleomycin and bleomycin + SR9009 groups, hence we combined male and female mice for further analysis. 
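The western blot quantifications above repeatedly report fold change "normalized to β-ACTIN". A minimal sketch of that densitometry arithmetic follows; the band intensities are invented for illustration, and the authors' actual quantification workflow may differ in detail:

```python
import statistics

def fold_changes(target, loading, control_idx):
    """Densitometry fold change for western blots.

    Each target band intensity is divided by its lane's loading control
    (here, beta-ACTIN), then expressed relative to the mean normalized
    signal of the control group (e.g., PBS-treated mice).
    """
    norm = [t / l for t, l in zip(target, loading)]
    baseline = statistics.mean(norm[i] for i in control_idx)
    return [n / baseline for n in norm]

# Lanes 0-2: PBS controls; lanes 3-5: bleomycin-treated (hypothetical values)
col1a1 = [1.0, 1.2, 0.8, 2.4, 3.0, 2.6]
actin  = [1.0, 1.0, 1.0, 1.0, 1.2, 1.0]
print(fold_changes(col1a1, actin, control_idx=[0, 1, 2]))
```

Normalizing to the loading control corrects for unequal protein loading per lane, and dividing by the control-group mean makes the PBS group average 1.0 by construction.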
Rev-erbα deficiency exacerbated IAV-induced lung injury To directly understand the role of Rev-erbα in pulmonary fibrogenesis and lung injury, we infected WT and Rev-erbα Het mice with IAV (10³ PFU) for 15 days to induce lung injury and fibrotic responses (Fig. 5). At 6–10 days post infection (p.i.), we found that IAV-induced weight loss was exacerbated in Rev-erbα Het mice compared with WT (Fig. 5a). We also monitored locomotor activity after IAV infection and observed reduced ambulatory counts during the nighttime at 5–9 days p.i. Locomotor activity showed no change during the daytime, and there was no significant difference between WT and Rev-erbα Het mice (Supplementary Fig. 4). At 15 days p.i., we collected serum to detect IAV-specific antibody (IgG2a and IgA) levels. Both WT and Rev-erbα Het mice infected with IAV showed detectable levels of IgG2a and IgA in serum, and Rev-erbα Het mice showed higher levels of IgG2a and IgA compared to WT mice (Fig. 5a), most likely because of a higher level of infection. We also examined viral replication at 2 and 4 days p.i. There was a significant increase in viral titer at 4 days p.i. compared to 2 days p.i., while there was no significant difference in viral harboring and replication in the lungs between WT and Rev-erbα Het mice (Supplementary Fig. 5). Fig. 5: IAV-induced lung injury and profibrotic responses were exacerbated in Rev-erbα Het mice compared to WT mice. WT and Rev-erbα Het mice were infected (10³ PFU/mouse) with IAV or given PBS control for 15 days. a Body weights were monitored during infection, and virus-specific antibodies in serum were detected by ELISA (n = 5–19 mice per group; *p < 0.05, **p < 0.01, ***p < 0.001 vs. IAV-infected WT mice). b At sacrifice, lung mechanics (resistance, compliance, and elastance) were measured (n = 3–4 mice per group). c H&E-stained lung sections were used to analyze the injured area induced by IAV infection.
Regions within the black squares are shown at ×20 magnification (n = 4–6 mice per group). Data are shown as mean ± SEM; two-way ANOVA followed by Tukey's multiple comparisons test was performed in a (body weight change (%)), multiple unpaired t-tests were used in a (virus-specific antibody titers), one-way ANOVA followed by Šídák's multiple comparisons test was used in b, c, and unpaired two-sided t-tests were used in b (resistance, IAV-WT vs. IAV Rev-erbα Het; elastance, PBS-WT vs. IAV-WT; compliance, PBS-WT vs. IAV-WT and IAV-WT vs. IAV Rev-erbα Het). Bar size: 1000 µm in c (×4 magnification), and 50 µm in c (×20 magnification). (*p < 0.05, **p < 0.01, ***p < 0.001 between groups; #p < 0.05, ##p < 0.01 vs. IAV-infected WT mice). Full size image Furthermore, we determined lung mechanical properties (airway resistance, elastance, and compliance). We observed increased resistance and elastance, as well as decreased compliance, after IAV infection compared with PBS control in both WT and Rev-erbα Het mice. Intriguingly, Rev-erbα Het mice exhibited increased resistance and elastance and decreased compliance compared with WT mice in response to IAV infection (Fig. 5b). H&E-stained lung sections showed that IAV infection induced dramatic lung injury with scarring progression in the alveoli of both WT and Rev-erbα Het mice. More importantly, larger injury areas were observed in Rev-erbα Het mice infected with IAV compared with IAV-infected WT mice (Fig. 5c). Rev-erbα deficiency aggravated dysregulated gene expression during IAV-induced fibrogenesis After sacrificing the mice at 15 days p.i., we collected lung tissues for RNA expression analysis (Figs. 6 and 7). At the gene transcript level, the most significantly dysregulated genes due to Rev-erbα knockdown were downregulated. A significant number of genes were dysregulated upon IAV infection in both WT and Rev-erbα Het mice.
Intriguingly, IAV infection in Rev-erbα Het mice led to upregulation of most of the significantly dysregulated gene transcripts compared to IAV-infected WT mice. This suggests that the altered gene expression was not due to genotype differences alone (where most dysregulated genes were decreased), but that Rev-erbα affects specific gene expression during IAV-induced lung injury (Fig. 6a). Using the cutoff filters applied to the volcano plots (at least a 10% fold change with p value < 0.05), we also analyzed the gene clusters via Venn diagram analysis. Compared to WT mice treated with PBS, a total of 67 genes were dysregulated because of the genotype difference (vs. the Rev-erbα PBS group) (Fig. 6b). In WT mice infected with IAV, 486 genes were significantly altered, and 430 genes were similarly altered in both WT and Rev-erbα Het mice. Intriguingly, 71 genes showed a significant difference only in Rev-erbα Het mice infected with IAV, with no change in WT mice, and 56 genes showed a significant difference only in WT mice, with no change in Rev-erbα Het mice infected with IAV (Fig. 6b). By comparing IAV vs. PBS within the same genotype (WT IAV vs. WT PBS, and Rev-erbα Het IAV vs. Rev-erbα Het PBS), a total of 414 genes were commonly dysregulated upon IAV infection (Fig. 6c). Specifically, 121 genes were statistically dysregulated only in Rev-erbα Het mice infected with IAV, and 72 genes showed a significant difference only in the WT groups (PBS vs. IAV). The detailed gene lists corresponding to each comparison are provided in Supplementary Data 1. Based on the dysregulated gene lists, we identified pathways modified by IAV and Rev-erbα, which included collagen dynamics, EMT, TGFβ, myofibroblast regulation, and M1/M2 macrophage activation. These pathways were upregulated after IAV infection and further exacerbated when Rev-erbα was diminished (Table 2).
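The cutoff filtering and Venn-style overlap analysis above can be sketched as simple set operations. This is a toy illustration, not the NanoString/ROSALIND pipeline: the gene names, fold changes, and p values below are hypothetical, and the filter mirrors the stated thresholds (at least a 10% change, p < 0.05).

```python
def dysregulated(results, fc_cutoff=0.10, p_cutoff=0.05):
    """Return the set of genes passing the volcano-plot filters:
    at least a 10% change (up or down) relative to control and p < 0.05."""
    return {
        gene
        for gene, (fold_change, p_value) in results.items()
        if abs(fold_change - 1.0) >= fc_cutoff and p_value < p_cutoff
    }

# Toy per-gene results: gene -> (fold change vs. control, p value)
wt_iav = {"COL1A1": (1.8, 0.001), "LOX": (1.05, 0.30), "MMP2": (1.4, 0.02)}
het_iav = {"COL1A1": (2.5, 0.0005), "LOX": (1.6, 0.01), "MMP2": (1.7, 0.004)}

wt_set, het_set = dysregulated(wt_iav), dysregulated(het_iav)
shared = wt_set & het_set    # altered in both genotypes (Venn overlap)
het_only = het_set - wt_set  # significant only when Rev-erbα is reduced
```

In this toy example, LOX falls in `het_only`: it misses the fold-change cutoff in WT but passes both filters in the Het data, mirroring how genes like the lysyl oxidases emerged only in the Rev-erbα Het IAV comparison.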
Additionally, collagen biosynthesis and modification, ECM degradation, and ECM synthesis were among the most upregulated pathways in Rev-erbα Het IAV-infected mice compared to IAV-infected WT mice (Table 2). Hence, we focused our study on the alteration of specific genes/proteins related to collagen biosynthesis, modification, and degradation. Fig. 6: IAV infection-induced dysregulation of profibrotic gene expression was exacerbated in Rev-erbα Het mice. WT and Rev-erbα Het mice (equal numbers of male and female mice) were dosed with IAV (10³ PFU) for 15 days, and lungs were homogenized for RNA isolation. Gene expression analysis was conducted using the nCounter Fibrosis Panel via the nCounter SPRINT Profiler. RNA expression was normalized and analyzed via nSolver software and the ROSALIND service. a Dysregulated gene expression between groups is shown as volcano plots; the cutoff filter is at least a 10% change (up- or downregulation) and p < 0.05. b, c Overlapping gene expression changes among groups are shown as Venn diagrams with the same cutoff used for the volcano plots. d An overview of gene expression related to collagen dynamics is shown as a heatmap, and selected gene transcript levels (collagens and lysyl oxidases) are shown separately as bar graphs. Data are shown as mean ± SEM; one-way ANOVA followed by Šídák's multiple comparisons test was used in d, and unpaired two-sided t-tests were used in d (COL1A1 PBS-WT vs. IAV-WT and COL3A1 PBS-WT vs. IAV-WT). (n = 6 mice per group; *p < 0.05, **p < 0.01, ***p < 0.001 between groups; ##p < 0.01 compared with the IAV-infected WT group). Full size image Fig. 7: IAV infection-induced dysregulation of profibrotic progression was exacerbated in Rev-erbα Het mice. WT and Rev-erbα Het mice (equal numbers of male and female mice) were infected (10³ PFU/mouse) with IAV for 15 days, and lungs were separated for RNA/protein isolation or fixed with 10% formalin for FFPE sections.
a The protein abundances of COL1A2, VIM, and activated LOX were measured by western blot. Representative blot images are shown. Different targets were run on the same membrane: COL1A2, VIM, and activated LOX were probed on the same membrane, and β-ACTIN was used as an endogenous control (n = 5–6 mice per group). b The localization of COL1A1 and LOX was determined by immunohistochemical staining, and red arrows indicate the regions of interest. The positive staining area was calculated via ImageJ (n = 4–6 mice per group). c RNA isolated from lung homogenates was used to measure gene expression (COL1A1, FN1, TJP1, and TGFB1) via qRT-PCR, and GAPDH was used as an endogenous gene for normalization (n = 5–6 mice per group). Data are shown as mean ± SEM; one-way ANOVA followed by Šídák's multiple comparisons test was used in a–c. Bar size: 50 µm in b. (n = 4–6; *p < 0.05, **p < 0.01, ***p < 0.001 between groups; #p < 0.05, ##p < 0.01 compared with the IAV-infected WT group). Full size image Table 2 Dysregulated pathways after IAV infection in both WT and Rev-erbα Het mice Full size table Absence of Rev-erbα exacerbated activated collagen stabilization and modification during IAV-induced fibrogenesis To further determine the role of Rev-erbα in collagen dynamics, we measured gene expression related to collagen modification, ECM markers, matrix metalloproteinases (MMPs), and TGFβ pathways (Fig. 6d and Supplementary Fig. 6). We noticed that collagens were significantly upregulated at the gene expression level after IAV infection in both WT and Rev-erbα Het mice. In particular, COL1A1, COL1A2, COL3A1, and COL5A1 were significantly increased in IAV-infected Rev-erbα Het mice compared with the WT IAV group (Fig. 6d and Supplementary Fig. 6a). Interestingly, we observed decreased COL14A1 and COL16A1 in IAV-infected mice, but there was no difference between the WT IAV group and the Rev-erbα Het IAV group (Fig. 6d).
We also noticed that lysyl oxidases (LOX, LOXL1, and LOXL2) were upregulated only in IAV-infected Rev-erbα Het mice compared to PBS-treated Rev-erbα Het mice, with no significant difference between IAV-infected and PBS-treated WT mice (Fig. 6d). In addition, we noticed upregulation of other ECM-related genes, such as FN1, ELN, VIM, ITGA4, ITGA9, and HSPG2, whereas genes responsible for focal adhesion, such as LAMA3 and OCLN, were downregulated after IAV infection in either genotype, with no significant difference between IAV-infected WT and Rev-erbα Het mice (Fig. 6d and Supplementary Fig. 6a). One of the key genetic pathways activated during fibrotic progression is the TGFβ pathway, and we found increased activation of TGFβ signaling following IAV infection, with increased TGFB1, TGFB1I1, TGFBR1, TGFBR2, SMAD2, SMAD3, and SMAD4. However, there was no significant difference between the IAV WT and IAV Rev-erbα Het groups (Supplementary Fig. 6b). Since we observed increased collagen abundance, we also determined the expression of the related collagenases, MMPs (Supplementary Fig. 6c). Increased gene expression of MMP2, MMP12, and MMP14 was observed only in Rev-erbα Het mice infected with IAV compared with PBS in the same genotype. The gene transcript levels of MMP2 and MMP14 showed a significant increase in Rev-erbα Het mice infected with IAV compared to WT mice infected with IAV (Supplementary Fig. 6c). Other MMPs, such as MMP9, MMP8, and MMP3, showed decreased gene transcript levels upon IAV infection, with no difference between the two genotypes. Among the MMP inhibitors, TIMPs, TIMP2 was increased in the Rev-erbα Het IAV group compared to the Rev-erbα Het PBS group (Supplementary Fig. 6c).
Lack of Rev-erbα augments collagen overexpression during IAV-induced fibrogenesis Since we observed that type 1 collagen and lysyl oxidases were upregulated at the gene transcript level, we also tested protein abundance and localization (Fig. 7). Overall, in lung homogenates, we found an increasing trend of type 1 collagen (COL1A2) without statistical significance. Meanwhile, we noticed a significant increase in LOX and VIM in Rev-erbα Het mice infected with IAV compared to either the Rev-erbα Het PBS group or WT mice infected with IAV (Fig. 7a). Further, we examined the protein abundance and localization of COL1A1 and LOX by IHC staining (Fig. 7b). In the PBS-treated groups, whether WT or Rev-erbα Het, COL1A1 was distributed around the small airways and bronchi. Upon IAV infection, COL1A1 was augmented in the injured area, mainly around the alveoli; no COL1A1 was observed in alveoli in the PBS-treated groups (Fig. 7b). For LOX, a relatively low level was observed in IAV-infected WT mice, while LOX abundance was increased in Rev-erbα Het mice infected with IAV, primarily localized to the injured area (Fig. 7b). Since lysyl oxidase is responsible for collagen stabilization via crosslinking of collagen fibers, the co-localization of LOX and collagen in areas of injury was observed as expected (Fig. 7b, red arrows). We also applied qRT-PCR to detect gene expression fold changes and noticed a trend similar to the NanoString analysis (Fig. 7c). The gene transcript levels of COL1A1, FN1, and TGFB1 showed an increasing trend after IAV infection, and Rev-erbα knockdown exacerbated the upregulation. The gene expression of TJP1 was decreased in the WT group with IAV infection (Fig. 7c).
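The qRT-PCR fold changes reported here, normalized to an endogenous control gene as in Fig. 7c, are conventionally computed by the comparative Ct (2^-ΔΔCt) method. A minimal sketch, assuming GAPDH as the endogenous control; the Ct values below are illustrative, not measured data:

```python
def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression of a target gene vs. the control group
    by the comparative Ct (2^-ΔΔCt) method."""
    d_ct = ct_target - ct_gapdh                  # normalize to GAPDH in the sample
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl   # same normalization in the control
    dd_ct = d_ct - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier after
# treatment with GAPDH unchanged, i.e., ~4-fold upregulation.
print(fold_change(ct_target=24.0, ct_gapdh=18.0,
                  ct_target_ctrl=26.0, ct_gapdh_ctrl=18.0))  # 4.0
```

Because Ct is a log2-scale quantity, each cycle of earlier amplification doubles the inferred expression, which is why a 2-cycle shift yields a 4-fold change.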
Rev-erbα agonist attenuated TGFβ-induced abnormal collagen stabilization and fibrotic responses in lung fibroblasts To determine the role of Rev-erbα in abnormal collagen modification via lysyl oxidase, we treated primary adult human lung fibroblasts (HLF) and human fetal lung fibroblasts (HFL1) with TGFβ (2 ng/ml) with or without Rev-erbα agonist (GSK4112, 20 μM) or antagonist (SR8278, 20 μM) for 2 days (Fig. 8 and Supplementary Figs. 7 and 8). Fig. 8: Rev-erbα agonist inhibits TGFβ-induced fibroblast differentiation and antagonist exacerbates it. Human primary lung fibroblasts were treated with TGFβ (2 ng/ml) with or without Rev-erbα agonist (GSK4112, 20 µM) or antagonist (SR8278, 20 µM) for 2 days. a Protein was isolated for western blot analysis (αSMA, COL1A1, LOX, and fibronectin (FN)). Representative blots are shown with densitometry analysis (n = 3–4 cells per group). b Immunofluorescence staining showed the distribution and protein abundance of COL1A1 and αSMA; DAPI was used for nuclear staining (×20). Relative fluorescence intensity was calculated in ImageJ as fluorescence intensity per cell (n = 4 cells per group). c RNA was isolated for gene expression measurement via qPCR (ACTA2, COL1A1, COL4A1, FN1, LOX, LOXL1, LOXL2, and NR1D1). GAPDH was used as an endogenous control for RNA and protein fold change normalization (n = 4 cells per group). Data are shown as mean ± SEM; one-way ANOVA followed by Šídák's multiple comparisons test was used in a–c, and unpaired two-sided t-tests were used in a (COL1A1 Ctrl vs. TGFβ, LOX TGFβ vs. TGFβ + GSK4112, FN TGFβ vs. TGFβ + SR8278). d Schematic demonstrating how Rev-erbα agonist and antagonist regulate ECM deposition induced by TGFβ in lung fibroblasts; the schematic was created with BioRender.com. Bar size: 50 µm in b. (*p < 0.05, **p < 0.01, ***p < 0.001 vs. Ctrl group; #p < 0.05, ###p < 0.001 vs. TGFβ group).
Full size image We previously found that the Rev-erbα agonist GSK4112 inhibited myofibroblast differentiation induced by TGFβ 27. Here we found that TGFβ-induced myofibroblast differentiation was inhibited by GSK4112 and exacerbated by SR8278 (Fig. 8). Significantly, GSK4112 inhibited the TGFβ-induced overexpression of αSMA and COL1A1 at the protein level (Fig. 8a, b), and the TGFβ-upregulated gene levels of ACTA2, COL1A1, FN1, and LOX were inhibited by GSK4112 treatment (Fig. 8c). In addition, SR8278 treatment exacerbated the TGFβ-upregulated COL1A1 and FN at the protein level (Fig. 8a) and augmented the TGFβ-increased transcript levels of COL1A1, COL4A1, FN1, LOX, and LOXL2 (Fig. 8c). Interestingly, we found that both GSK4112 and SR8278 increased the gene level of NR1D1 (Fig. 8c). Based on our results, we concluded that Rev-erbα agonist attenuates TGFβ-induced fibroblast differentiation and collagen overexpression, while Rev-erbα antagonist exacerbates them (Fig. 8d). We also tested our hypothesis in HFL1 cells (Supplementary Figs. 7 and 8). GSK4112 treatment significantly increased the gene transcript level of NR1D1, while SR8278 produced no change in HFL1 (Supplementary Fig. 7). In addition, GSK4112 inhibited TGFβ-induced ACTA2 and slightly decreased the gene expression of COL1A1 and FN1, without a significant difference in the TGFβ + GSK4112 group compared to TGFβ treatment alone (Supplementary Fig. 7). Intriguingly, we also found that GSK4112 alleviated the upregulated gene expression of lysyl oxidases (LOX, LOXL1, and LOXL2) (Supplementary Fig. 7). We also measured the protein abundances of COL1A1 and LOX. Similarly, GSK4112 suppressed the upregulated protein level of LOX induced by TGFβ.
In contrast to the gene expression results, TGFβ-induced COL1A1 protein was significantly inhibited by GSK4112 treatment, and the protein fibers overexpressed by TGFβ were also significantly repressed by GSK4112 (Supplementary Fig. 8). Treatment with the Rev-erbα antagonist SR8278 showed no significant effects on TGFβ-induced fibroblast differentiation or collagen stabilization in HFL1 (Supplementary Fig. 8). Rev-erbα agonist and antagonist exacerbated TGFβ-induced epithelial-mesenchymal transition (EMT) in lung epithelium We also treated primary human small airway epithelial cells (SAEC) and a human bronchial epithelial cell line (BEAS-2B) with TGFβ (2 ng/ml) with or without Rev-erbα agonist (GSK4112, 20 μM) or antagonist (SR8278, 20 μM) for 2 days (Supplementary Figs. 9 and 10). SAEC treated with TGFβ showed an activated EMT tendency, with increased VIM, LOXL2, and COL1A1 and decreased CDH1, TJP1, and OCLN. Treatment with GSK4112 or SR8278 exacerbated the gene dysregulation of both epithelial and mesenchymal markers (Supplementary Fig. 9a). Protein levels of COL1A1 and VIM were upregulated by TGFβ, and the upregulation was prevented by GSK4112 (Supplementary Fig. 9b). Similar to HLF, both agonist and antagonist treatment increased the gene expression of NR1D1. In BEAS-2B, increased gene levels of COL1A1 and FN1 were noticed after TGFβ treatment; GSK4112 attenuated the upregulation, whereas SR8278 exacerbated the gene level of FN1 significantly and that of COL1A1 without significance (Supplementary Fig. 10). A significantly increased LOX gene level was observed after GSK4112 or SR8278 treatment compared to the TGFβ group. Either GSK4112 or SR8278 inhibited the TGFβ-induced increase in LOXL1. TGFβ inhibited the gene expression of LOXL2 and ACTA2, and SR8278 treatment eliminated the downregulation, whereas GSK4112 had no effect (Supplementary Fig. 10).
Discussion Pulmonary fibrosis is a lethal chronic lung disease without effective therapeutic options, and the pathogenesis of fibrogenesis remains unclear 6, 33, 34. Recent studies have demonstrated a novel role of the circadian molecular clock in the pathobiology of chronic lung diseases and highlighted the potential for circadian clock-based therapeutics 23, 35. Targeting specific circadian clock genes has shown anti-fibrotic potential in vitro in cells and in vivo in mouse models of lung injury 8, 36, 37, 38. In previous studies, Rev-erbα deficiency exacerbated EMT induced by CS and fibrogenesis induced by bleomycin, and Rev-erbα agonist inhibited fibroblast differentiation induced by TGFβ 8, 27. In this study, we characterized Rev-erbα abundance histologically in human IPF patients as well as in a bleomycin mouse model, and we found decreased REV-ERBα protein abundance, especially in IPF lesion areas and in bleomycin-induced fibrogenesis. Based on our results, the lower protein abundance of REV-ERBα in the healthy-appearing portions of IPF lungs could promote the progression of fibrogenesis toward a lesion phenotype. Since the abundance of REV-ERBα oscillates in a circadian manner, mice dosed with bleomycin when REV-ERBα expression naturally starts to decrease (dark phase/nighttime) exhibited higher mortality and exacerbated fibrotic progression compared with those dosed in the daytime. We administered the Rev-erbα agonist SR9009 to mice treated with bleomycin and noticed that SR9009 injection helped attenuate the collagen overexpression during bleomycin-induced fibrogenesis. We also analyzed whether diminished REV-ERBα exacerbated the fibrotic progression induced by IAV infection. Our results show that Rev-erbα regulated collagen stabilization via lysyl oxidase, and its agonist prevented TGFβ-induced overexpression of collagen.
The circadian clock molecules RORα (a nuclear receptor), REV-ERBα, BMAL1, and CLOCK have been implicated in the crosstalk between inflammation and lung tissue injury 19, 22, 39, 40. The core circadian molecules BMAL1 and CLOCK form a heterodimer that binds to the E-box and subsequently promotes the expression of Rev-erbα. Rev-erbα binds to the RORE to prevent the expression of BMAL1 and CLOCK, while RORα activates their expression. Both the RORE and the E-box are associated with EMT 41, 42, which is initiated at the early stage of fibrosis. Hence, the regulators of the RORE and E-box (RORα, REV-ERBα, BMAL1, and CLOCK) are equally critical in fibrogenesis. In our results, we noticed that the gene level of BMAL1 (ARNTL) in the bleomycin model depended on the time of dosing: decreased BMAL1 was observed when dosed during the day, while an increasing trend of BMAL1 transcript level was observed when treatment occurred during the nighttime. Similar time-dependent changes also occurred with CLOCK expression. Upregulated BMAL1 was identified in fibrotic mouse lungs induced by TGFβ transfection, and BMAL1 silencing helped to inhibit the fibrotic progression induced by TGFβ in the lung epithelium 36. In the same report, TGFβ transfected into mouse lungs decreased the gene level of REV-ERBα, and REV-ERBα was also shown to be inhibited during fibrotic progression 36. These published results agree with our data showing decreased REV-ERBα and increased BMAL1 expression in bleomycin-induced fibrosis. Bleomycin-induced downregulation of REV-ERBα and increased BMAL1 during the night might be one of the reasons for fibrotic progression and exacerbation. Interestingly, the gene expression of CLOCK showed alterations very similar to those of BMAL1. It is known that CLOCK disruption exacerbates fibrotic progression 37, which partially agrees with our night-dosing data showing lower CLOCK expression during the night.
In the same study, bleomycin dosing at night resulted in more collagen deposition in the injured area, which supports our results 37. Another study demonstrated that IAV infection occurring during the night caused greater body weight loss, higher mortality, and more severe tissue injury 22. Our data also indicate that strategies targeting BMAL1 or CLOCK might need to take dosing time into account, as inhibition or activation of BMAL1 or CLOCK might be time-dependent. Our results showed decreased REV-ERBα after bleomycin injury during the nighttime. The expression of REV-ERBα starts to decrease naturally during the night (dusk), and dosing with bleomycin at this time significantly decreased REV-ERBα levels further, which could dampen the basal expression of REV-ERBα and result in worse fibrotic phenotypes and health status. Since Rev-erbα is a key component of the circadian molecular clock with rhythmic expression 21, the level of REV-ERBα starting to decrease from 6 p.m. could result in less protection against bleomycin-induced lung inflammation in mice dosed at ZT13, while the rising oscillation of REV-ERBα during the day could attenuate the inflammatory response caused by bleomycin dosed at ZT1. As mentioned before, dosing with bleomycin at night exacerbated collagen deposition in the lungs, which agrees with our gene and protein expression results 37. Surprisingly, we observed augmented protein expression of LOX in the injured lung area when dosed at 7 p.m. compared to the 7 a.m. group; LOX is responsible for crosslinking collagen fibers to prevent the degradation of collagens 43, 44. To measure REV-ERBα expression in IPF patients, we stained for REV-ERBα in pulmonary fibrotic lesion areas. We observed a decreased abundance of REV-ERBα, especially in the lesion area, while REV-ERBα was fully expressed in the healthy samples.
Currently, limited studies directly report the expression levels of REV-ERBα (NR1D1) in IPF patients or in bleomycin-induced fibrosis 45, 46. A single-cell RNA sequencing comparison between IPF and healthy patients identified significant downregulation of NR1D1 in ATII cells of IPF patients compared to healthy controls 45. Our results showed significantly decreased REV-ERBα protein abundance, especially within the injured areas, which partially agrees with that study. Another study reported that the REV-ERBα mRNA level was decreased in bleomycin-induced lung fibrosis in young mice, as well as in naturally aged mouse lungs 46. Our results show decreased REV-ERBα after bleomycin injury. Moreover, our study shows that bleomycin-induced downregulation of REV-ERBα occurs only during the nighttime; the level of REV-ERBα was unchanged when dosing occurred in the daytime. Decreased REV-ERBα has been reported as a cause of exacerbated fibrotic progression in both mouse and human lung fibroblasts 8. Our results and previous publications suggest that REV-ERBα is inhibited during fibrogenesis and that decreased REV-ERBα, whether by transgenic methods or natural circadian oscillation, exacerbates fibrotic progression and worsens lung injury. We administered the Rev-erbα agonist SR9009 to mice dosed with bleomycin and tested its protective effect against fibrosis. We did not observe any difference in body weight decline between bleomycin alone and bleomycin with SR9009; however, we observed a lower survival rate in mice that received SR9009 after bleomycin injury. It has been shown that Rev-erbα agonists can increase body weight loss and fat mass loss 12. Moreover, SR9009 has been shown to decrease cell viability and dysregulate cellular metabolism 47. Clinical reports describe that body weight loss and lower body mass index can worsen IPF progression and even lower survival probability 48, 49.
SR9009-accelerated body weight loss could be one of the reasons for the higher death rate in the bleomycin + SR9009 group compared to the bleomycin group; however, other side effects of SR9009 might also contribute to the cause of death. More detailed studies should be conducted to understand the molecular mechanisms of the off-target effects of SR9009 during fibrogenesis. Despite the side effects of SR9009, injection of the agonist helped inhibit collagen content at the gene and protein levels, which agrees with our results from the cell model. Our results and others show that SR9009 had specificity in regulating collagen overexpression and helped to prevent fibrogenesis, while its side effects need further investigation before pre-clinical trials. Previously, we showed that Rev-erbα was associated with fibrotic responses during IAV infection in Rev-erbα Het mice, which led to fibrogenesis. At 15 days p.i., we noticed that Rev-erbα Het mice infected with IAV showed worse health status. Exacerbated upregulation of lung elastance was observed in Rev-erbα Het mice, demonstrating that Rev-erbα deficiency functionally exacerbated fibrotic progression. To support our hypothesis, we measured multiple fibrotic markers, such as type 1/3/5 collagens and lysyl oxidases (LOX, LOXL1, and LOXL2), which were significantly upregulated by IAV only in Rev-erbα Het mice. A previous study showed that Rev-erbα knockdown could exacerbate the fibrotic response by increasing αSMA protein expression 8. Collagens are as important in pulmonary fibrosis as αSMA; both are overexpressed during fibrogenesis and induce irreversible scarring. Our results elaborate on the previous reports and demonstrate that Rev-erbα is essential in regulating αSMA and is correlated with collagen expression.
As mentioned before, Rev-erbα starts to decrease naturally during the nighttime, and IAV infection during the night has been associated with worse health outcomes in mice, with higher mortality and more severe lung injury compared to daytime infection 22. Another study reported that dosing bleomycin during the night increased collagen deposition compared to dosing during the day 37, which also concurs with our findings. Our data support previously published results and provide a possible explanation for why IAV infection at nighttime, when Rev-erbα starts to decrease, induces worse lung injury and higher mortality than during the day. Our conclusion raises the possibility that night-shift workers could be more vulnerable to environmental hazards, which could contribute to developing fibrosis. To understand the signaling pathways involved with Rev-erbα in IAV infection-induced pulmonary fibrotic responses, we analyzed directed enrichment scores to determine the related pathways. We noticed exacerbated upregulation of multiple biological processes, such as collagen biosynthesis and modification, ECM degradation and synthesis, M2 macrophage activation, myofibroblast regulation, the TGFβ pathway, and EMT. The most abnormal activation was in collagen synthesis and modification pathways, and we found exaggerated upregulation of lysyl oxidases in Rev-erbα Het mice compared with WT mice infected with IAV. Both gene and protein expression of lysyl oxidases was upregulated in Rev-erbα Het mice infected with IAV, but not in WT mice infected with IAV. Lysyl oxidases are known to stabilize collagen fibers via crosslinking, preventing collagen degradation and promoting tissue scarring 43, 44. Beyond collagen, lysyl oxidases are also responsible for crosslinking elastin, which was further upregulated in Rev-erbα Het mice infected with IAV.
Besides collagen stabilization and synthesis, we also determined the expression of the related collagenases (i.e., MMPs). We found exacerbated increases in MMP2, MMP12, and MMP14 in Rev-erbα Het mice infected with IAV. The substrates of MMP2, MMP12, and MMP14 include gelatin, type 1 and 4 collagens, and elastin 50. Upregulated MMPs could be a self-regulating mechanism for digesting the overexpressed ECM. Other MMPs, such as MMP9 and MMP8, which are responsible for digesting gelatin, collagen, and elastin, showed downregulation after IAV infection. The balance of MMPs as ECM regulators during fibrosis requires more detailed studies to understand how MMPs are involved in collagen dynamics, particularly during episodes of fibrogenesis. Our previous study showed the therapeutic potential of Rev-erbα agonist in preventing EMT induced by cigarette smoke (CS) and fibroblast differentiation induced by TGFβ 27. In this study, we found that Rev-erbα agonist treatment can prevent the abnormal collagen modification induced by TGFβ and inhibit the overexpression of collagen. A previous study showed that Rev-erbα agonist could attenuate fibrotic responses in vivo, ex vivo, and in vitro as measured by the traditional fibrotic markers ACTA2 and COL1A1 8. Our results further support the role of Rev-erbα in the fibrotic response: Rev-erbα agonist prevents the overexpression of collagens 1 and 4, lysyl oxidase, fibronectin, and αSMA, whereas Rev-erbα antagonist augments it. We found that Rev-erbα agonist treatment significantly suppressed the TGFβ-induced mRNA and protein expression of collagen. Our results further indicate that Rev-erbα involvement in fibrotic progression might occur through lysyl oxidase, which is known for stabilizing collagen content. It has been shown that SR8278 could promote myogenesis in myoblasts, but it has a very short half-life of 0.17 h 51, 52.
Similarly, we noticed that the Rev-erbα antagonist (SR8278) exacerbates myofibroblast differentiation by augmenting the expression of collagen, lysyl oxidase, and fibronectin. Surprisingly, either Rev-erbα agonist or antagonist exacerbated the EMT induced by TGFβ in SAEC; the cell-type-specific role of Rev-erbα in lung cells in vitro needs further investigation. Our results showed a decreased Rev-erbα abundance during lung fibrogenesis, that loss of Rev-erbα could exacerbate the fibrotic process induced by IAV infection via the collagen-lysyl oxidase interaction, and that pharmacological activation of Rev-erbα prevented the overexpression of collagens. Our study and others demonstrate that mice dosed with bleomycin at night show worse fibrotic progression than those dosed during the day, which might be the result of declining Rev-erbα levels 37 . Based on these findings, the circadian clock is critically involved in disease development, and night-shift workers could face a higher risk of fibrotic disease development 22 ; targeting the clock molecule Rev-erbα might be one potential therapeutic strategy to overcome this risk. The current FDA-approved anti-fibrosis drugs, Nintedanib and Pirfenidone, do not target the lysyl oxidase-mediated collagen stabilization 53 , 54 . Our findings suggest that Rev-erbα agonists possess great potential for protecting against fibrogenesis by destabilizing collagen fibers. Currently, one drug, PXS-5505 (a pan-lysyl oxidase inhibitor), is in phase 1 clinical trials for myelofibrosis 55 . Rev-erbα agonists also hold promise for treating pulmonary fibrosis, but the first-generation agonist GSK4112 has poor pharmaceutical properties, and new Rev-erbα ligands are needed 12 , 51 . Based on the chemical structure of GSK4112, new agonists have been designed and are currently available, such as SR9009, SR9011, GSK2945, SR12418, GSK2667, and GSK5072 12 . 
We have shown that daily SR9009 injection can prevent the EMT induced in lungs by 10 days of CS exposure 27 . Moreover, SR9009 attenuated liver fibrosis in mice and inhibited collagen expression 56 . Other agonists, such as SR12418, GSK5072, and GSK2667, have been shown to inhibit inflammatory responses in THP1 cells 57 , 58 . Although Rev-erbα agonists have disadvantages in in vivo models, such as short half-lives and off-target effects 47 , 51 , numerous reports, including this study, demonstrate the fundamental role of Rev-erbα in lung injury, and Rev-erbα agonists can prevent lung inflammation and injury induced by CS or IAV infection. A detailed study of the anti-fibrotic properties of different Rev-erbα agonists is needed to identify an agonist with pharmaceutical characteristics suitable for in vivo studies, and eventually clinical trials. Overall, Rev-erbα abundance decreased during fibrotic progression, and naturally reduced Rev-erbα exacerbated fibrogenesis. Rev-erbα deficiency exaggerated the fibrotic responses and lung injury induced by IAV infection, and Rev-erbα was involved in the activation of collagen stabilization via lysyl oxidase during the fibrotic progression caused by IAV. Treatment with a Rev-erbα agonist can prevent the induction of collagen-lysyl oxidase interactions and stabilization. Our results support the fundamental role of Rev-erbα in fibrogenesis. Rev-erbα agonists offer promising potential for preventing collagen overexpression and may help break down collagen fibers by inhibiting lysyl oxidase overexpression. Investigating other circadian clock molecules in fibrogenic progression might help us understand the molecular mechanism as well as discover novel therapeutic targets for treating pulmonary fibrosis. 
Methods Ethical approval The experiments performed in this study were approved by the Animal Research Committee of the University of Rochester and the University of Rochester Institutional Biosafety Committee, and complied with the ethical standards of the United States Animal Welfare Act and the NIH. Human lung tissue slides declaration Human lung samples (formalin-fixed paraffin-embedded (FFPE) blocks, from both normal and IPF patients) were purchased from Origene (OriGene Technologies Inc). The detailed patient information and Sample/Label IDs are listed in Supplementary Table 1 . Lung sections of 5 μm thickness were prepared from the FFPE blocks using a microtome and used for immunohistochemistry (IHC) staining. Animals and treatments Rev-erbα global heterozygous (Rev-erbα Het) mice (male and female, 2–4 months old) were purchased from Jackson Laboratory (Strain #:018447), and adult C57BL/6 wild-type (WT) mice (male and female, 2–4 months old) were bred in the vivarium at the University of Rochester Medical Center. Before treatment, mice were transferred to the inhalation core facility and allowed a 1-week acclimatization period. The mice were housed on a 12/12 h light–dark cycle with ad libitum access to water and food. WT C57BL/6 mice were used for bleomycin dosing. Mice received 1.5 units/kg bleomycin (Cat#1076308, Sigma) delivered by oropharyngeal inhalation after anesthetization with isoflurane, for a 14-day treatment course. During the 14 days of dosing, Rev-erbα agonist (SR9009, Cat#554276, Sigma) was injected intraperitoneally (i.p.) between 11 a.m. and 12 p.m. every day at a dosage of 100 mg/kg body weight. SR9009 was prepared in 15% Kolliphor EL (Cat#: C5135, Sigma) as described previously 27 . The mice were sacrificed 14 days post-dosing, and the lungs were snap-frozen for further analysis. 
For IAV dosing, mice were anesthetized with isoflurane, and a total of 10 3 plaque-forming units (PFU)/mouse of influenza A/Puerto Rico 8/1934 H1N1 virus (PR8) was given intranasally 59 . A total of 3 female mice were housed individually, with ad libitum food and water, in special cages with a running wheel connected to an automatic counter. Mice were accommodated in the wheel-running cages for 1 week, and the counters were adjusted during this week. Locomotor activity was monitored from day 0 to day 14, and the mice were sacrificed on day 15 post-infection (p.i.). During the 15 days of infection, body weights were monitored daily. A separate group of mice was placed in cages with the running wheel assembled and connected to an automatic counter; each cage housed one mouse with access to regular water and food, and locomotor activity was recorded during the 14 days of infection. At sacrifice, mice were anesthetized with pentobarbital (100 mg/kg) via i.p. injection. Lung function parameters (resistance, compliance, and elastance) were measured during sacrifice via the Flexivent FX1 Legacy system (Scireq) following the manufacturer’s instructions. Each measurement was performed 3 times per animal. Mouse lungs were also inflated with 1% low-melting agarose and fixed with 10% formalin overnight for histological staining. Bleomycin and IAV dosing was regularly conducted between 11 a.m. and 1 p.m., and mice were sacrificed at a similar time of day. Time-of-day bleomycin dosing (7 a.m. and 7 p.m.) was performed in C57BL/6 female mice, and mice were sacrificed at the same time of day as their respective dosing. Body weight was monitored over the 14 days. Another group of mice was dosed with an equal volume of PBS as the control group. 
Viral titer in lungs and IgG2a and IgA in serum measurement Mice were sacrificed at 2 and 4 days p.i., and lungs were collected and snap-frozen to prepare lung homogenates for viral titer measurement according to our previous publication 60 . Mice were sacrificed at 15 days p.i., and whole blood was collected through the posterior vena cava. Serum was separated from whole blood by centrifugation (12,000 × g , 10 min at room temperature). The IAV-specific IgG2a and IgA antibodies in serum were determined by ELISA via serial dilution as described in our previous publication 60 . Cell culture and treatment Primary human lung fibroblasts (Cat# CC-2512) and small airway epithelial cells (SAEC) (Cat# CC-2547) were purchased from Lonza. Lung fibroblasts were cultured in FGM-2 Fibroblast Growth Medium (Cat# CC-3132), and SAEC were cultured in SABM Small Airway Epithelial Cell Growth Basal Medium (Cat# CC-3119). Cells were seeded into 6-well plates and treated with 2 ng/ml TGF-β with or without 20 μM GSK4112 (Cat#: 3663; TOCRIS) or SR8278 (Cat#: S9576, Sigma) for 2 days. Human fetal lung fibroblast (HFL-1, Cat#: CCL-153) and human bronchial epithelial (BEAS-2B, Cat#: CRL-9609) cells were purchased from the American Type Culture Collection (ATCC) and stored in liquid nitrogen. The cells were thawed and cultured in DMEM/F12K medium (Cat#:113-20033; Thermo Fisher Scientific) with 1% Penicillin-Streptomycin-Glutamine (Cat#: 103-78016; Thermo Fisher Scientific) and 10% FBS (Cat#: 10082147; Thermo Fisher Scientific) for HFL-1, and with 1% Penicillin-Streptomycin-Glutamine and 5% FBS for BEAS-2B. Cells were maintained under 5% CO 2 and 95% humidity. Before treatment, HFL-1 cells were starved in serum-free DMEM/F12K medium for 12 h, and BEAS-2B cells were serum-deprived in DMEM/F12K medium with 1% FBS. Then, the cells were treated with 2 ng/ml TGF-β with or without 20 μM GSK4112 (Cat#: 3663; TOCRIS) or SR8278 (Cat#: S9576, Sigma) for 2 days. 
After treatment, the cells were either lysed for protein/RNA quantification or fixed with 4% paraformaldehyde for immunofluorescence staining. RNA isolation and qRT-PCR Frozen lungs or cells were homogenized and lysed in QIAzol reagent (Cat#:79306, Qiagen) and mixed with chloroform for 10 s. The mixtures were centrifuged at 12,000 × g for 30 min at 4 °C, and the aqueous phase was transferred into a new tube. An equal volume of isopropanol was added to each sample and mixed thoroughly, followed by incubation at −20 °C for 2 h. The mixtures were centrifuged at 15,000 × g for 15 min at 4 °C, and the supernatants were removed. A total of 1 ml of 75% EtOH was added to wash the RNA pellet, which was then spun down at 15,000 × g for 30 min at 4 °C. The EtOH was removed, and the RNA precipitates were resuspended in 50 μl of RNase-free water. The concentration and quality of all samples were quantified with a NanoDrop spectrophotometer (ND-1000, NanoDrop Technologies). Equal amounts of RNA were used for reverse transcription via the RT2 First Strand Kit (Cat# 330401, Qiagen) and real-time PCR quantification with SYBR green expression master mix (Cat# 330509, Qiagen). The primers used in this study were purchased from BioRad: COL1A1 (Mouse, qMmuCED0044222), FN1 (Mouse, qMmuCEP0054113), TJP1 (Mouse, qMmuCID0005277), TGFB1 (Mouse, qMmuCED0044726), NR1D1 (Mouse, qMmuCID0014284), ARNTL (Mouse, qMmuCED0049609), CLOCK (Mouse, qMmuCED0046959), GAPDH (Mouse, qMmuCEP0039581), COL1A1 (Human, qHsaCEP0050510), ACTA2 (Human, qHsaCIP0028813), FN1 (Human, qHsaCEP0050873), LOX (Human, qHsaCED0043469), LOXL1 (Human, qHsaCED0044245), LOXL2 (Human, qHsaCED0044522), and GAPDH (Human, qHsaCEP0041396). The qRT-PCR thermal cycle was 10 min at 95 °C, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min; fluorescence intensity was read at the end of each 60 °C incubation. A melting curve was performed as a quality check of cDNA amplification. 
The BioRad CFX96 qPCR machine was used, and fold changes were calculated by the 2 −ΔΔCt method with GAPDH as the endogenous control. NanoString measurement RNA samples isolated from lungs were used for NanoString measurement, with a total of 100 ng RNA for each group. Our customized codeset (circadian genes and fibrotic markers) was used for the bleomycin treatment groups, and the nCounter Fibrosis Panel was used for the Bleomycin + SR9009-treated group as well as the IAV-infected groups. All RNA samples were mixed with the master mix and incubated at 65 °C for 16 h for RNA hybridization. All samples were loaded into a NanoString running cartridge, and profile reading was performed by the nCounter SPRINT Profiler (NanoString Technologies, Inc.). Gene expression was normalized with nSolver 4.0 software, and normalized counts were used for data representation. The RLF files generated by the profiler were uploaded to ROSALIND for advanced analysis to generate volcano plots and pathway directed enrichment scoring. The significantly dysregulated genes were filtered and uploaded to an online tool to generate the Venn diagram and the list of overlapping dysregulated genes. Protein isolation and Western blot Snap-frozen lung lobes or cells were lysed in RIPA buffer with a protease inhibitor cocktail, and protein concentrations were measured by the Pierce BCA Assay Kit (Cat#: 23227, Thermo Fisher Scientific). A total of 20 µg protein per sample was used for analysis. The protein samples were separated by 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a nitrocellulose membrane (Cat# 1620112, BioRad). The membranes were then blocked with EveryBlot Blocking Buffer (Cat#: 12010020, BioRad) for 20 min and incubated with primary antibody diluted in blocking buffer overnight at 4 °C. 
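As a concrete illustration of the 2 −ΔΔCt relative quantification used above, the following sketch computes a fold change from Ct values, with GAPDH as the endogenous control and the PBS group as the reference condition; all Ct numbers are hypothetical, not measurements from this study.

```python
# Minimal sketch of the 2^-ddCt relative quantification method.
# GAPDH serves as the endogenous control and the PBS group as the
# reference condition, as in the text. Ct values are illustrative only.

def delta_delta_ct(ct_target, ct_gapdh, ct_target_pbs, ct_gapdh_pbs):
    """Fold change of a target gene versus the PBS control group."""
    d_ct_sample = ct_target - ct_gapdh            # normalize to GAPDH
    d_ct_control = ct_target_pbs - ct_gapdh_pbs   # same for the PBS group
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Col1a1 Ct values in a treated vs. PBS lung sample
fold = delta_delta_ct(ct_target=24.0, ct_gapdh=18.0,
                      ct_target_pbs=26.0, ct_gapdh_pbs=18.0)
print(fold)  # 4.0, i.e. an apparent 4-fold upregulation
```

A fold change above 1 indicates upregulation relative to the PBS group; below 1, downregulation.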
Primary antibodies used here included anti-REV-ERBα (1:1000, 13418, Cell Signaling), anti-COL4A1 (1:1000, ab227616, Abcam), anti-LOXL2 (1:1000, ab197779, Abcam), anti-E-Cadherin (1:1000, 3195, Cell Signaling), anti-Fibronectin (1:1000, ab, Abcam), anti-vimentin (1:1000, ab92547, Abcam), anti-COL1A2 (1:1000, NBP2-92790, Novus Biologicals), anti-COL1A1 (1:1000, NBP1-30054, Novus Biologicals), anti-activated LOX (1:1000, NB100-2527, Novus Biologicals) for Fig. 7 only, and anti-LOX (1:1000, ab174316, Abcam). Then, the primary antibody was removed, and the membranes were washed with Tris-buffered saline containing 0.1% Tween 20 (TBS-T) 3 times, 10 min each. The membranes were then incubated with secondary antibody (goat-anti-rabbit, 1:5000, #1706515, BioRad) for 1 h at room temperature and washed with TBS-T 4 times, 15 min each. The membranes were developed with Pierce ECL Western Blotting Substrate (Cat#: 32106, Thermo Scientific), and the signals were detected by the Bio-Rad ChemiDoc MP imaging system. Densitometry was calculated using ImageLab software (BioRad), and fold changes were calculated relative to the PBS groups, with normalization to β-actin (1:2500, ab20272, Abcam) for mouse samples and GAPDH (1:1000, ab9482, Abcam) for human samples. H&E staining Lung sections (5 µm) were prepared with the microtome, then deparaffinized and rehydrated with xylene and 100, 95, and 70% EtOH. The sections were stained with hematoxylin for 1 min, rinsed with water for 5 min, and blued with 0.1% ammonia water for 10 s. The slides were washed with running water for 10 min, incubated with 95% EtOH for 1 min, stained with Eosin for 1 min, and quickly washed with 95% EtOH. The slides were then sequentially dehydrated with 95% EtOH, 100% EtOH, and xylene. All slides were mounted with Permount; ×4 and ×20 pictures were taken with a light microscope (Nikon ECLIPSE Ci), and the total injured area was measured via ImageJ. 
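The densitometry fold-change calculation described above (band intensity normalized to the loading control, then expressed relative to the PBS group) can be sketched as follows; the intensity values are made-up numbers, not values from the blots in this study.

```python
# Sketch of the Western blot densitometry fold-change calculation:
# each band is normalized to its loading control (beta-actin for mouse,
# GAPDH for human samples), then expressed relative to the PBS group.
# All intensities below are illustrative, not real measurements.

def normalized_intensity(band, loading_control):
    """Band intensity normalized to the loading-control band."""
    return band / loading_control

def fold_change(sample_band, sample_ctrl, pbs_band, pbs_ctrl):
    """Normalized sample intensity relative to the normalized PBS intensity."""
    return (normalized_intensity(sample_band, sample_ctrl)
            / normalized_intensity(pbs_band, pbs_ctrl))

# Hypothetical COL1A1 band in a treated lane vs. a PBS lane
print(fold_change(1500.0, 1000.0, 800.0, 1000.0))  # 1.875-fold
```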
Immunohistochemistry (IHC) staining Lung sections (5 µm) were deparaffinized and rehydrated via xylene and 100, 95, and 70% EtOH, then washed with water for 5 min. Slides were incubated with antigen retrieval solution (Cat#: S1699, Dako, Denmark) at 95 °C for 30 min. The slides were then cooled to room temperature and washed with TBS + 0.25% Triton X-100 (wash buffer) 2 times, 5 min each. Sections were then blocked with 10% normal goat serum and incubated with anti-COL1A1 (1:100, NBP1-30054, Novus Biologicals), anti-LOX (1:100, NB100-2527, Novus Biologicals), anti-COL4A1 (1:200, ab227616, Abcam), or anti-Rev-erbα (1:100, NBP1-84931, Novus Biologicals) at 4 °C overnight. Slides were washed with wash buffer 2 times, 10 min each, then incubated with 0.3% hydrogen peroxide for 15 min. Slides were washed with TBS 2 times, 10 min each, and with wash buffer 3 times, 5 min each. Slides were incubated with secondary antibody (1:1000, ab7090, Abcam) at room temperature for 1 h, washed with wash buffer 2 times, 10 min each, and developed with DAB Quanto Chromogen and Substrate (Cat#: TA-125-QHDX, Thermo Fisher Scientific) for 10 min. Excess DAB substrate was washed away with water, and sections were counterstained with hematoxylin. The sections were then dehydrated and mounted for light microscopy (×20 and ×40 with Nikon ECLIPSE Ci, and ×4 with BioTek Cytation 5). All antibodies were prepared in 10% normal goat serum. ImageJ was used to calculate the percentage of positively stained area via color deconvolution. Immunofluorescence (IF) staining Cells were seeded in chamber slides, treated with TGFβ and Rev-erbα agonist/antagonist for 2 days, and then fixed with 4% paraformaldehyde for 15 min. The slides were washed with TBS for 10 min, 2 times, stored at 4 °C, and then blocked with 10% normal goat serum. Cells were incubated with anti-COL1A1 (1:100, NBP1-30054, Novus Biologicals) and anti-αSMA (1:200, A2547-2ML, Sigma Life Sciences) at 4 °C overnight, and washed with TBS 3 times, 10 min each. 
The chamber slides were then incubated with goat anti-rabbit IgG (H + L) secondary antibody Alexa Fluor 488 (1:1000, Catalog # A-11008, Thermo Fisher) and goat anti-mouse IgG (H + L) cross-adsorbed secondary antibody Alexa Fluor 488 (1:1000, Catalog # A-11001, Thermo Fisher) for 1 h at room temperature. Then, cells were washed with TBS 3 times, 15 min each, and the slides were mounted with Diamond Antifade Mountant with DAPI (Cat#: S36964, Fisher Scientific). Slides were imaged by fluorescence microscopy, and ImageJ was used to quantify the fluorescence intensity with the following equation: corrected fluorescence = integrated density (IntDen) − (area of cells × mean fluorescence of background). The intensity was normalized to cell number, and cell numbers were counted based on DAPI staining via the cell counter in ImageJ. Statistical analysis Significant differences were calculated by one-way ANOVA or Student’s t test via GraphPad Prism software (V.9.0), and p < 0.05 was considered significant. All data are presented as mean ± SEM. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability All data supporting the results of this manuscript are available in the article or Supplementary Information, and raw data will be made available upon request. Source data are provided with this paper.
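The ImageJ-based fluorescence quantification above amounts to a small calculation, sketched below with hypothetical numbers; the function names are ours for illustration, not part of ImageJ.

```python
# Sketch of the immunofluorescence quantification described above:
# corrected fluorescence = IntDen - (cell area * mean background),
# then normalized to the number of DAPI-counted nuclei.
# All values below are illustrative, not measurements from this study.

def corrected_intensity(int_den, cell_area, mean_background):
    """Background-corrected integrated fluorescence for a field of view."""
    return int_den - cell_area * mean_background

def intensity_per_cell(int_den, cell_area, mean_background, n_cells):
    """Corrected fluorescence normalized to the DAPI-based cell count."""
    return corrected_intensity(int_den, cell_area, mean_background) / n_cells

# Hypothetical field of view: IntDen 1.2e6, total cell area 5e4 px^2,
# background 4.0 AU/px, 120 DAPI-positive nuclei
print(intensity_per_cell(1.2e6, 5.0e4, 4.0, 120))  # ~8333.3 AU per cell
```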
Abnormal sleep patterns, like those of night-shift workers, disrupt the body's natural biological clock and have been linked to lung health issues. A new study by University of Rochester Medical Center (URMC) researchers shows how a biological clock molecule, called REV-ERBα, contributes to lung scarring, uncovering new potential drugs and drug targets along the way. Pulmonary fibrosis, or lung scarring, is a serious condition in which connective tissue builds up in the lungs, making them thick and rigid, and causing difficulty breathing. While medications can ease the symptoms of pulmonary fibrosis, none can repair the lung damage caused by this sometimes-fatal disease. The URMC study, published in Nature Communications, confirms a previously discovered link between the body's biological clock (or circadian rhythm) and lung diseases and uncovers a new mechanism underlying this link. Study authors show that a lack of the circadian rhythm protein REV-ERBα contributes to lung scarring in mice by increasing production of collagen, a major component of connective tissue, and lysyl oxidase, which stabilizes connective tissue and makes it more rigid. The team, which was led by Irfan Rahman, Ph.D., Dean's Professor of Environmental Medicine at URMC, found low levels of REV-ERBα and large amounts of collagen and lysyl oxidase in lung samples from patients with pulmonary fibrosis. Inducing lung injury in mice had a similar outcome: reduced REV-ERBα levels and increased levels of collagen, lysyl oxidase, and other markers of fibrosis. As a circadian rhythm protein, REV-ERBα expression normally fluctuates throughout the day, peaking at noon and dipping to its lowest levels at midnight. When the team induced lung injury at night, mice had larger increases in lysyl oxidase and collagen proteins, more extensive lung damage, and lower survival rates compared to mice injured in the morning. 
Rahman said this could be relevant to night-shift workers who are exposed to lung irritants at work. "Night-shift work usually occurs during the midnight timeframe when the expression of REV-ERBα is lowest," he said. "Our study suggests there is less protection against lung fibrosis generated from REV-ERBα activation at night." When the team induced lung injury in genetically modified mice that express low levels of REV-ERBα, the mice had worse outcomes that appeared to be mediated by increased collagen and lysyl oxidase. After 15 days of infection with influenza A, these mice had greater upregulation of collagen and lysyl oxidase gene expression, worse flu infections, and worse lung injury compared with mice who expressed normal levels of REV-ERBα. Activating REV-ERBα with a drug 14 days after lung injury in mice that express normal levels of REV-ERBα slightly reduced collagen and lysyl oxidase gene expression and improved lung health in the mice, though not significantly. When tested in cell cultures, the REV-ERBα-activating drugs had an anti-fibrotic effect. "Currently, there are only two drugs approved by the FDA to treat fibrosis, and they only delay the process, they don't cure the disease," said study author Qixin Wang, Ph.D., a postdoctoral fellow working in Rahman's lab. "REV-ERBα-activating drugs could serve as potential therapeutics to help prevent fibrosis and stop the disease process." But, he adds, a better REV-ERBα drug or a more direct way to deliver the drug is needed. In their studies, mice treated with the REV-ERBα-activating drug SR9009 lost more weight and had lower survival than untreated mice. While further research is needed, Rahman and Wang believe their findings open new possibilities for developing treatments for all sorts of fibrotic diseases—especially those with a circadian component, like nighttime alcohol consumption causing liver fibrosis.
10.1038/s41467-023-36896-0
Earth
Rising greenhouse gases pose continued threat to Arctic ozone layer
Climate change favours large seasonal loss of Arctic ozone, Nature Communications (2021). DOI: 10.1038/s41467-021-24089-6 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-24089-6
https://phys.org/news/2021-06-greenhouse-gases-pose-threat-arctic.html
Abstract Chemical loss of Arctic ozone due to anthropogenic halogens is driven by temperature, with more loss occurring during cold winters favourable for formation of polar stratospheric clouds (PSCs). We show that a positive, statistically significant rise in the local maxima of PSC formation potential (PFP LM ) for cold winters is apparent in meteorological data collected over the past half century. Output from numerous General Circulation Models (GCMs) also exhibits positive trends in PFP LM over 1950 to 2100, with highest values occurring at end of century, for simulations driven by a large rise in the radiative forcing of climate from greenhouse gases (GHGs). We combine projections of stratospheric halogen loading and humidity with GCM-based forecasts of temperature to suggest that conditions favourable for large, seasonal loss of Arctic column O 3 could persist or even worsen until the end of this century, if future abundances of GHGs continue to steeply rise. Introduction Variations in ozone within the Arctic polar vortex during winter and spring (hereafter: winter) are driven by anthropogenic chemical loss and dynamical resupply 1 , 2 . Chemical loss and dynamical resupply of stratospheric ozone show large inter-annual variability, driven by meteorology. Colder, more isolated vortices are associated with smaller values of total column ozone 3 , 4 , less resupply and larger chemical loss of ozone (due to low temperatures). Colder vortices are caused by a weaker Brewer-Dobson Circulation, reduced planetary-scale wave activity and lower eddy heat flux in the extratropical lower stratosphere 5 . The coldest Arctic winters experience the smallest values of total column ozone, due in part to a larger amount of chemical loss 3 , 4 . 
Chemical loss of O 3 in the Arctic stratosphere occurs following the activation of chlorine on or within cold sulphate aerosols 6 , 7 and supercooled ternary (H 2 SO 4 -HNO 3 -H 2 O) solution droplets 8 (STS), and on the surfaces of nitric acid trihydrate (NAT) particles 9 or water ice when air is exceptionally cold. When temperatures fall during Arctic winter, STS and NAT particles 10 , 11 , 12 are the first types of PSCs to form. The timescale for chemical processing of chlorine reservoir gases on STS droplets transitions from weeks to days near the temperature at which NAT becomes thermodynamically stable ( T NAT ) 7 , which is governed by the vapour pressure of nitric acid (HNO 3 ) and water (H 2 O) 9 . The volume of air cold enough to allow for the existence of polar stratospheric clouds (PSCs) in the Arctic polar vortex, averaged over an ozone loss season ( V PSC ), exhibits a compact, near-linear relation with chemical loss of column ozone 13 , 14 , 15 , 16 , 17 during recent winters. Rex et al. 13 postulated that the maximum value of V PSC during Arctic winters had risen in a statistically significant manner between 1966 and 2003, and suggested this increase was caused by radiative and dynamical effects of rising levels of greenhouse gases (GHGs). New record values of V PSC were set in the winters of 2005 (ref. 14 ), 2011 (ref. 3 ), 2016 (refs. 18 , 19 ), and 2020 (ref. 20 ). An early evaluation using a general circulation model (GCM) with coupled active chemistry (a chemistry climate model, or CCM) suggested decreases in planetary wave activity reaching the mid-latitude stratosphere due to increased westerly winds in the subtropics, driven by rising levels of GHGs, would lead to stronger, colder Arctic vortices 21 . 
More recently, a simulation using another CCM suggested that future cooling of the Arctic lower stratosphere during early winter would result from direct radiative cooling driven by GHGs and indirect effects related to declining Arctic sea ice and rising sea surface temperatures 22 . Simulations conducted using a third CCM showed modest cooling (~0.15 K decade −1 ) of the future Arctic stratosphere at 50 hPa also driven by GHGs, with high interannual variability that complicates the assessment of statistical significance 23 . Here we examine trends in the PSC formation potential (PFP), which represents the number of days a volume of air equal to the volume of the polar vortex was exposed to PSC conditions for each Arctic ozone loss season based on T NAT (similar to ref. 24 ). We show that positive, statistically significant trends in the local maxima (LM) of the PFP timeseries (PFP LM , the upper quartile of PFP relative to a trend line) over the past four decades are apparent in data from four meteorological centres. A central component of our analysis is the examination of output from GCMs that provide estimates of stratospheric conditions until the end of this century, with a focus on models that submitted output for the Shared Socioeconomic Pathways SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6 runs of Climate Model Intercomparison Project Phase 6 (CMIP6) 25 . We combine GCM forecasts of PFP with projections of stratospheric halogen loading and stratospheric humidity to evaluate how the chemical loss of Arctic ozone may evolve, as a function of future levels of atmospheric GHGs and stratospheric H 2 O. 
We find that if the future abundance of GHGs continues to rise steeply as in either the SSP3-7.0 or SSP5-8.5 scenario, then continued growth in the atmospheric conditions favourable for large, seasonal loss of column ozone could persist or even worsen until the end of this century, despite the decline in the abundance of anthropogenic halogens that is expected to occur due to compliance with the Montreal Protocol. Results Chemical loss of ozone Figure 1a shows values of column ozone loss between 380 and 550 K potential temperature (ΔO 3 ) at the end of winter, based on ozonesonde measurements in the Arctic vortex, plotted as a function of PFP (see “Methods” for the detailed definition of PFP). Data values are shown for all of the cold winters that have occurred since the inception of regular ozonesonde launches. The estimates of ΔO 3 are based either on Match events (situations where individual air masses are usually probed twice above different measurement stations) 13 , 14 , 17 , 26 or on the difference between a passive ozone tracer and the vortex-mean observed profile of ozone 20 . Figure 1a also shows computations of ΔO 3 found using the ATLAS Chemistry and Transport Model 27 for meteorological conditions of Arctic winters 2005, 2010, 2011, and 2020. This model includes a comprehensive treatment of stratospheric chemistry, constrained by the abundance of stratospheric chlorine and bromine from long-lived source gases (Fig. 2a ) for these four winters 28 plus a constant 5 parts per trillion (pptv) from very short-lived (VSL) bromocarbons 29 (see “Methods”). Fig. 1: Chemical loss of Arctic Ozone. 
a Chemical loss of column ozone (ΔO 3 ) in Dobson Units (DU; 1 DU = 2.687 × 10 16 molecules cm −2 ) inside the Arctic polar vortex determined by ozonesonde campaigns for various winters since 1993 versus PSC formation potential (PFP) computed from ERA5/ERA5.1 (closed symbols), calculated as the vertical integral of loss profiles between 380 and 550 K potential temperature, corresponding to ~14 and ~24 km altitude. The error bars representing 1σ uncertainty for ozone loss are based upon considerations such as uncertainties in the calculated cooling rates and the potential impact of mixing across the vortex edge as described in Harris et al. 17 ; the 1σ uncertainty for PFP is derived by assuming an error of ±1 K in the ERA5/ERA5.1 temperature field (see “Methods”). Computations of ΔO 3 are found using the global ATLAS Chemistry and Transport Model, which includes a comprehensive treatment of stratospheric chemistry, for the halogen loading and meteorological conditions of winters 2005, 2010, 2011, and 2020, as well as halogen loading for 2060 and 2100 with meteorological conditions for 2020 (symbols with crosses). The ATLAS values of ΔO 3 are also based on integrals between the 380 and 550 K potential temperature levels 20 . b Same as panel a except ozone loss potential (OLP) is used for the abscissa. The variance in observed (data) and modelled (ATLAS) ΔO 3 explained by PFP and by OLP is reported as the square of the correlation coefficient in both panels. The solid line on both panels shows a linear, least-squares fit to the 15 ozonesonde data points, forced through the origin. Full size image Fig. 2: Polar Stratospheric EESC and H 2 O. a EESC (equivalent effective stratospheric chlorine) for the polar stratosphere computed using fractional release factors from Newman et al. 30 and values of the abundances of long-lived halogen source gases from Table 6 - 4 of the most recent WMO Ozone Assessment Report 28 (black line). 
Throughout, we use a slightly modified version of polar EESC, found by accounting for a 5 ppt contribution from very short-lived (VSL) bromocarbons 29 (red line; circles denote years of the ATLAS simulations shown in Fig. 1 ). The contributions to this modified polar EESC from stratospheric chlorine and bromine are shown by the violet and blue lines, respectively. b – d polar stratospheric H 2 O (in several SSP scenarios) found accounting for: variations in atmospheric CH 4 ( b ); the temperature rise of the tropical tropopause layer (TTL) ( c ); both CH 4 and warming of the TTL ( d ) (see “Methods”). The circle denotes H 2 O = 4.6 ppm, used to compute PFP whenever time-invariant H 2 O is specified. (Historical part: black lines; SSP1-2.6: green lines; SSP2-4.5: blue lines; SSP3-7.0: brown lines; SSP5-8.5: red lines). Full size image Measured and modelled values of ΔO 3 display a compact, near-linear relation with PFP for 1993–2020 (data) and 2005–2020 (ATLAS) (Fig. 1a ). This behaviour occurs because over this time period, the abundance of stratospheric halogens, commonly represented by equivalent effective stratospheric chlorine (EESC) 30 (Fig. 2a ), varies by only ~11% between the value in early 1993 and the maximum in mid-2001. Modelled values of ΔO 3 lie either close to measured ΔO 3 (2011 and 2020) or just below the 1σ uncertainty (2005 and 2010), demonstrating that the primary control on interannual variations in ΔO 3 over the past 15 years has been the exposure of air to PSC temperatures. The near-linear relation between ΔO 3 and V PSC is a robust relation for the contemporary Arctic stratosphere 16 , 17 , despite the fact that in early winter, a small volume of the Arctic vortex can exist below the temperature threshold for chlorine activation and affect a large portion of the vortex 31 . 
Figure 1a also contains values of ΔO 3 for years 2060 and 2100 computed using the ATLAS model, for projected stratospheric chlorine and bromine for both years, and meteorological conditions for 2020. Modelled ΔO 3 for 2060 and 2100 falls below the compact relation observed and simulated for the contemporary atmosphere due to the projected future decline in EESC (Fig. 2a ). Figure 1b shows measured and modelled values of ΔO 3 as a function of a term we shall refer to as ozone loss potential (OLP), defined as: $${\rm{OLP}}({\rm{yr}})=\frac{{{\rm{EESC}}({\rm{yr}})}^{1.2}}{{{\rm{EESC}}}_{{\rm{MAX}}}^{1.2}}\times {\rm{PFP}}({\rm{yr}})$$ (1) where EESC MAX (4.45 ppbv) is the maximum yearly value of EESC in the polar stratosphere. The variance, r 2 , in ΔO 3 explained by OLP is quite large, exhibiting values of r 2 of 0.89 and 0.96 for measured and modelled ΔO 3 , respectively (Fig. 1b ). Our OLP is defined in a manner nearly identical to the potential for activation of chlorine term of Tilmes et al. 32 , except for the use of 1.2 rather than 1 as the exponent of EESC in Eq. ( 1 ). Hassler et al. 33 conducted an analysis of ozone depletion and recovery at the South Pole assuming a linear relation between ozone loss rate and EESC, even though they state the actual relation may be more complicated. Harris et al. 17 examined model estimates of accumulated ozone losses at the 500 K potential temperature level in the Arctic stratosphere as a function of the abundance of activated chlorine, and reported a small positive non-linearity in this relationship. Here we use an exponent of 1.2 for EESC because this choice leads to the largest value of r 2 for the six ATLAS runs shown in Fig. 1b (see “Methods”). The linear, least-squares regression of the ozonesonde-based estimates of ΔO 3 versus OLP in Fig. 1b will be used below to relate estimates of the future evolution of OLP inferred from GCMs to the seasonal loss of Arctic ozone, which we denote ΔO 3 REG . 
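For concreteness, Eq. ( 1 ) and the origin-forced regression of ΔO 3 on OLP can be sketched in code. The EESC MAX value (4.45 ppbv) and the exponent of 1.2 are taken from the text; the sample (OLP, ΔO 3 ) pairs and the resulting slope below are invented for illustration and are not the paper's fitted values.

```python
# Sketch of Eq. (1): ozone loss potential (OLP) from EESC and PFP,
# plus the least-squares regression through the origin used to map
# OLP to a seasonal column-ozone-loss estimate (DeltaO3_REG).
# Illustrative numbers only.

EESC_MAX = 4.45  # ppbv, maximum yearly polar EESC (from the text)

def olp(eesc_ppbv, pfp_days, exponent=1.2):
    """Ozone loss potential, Eq. (1)."""
    return (eesc_ppbv / EESC_MAX) ** exponent * pfp_days

def fit_through_origin(x, y):
    """Least-squares slope of y = s*x, forced through the origin."""
    num = sum(xi * yi for xi, yi in zip(x, y))
    den = sum(xi * xi for xi in x)
    return num / den

# Hypothetical (OLP, observed column loss in DU) pairs:
olps = [5.0, 10.0, 20.0, 30.0]
dO3 = [18.0, 41.0, 79.0, 122.0]
slope = fit_through_origin(olps, dO3)  # DU per unit OLP

def delta_o3_reg(olp_value):
    """Regressed seasonal ozone loss for a given OLP."""
    return slope * olp_value
```

Note that for fixed PFP, a decline in EESC lowers OLP, which is why the modelled 2060 and 2100 points fall below the compact contemporary relation in Fig. 1a.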
We assess the uncertainty in ΔO 3 REG using lower and upper limits of 1 and 1.4 for the exponent in the expression for OLP (see “Methods”). Observed PSC formation potential Figure 3 shows time series of PFP found using data from four meteorological centres (see “Methods”). Our primary source of meteorological data is ERA5/ERA5.1/ERA5 BE (preliminary version) provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) 34 . We also use meteorological fields from the Climate Forecast System Reanalysis (CFSR/CFSv2) provided by the National Centers for Environmental Prediction of the U.S. National Oceanic and Atmospheric Administration 35 , 36 , the Modern-Era Retrospective analysis for Research and Applications (MERRA-2) product provided by the U.S. National Aeronautics and Space Administration Goddard Earth Observing System Model 37 , 38 , as well as the Japanese 55-year Reanalysis (JRA-55) provided by the Japanese Meteorological Agency (JMA) 39 . We calculate V PSC based on temperature and wind fields from these meteorological reanalyses to evaluate the consistency of our estimates of V PSC and to assess the robustness of inferred trends in PFP. Diagnostics for the existence of PSCs can vary substantially between reanalyses, such that conclusions based on the often marginal conditions for PSC condensation in the NH could be affected by small differences among the reanalyses 40 . Fig. 3: PFP as a function of time. a – f Time series of PSC formation potential (PFP) for reanalysis data from: ERA5/ERA5.1 from 1980 to 2020 ( a ) and ERA5/ERA5.1 combined with the ERA5 back extension (BE) (preliminary version) from 1965 to 2020 ( b ); JRA-55 from 1980 to 2020 ( c ) and from 1965 to 2020 ( d ); MERRA-2 from 1981 to 2020 ( e ); CFSR/CFSv2 from 1980 to 2020 ( f ). The solid red circles indicate the coldest winters in the record selected using the ISA trend detection procedure (see “Methods”).
A linear, least-squares fit (solid line) and 1σ uncertainty of the fit (dashed lines) to the solid red circles are shown in each panel, along with numerical values of the slopes ( S PFP−LM ), the 1σ uncertainties of these fits (Δ S PFP−LM ), as well as p -values for the quantity S PFP−LM /Δ S PFP−LM (last column, Table 1 ). Meteorological fields from ERA5 have recently been extended back to 1950 and data from JRA-55 are available from 1958 to 2020, whereas the other data sets are available from 1979 (or 1980) to 2020. Stratospheric data in the Arctic mainly rely on radiosonde soundings before 1979 and on satellite data thereafter, which could introduce potential bias (see “Methods”). We use ERA5 and JRA-55 only back to 1965 since this year marks the start of regular radiosonde coverage of the Arctic stratosphere. Finally, reanalyses transitioned from the use of space-borne data from the SSU and TOVS to the AMSU and ATOVS systems in the 1998 to 1999 timeframe 40 . We obtain similar results for trends in PFP LM (differences within respective uncertainties) when considering data obtained prior to and after this transition (see “Methods”). As noted in the Introduction, we had previously suggested a tendency for the highest values of V PSC to have risen over time. These analyses 13 , 14 were based upon the selection of maximum values of V PSC over successive 5-year time intervals, a trend detection procedure we term here the Maximum in the Interval Method (MIM). Since the publication of these papers, we have developed a more accurate and robust trend detection procedure, documented by a series of Monte-Carlo (MC) simulations (see “Methods”), termed the Iterative Selection Approach (ISA). The slope of the LM of PFP ( S PFP−LM ) selected by ISA is strongly positive over 1980 to 2020 based upon analysis of data from all four meteorological centres, ranging from a high of 4.77 ± 0.48 d decade −1 (CFSR) to a low of 3.85 ± 0.40 d decade −1 (MERRA-2) (Fig. 3 ).
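The trend statistics quoted here can be illustrated with a generic sketch: an ordinary least-squares fit to the selected coldest-winter (local maximum) PFP values, the 1σ uncertainty of the slope, and a simple shuffle-based Monte-Carlo p-value. This is not the actual ISA selection procedure, which is described in “Methods”; the PFP values below are invented for illustration.

```python
import random

def ols_slope(x, y):
    """OLS slope and its 1-sigma standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, se

def mc_p_value(x, y, n_trials=2000, seed=1):
    """Fraction of shuffled-data slopes at least as large as observed."""
    rng = random.Random(seed)
    obs, _ = ols_slope(x, y)
    ys, count = list(y), 0
    for _ in range(n_trials):
        rng.shuffle(ys)
        s, _ = ols_slope(x, ys)
        if s >= obs:
            count += 1
    return count / n_trials

# Hypothetical coldest-winter PFP maxima (days) for selected years:
years = [1982, 1987, 1993, 1997, 2005, 2011, 2016, 2020]
pfp_lm = [14.0, 16.5, 18.0, 20.5, 23.0, 26.0, 27.5, 30.0]
slope, err = ols_slope(years, pfp_lm)  # days per year, ~4 d per decade
```

A small p-value from the shuffle test indicates a slope of this size is unlikely to arise from an uncorrelated sequence of cold winters, analogous in spirit (though not in detail) to the MC assessment of S PFP−LM /Δ S PFP−LM in “Methods”.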
The mean and 1σ standard deviation of S PFP−LM over 1980 to 2020 from these four centres is 4.26 ± 0.45 d decade −1 . The values of S PFP−LM over the longer time period of 1965 to 2020 are 3.84 ± 0.34 d decade −1 and 3.50 ± 0.29 d decade −1 based on ERA5 and JRA-55, respectively, the only data sets that extend further back than 1979, the start of the modern satellite era. In other words, during particularly cold winters over the past half century, the Arctic polar vortex has tended to experience between 3.5 and 4.8 more days per decade of exposure to conditions cold enough to sustain PSCs and activate chlorine, an increase of about 40% compared to the values that occurred a half century ago. We have conducted MC simulations to assess the statistical significance of S PFP−LM and the 1σ uncertainty in S PFP−LM (Δ S PFP−LM ) found using the ISA selection procedure (see “Methods”). These simulations indicate statistical significance at better than the 2σ confidence level for this important metric of the trend in PFP LM , based upon p -values for S PFP−LM /Δ S PFP−LM from all four meteorological data centres that are <0.001 (see “Methods”, Table 1 ). Table 1 PFP LM trend results for the reanalyses and CMIP6 GCM output. Full size table PSC formation potential from GCMs In this section, we calculate PFP from the output of all 26 GCMs in CMIP6 that archived results for the SSP5-8.5 scenario 25 . The numerical value after the dash in the SSP designation represents the rise in radiative forcing of climate (RF; units W m −2 ) at end of the century relative to pre-industrial, due to GHGs including ozone-depleting substances as well as tropospheric aerosols 41 . Temperature fields within these GCMs often exhibit biases with respect to observed temperature that can approach 5 K, with most models being biased warm 42 . 
Stratospheric H 2 O tends to be biased low in many models 43 , which together with a high-temperature bias will lead to an underestimation of the accumulated exposure to PSCs in the Arctic. To compensate for the temperature biases, the temperature threshold for the existence of PSCs has been offset by a constant value specific to each model such that the overall magnitude of PFP LM in the GCM matches the observed magnitude of PFP LM over the modern satellite era. Furthermore, the computation of PFP uses profiles for H 2 O and HNO 3 for the contemporary stratosphere (see “Methods”). Values of PFP for the SSP5-8.5 run of 16 of the 20 GCMs that submitted results for all four SSPs highlighted in our study (SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6) are shown in Fig. 4 . PFP for the remaining SSP5-8.5 GCM runs are shown either in Fig. 5 or in the Supplementary Information (SI). The suggestion that the coldest Arctic winters are getting colder is also apparent in GCM simulations without adjusting the PSC temperature threshold (see SI). We highlight results with adjusted thresholds to place all of the GCMs on a common scale for assessing PFP in the Arctic stratosphere. Fig. 4: PFP, 1950–2100, from CMIP6 GCMs for SSP5-8.5 scenario and time-invariant H 2 O. a – p Time series of PSC formation potential (PFP) from 16 CMIP6 GCMs (as indicated on top of each panel), based on archived output from the SSP5-8.5 scenario (2015–2100) combined with output from the historical scenario (1950–2014). The solid circles indicate the coldest winters in the record (local maxima) selected using the ISA trend detection procedure (see “Methods”). A linear, least-squares fit (solid line) and 1σ uncertainty (dashed lines) to the solid red circles are shown in each panel, along with numerical values of the slopes ( S PFP−LM ) and 1σ uncertainties of these fits. 
The blue line shows the best fit to PFP of the radiative forcing time series for each model run, and the grey line is a 21-year running mean (±10 years) to PFP from each GCM. The temperature threshold for the formation of PSCs has been offset by a constant number, specific to each model, so that the overall magnitude of PFP LM in the GCM matches the observed magnitude of PFP LM , over the modern satellite era (see “Methods” and Table 1 ). Full size image Fig. 5: PFP, 1950–2100, from CMIP6 GCMs for various SSP scenarios and time-invariant H 2 O. a – p Time series of PSC formation potential (PFP) from 4 CMIP6 GCMs (as indicated on top of each panel), based on archived output from various historical (1950–2014) and SSP scenarios (2015–2100) for radiative forcing of climate. See Fig. 4 for more details. Full size image Values of S PFP−LM found for each of the 26 GCM simulations with archived results for SSP5-8.5 are all positive, ranging from a high of 3.66 ± 0.16 d decade −1 (IITM-ESM) to a low of 0.62 ± 0.09 d decade −1 (BCC-CSM2-MR) (Table 1 ). The majority of these slopes lie between about 1.0 and 2.5 d decade −1 ; statistical significance at better than the 2σ level is exhibited for S PFP−LM in 16 and for S PFP−LM /Δ S PFP−LM in 24 of these 26 runs. The similarity of the long-term running mean of PFP and regression of PFP versus RF in each of the panels (Fig. 5 ) suggests the Arctic stratosphere is cooling in a manner that follows the rise in RF of climate. This provides further support that rising GHGs are the primary factor driving increasing PFP. Nearly all of the GCMs exhibit maximum values of PFP towards the end of the century. The progressive tendency towards colder Arctic winters is also exhibited in GCMs that participated in the earlier CMIP5 project 44 . For CMIP5, archived output from 27 GCM simulations that ran the Representative Concentration Pathway (RCP) 8.5 (ref. 45 ) is considered. 
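The constant, model-specific temperature-threshold offset described above can be sketched as a one-dimensional calibration: shift the PSC threshold until the model's PFP magnitude matches an observed target over a reference period. The toy "PFP" (days with vortex-minimum temperature below threshold), the temperatures, and the target value below are all invented; the real calculation integrates V PSC / V VORTEX as in “Methods”.

```python
# Illustrative sketch of the temperature-threshold bias adjustment.
# All numbers are hypothetical.

def pfp_given_offset(daily_tmin, t_nat, offset):
    """Toy PFP: days with vortex-minimum temperature below T_NAT + offset."""
    thresh = t_nat + offset
    return sum(1 for t in daily_tmin if t < thresh)

def calibrate_offset(daily_tmin, t_nat, target_pfp, lo=-5.0, hi=5.0, step=0.1):
    """Scan candidate offsets; return the one whose PFP is closest to target."""
    best, best_err = lo, float("inf")
    o = lo
    while o <= hi:
        err = abs(pfp_given_offset(daily_tmin, t_nat, o) - target_pfp)
        if err < best_err:
            best, best_err = o, err
        o += step
    return round(best, 2)

# Hypothetical winter of daily vortex-minimum temperatures (K) from a
# warm-biased toy model:
t_nat = 195.6
tmin = [193.0 + 0.05 * d for d in range(120)]
offset = calibrate_offset(tmin, t_nat, target_pfp=40)  # negative: model too warm?
```

Note the offset is held fixed for a given model across all years and scenarios, so interannual variability and trends in PFP come entirely from the model's own temperature fields, not from the calibration.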
The frequency distribution function of the ISA-based value of S PFP−LM over 1950–2100, for 26 CMIP6 GCMs and 27 CMIP5 GCMs, is shown in Fig. 6 . The mean and standard deviation of S PFP−LM are 1.71 ± 0.7 d decade −1 and 1.48 ± 1.0 d decade −1 for the CMIP6 and CMIP5 GCMs, respectively (Fig. 6b ). The CMIP5 GCMs exhibit a greater tendency towards both low and high values of S PFP−LM compared to the CMIP6 GCMs. Most importantly, values of S PFP−LM over 1950–2100 are positive for 52 of the 53 CMIP5/6 GCM simulations forced by an 8.5 W m −2 rise in RF by end of the century. These GCM runs provide numerical support for the contention that rising levels of GHGs will lead to cooler conditions in the polar stratosphere that are conducive to the chemical loss of ozone by anthropogenic halogens. The GCM simulations in Figs. 4 and 5 also show a tendency for PFP associated with the warmer Arctic winters (open circles at bottom of the data envelope) to rise slightly over time, a projected trend not yet apparent in observations 46 due perhaps to the generally small values of PFP for the warmest winters over the observational period as well as the lower limit of zero for PFP. Fig. 6: Modelled and measured values of S PFP−LM . a Mean and 1σ standard deviation of the slope of local maxima of PFP ( S PFP−LM ) selected using the ISA trend detection procedure, for 1980–2020, based upon analysis of output from 26 CMIP6 GCM simulations (blue), 27 CMIP5 GCM runs (grey) (see “Methods”) as well as reanalysis data from four meteorological centres (red) b , Mean and 1σ standard deviation of S PFP−LM selected using the ISA trend detection procedure, for 1950–2100, based upon analysis of output from 26 CMIP6 GCM simulations (blue points with error bars) and 27 CMIP5 GCM runs (grey points with error bars) as well as the frequency distribution of S PFP−LM from the individual CMIP6 simulations (blue vertical bars) and CMIP5 runs (grey vertical bars). 
The mean and standard deviation of the empirical value of S PFP−LM over 1980 to 2020 from the four reanalysis datasets is compared to GCM-based values (for the same time period) in Fig. 6a . The rationale for this comparison is that the models have undergone a similar rise in the RF of climate over these four decades as the atmosphere. The observationally based trend lies near the upper 1σ value of the GCMs. Over this short period internally generated climate variability may play a substantial role, and the one realisation that developed in Earth’s climate system may have coincidentally followed a path that led to S PFP−LM at the upper range of the GCM values. On the other hand, tropospheric climate exhibited a shift in the early 2000s that weakened the intensity of planetary wave activity propagating into the stratosphere 47 , which could be responsible for a portion of the larger observed value of S PFP−LM compared to results from GCMs. Shifts in patterns of sea surface temperature in the North Pacific have also been implicated as a causal factor in decreased planetary wave activity and the strengthening of the Arctic vortex 48 . The potential association of these drivers of Arctic stratospheric temperature with climate change is an area of active research 47 . We interpret the results in Fig. 6a as follows: there is a strong similarity in the four observationally based estimates of S PFP−LM , and this value is consistent with a subset of the GCMs (i.e., those with the largest values of S PFP−LM ). It is difficult to attach further meaning to this comparison; because of the potential role of internal variability in planetary wave activity, we caution against asserting that GCMs with the best match to the empirically based value of S PFP−LM will provide a more realistic forecast of the future.
As further support for the notion that larger values of PFP towards the end of the century are driven by rising levels of GHGs, we analyse results for the 20 GCM simulations that have provided an output for SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6 (ref. 41 ). A comparison of PFP for four of these GCMs is shown in Fig. 5 . Results for the other 16 GCMs exhibit similar behaviour, as shown below using the multi-model ensemble mean projections. Nearly without exception, the ISA-based value of S PFP−LM over 1950–2100 for a particular GCM is largest for the SSP5-8.5 simulation and lowest (in many cases, near zero) for the SSP1-2.6 run. This finding provides further evidence that stratospheric cooling caused by the human release of GHGs is the primary driver of rising LM values of PFP within these GCMs. The projections of PFP shown in Fig. 5 have been found assuming profiles for H 2 O and HNO 3 appropriate for the contemporary atmosphere. However, future levels of stratospheric H 2 O will likely rise due to increasing tropospheric CH 4 as well as the warming of the tropical tropopause 49 , 50 . Figure 2 shows estimates of polar, stratospheric H 2 O for changes driven by the oxidation of CH 4 (Fig. 2b ), warming of the tropical tropopause (Fig. 2c ), and the combination of both effects (Fig. 2d ). Our CH 4 -based estimate is derived from the relation between CH 4 and H 2 O in the contemporary Arctic stratosphere 51 combined with historical and future projections of CH 4 from the SSP-database, and the thermodynamic-based estimate results from an analysis of CMIP6 GCM output 43 (see “Methods”). Accounting for the future rise in stratospheric water for the computation of T NAT has a profound effect on PFP as well as S PFP−LM . Figure 7 shows results from one of the four GCMs highlighted in Fig. 5 . The first column of Fig. 
7 shows the effect on PFP and S PFP−LM of projected future increases in stratospheric H 2 O due to CH 4 , the second shows the effect due to thermodynamics, and the third column shows the full effect of rising stratospheric H 2 O. The sensitivity of future PFP to the projected change in H 2 O is large within the EC-Earth3 GCM, as shown by comparing the first three columns of Fig. 7 (variable H 2 O) to the first column of Fig. 5 (time-invariant H 2 O), particularly for SSP5-8.5 and SSP3-7.0. The trend in S PFP−LM found using archived output from the EC-Earth3 GCM for SSP5-8.5 increases from 2.27 ± 0.13 d decade −1 for time-invariant H 2 O (Fig. 5a) to 3.93 ± 0.13 d decade −1 when both of the factors driving the potential future rise in stratospheric H 2 O are considered (Fig. 7i ), because a more humid future stratosphere is more conducive to the chlorine activation and the formation of PSCs. Conversely, as expected, the impact of future stratospheric H 2 O on PFP and S PFP−LM is small for SSP2-4.5 and SSP1-2.6. The other GCMs that have archived results for all four SSPs exhibit similar behaviour (see SI). Fig. 7: PSC formation potential (PFP) and Ozone Loss Potential (OLP), 1950–2100, from EC-Earth3 model for variable H 2 O, various SSP scenarios. a – l Same as Fig. 5 for the EC-Earth3 GCM, for variable H 2 O accounting for: tropical tropopause warming ( a - d ), changes in atmospheric CH 4 ( e – h ), and both effects ( i – l ). m – p OLP from the EC-Earth3 GCM, for variable H 2 O due to both tropopause warming and CH 4 oxidation. The grey line shows a 21-year running mean (±10 years) to OLP from each simulation, conducted for various SSPs. Figures showing results for the other GCMs that appear in Fig. 5 are included in the SI. Full size image Projections of conditions conducive to Arctic ozone loss As shown in Fig. 1b , measured and modelled values of the chemical loss of column ozone in the Arctic stratosphere are well described by OLP. 
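The CH 4 -based component of the stratospheric H 2 O projection rests on the standard stoichiometry that each CH 4 molecule oxidised in the stratosphere yields roughly two H 2 O molecules, so polar stratospheric H 2 O can be approximated from tropopause-entry H 2 O plus the oxidised fraction of entry CH 4 . The sketch below uses this stoichiometry, but the entry mixing ratios and the oxidised fraction are illustrative assumptions, not the paper's fitted relation.

```python
# Hedged sketch of a CH4-based estimate of polar stratospheric H2O.
# Assumption: ~2 H2O molecules per oxidised CH4 molecule; the entry
# values and oxidised fraction below are invented for illustration.

def polar_h2o_ppmv(entry_h2o_ppmv, entry_ch4_ppmv, oxidised_fraction=0.35):
    """Approximate polar stratospheric H2O (ppmv)."""
    return entry_h2o_ppmv + 2.0 * oxidised_fraction * entry_ch4_ppmv

h2o_today = polar_h2o_ppmv(3.3, 1.9)   # near the 4.6 ppmv baseline
h2o_high_ch4 = polar_h2o_ppmv(3.3, 2.8)  # higher-CH4 scenario, illustrative
```

Because T NAT rises with the H 2 O partial pressure, any scenario that raises stratospheric H 2 O (high CH 4 , a warmer tropical tropopause, or both) expands the conditions under which PSCs can form, which is why PFP and S PFP−LM increase when variable H 2 O is used.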
For the EC-Earth3 GCM constrained by GHGs abundances for SSP5-8.5 and SSP3-7.0, the largest values of OLP occur towards the latter half of this century, particularly when the full effect of rising stratospheric H 2 O is considered (Fig. 7m, n ). This projection suggests stratospheric cooling combined with moister conditions, driven by future rises in the atmospheric abundance of anthropogenic GHGs, could prolong the conditions that lead to significant chemical loss of column O 3 within the Arctic vortex until late in this century. Conversely, if GHGs follow either the SSP2-4.5 or SSP1-2.6 scenario, the value of OLP is projected to decline from close to present time until the end of the century (Fig. 7o, p ). We now turn to the multi-model ensemble mean values of PFP, rather than the LM of PFP from a single GCM. Figure 8 shows the time series of ensemble-mean values of ΔO 3 REG and OLP from the 20 CMIP6 GCMs that have archived output for GHG abundances from SSP5-8.5, SSP3-7.0, SSP2-4.5, and SSP1-2.6, assuming constant stratospheric H 2 O. Commonly, year 1980 is used as a benchmark for studies of polar ozone recovery 23 . For fixed H 2 O, the multi-model mean value of OLP remains well above the 1980 level until the end of the century for SSP5-8.5 and SSP3-7.0, approaches the 1980 level for SSP2-4.5, and reaches the 1980 level at end of the century for SSP1-2.6. For SSP5-8.5 and SSP3-7.0, the seasonal loss of ozone (i.e., ΔO 3 REG ) in the range of 70–100 DU persists until the end of this century at an amount comparable to contemporary values. Fig. 8: Ensemble model mean regressed column ozone loss and Ozone Loss Potential (OLP), time-invariant H 2 O. 
The value of OLP (right ordinate) and ΔO 3 REG computed from OLP (left ordinate) from the 20 CMIP6 GCMs (CanESM5, CESM2-WACCM, CNRM-CM6-1, CNRM-CM6-1-HR, CNRM-ESM2-1, EC-Earth3, EC-Earth3-Veg, FGOALS-g3, IITM-ESM, INM-CM4-8, INM-CM5-0, IPSL-CM6A-LR, MIROC6, MIROC-ES2L, MPI-ESM1-2-HR, MPI-ESM1-2-LR, MRI-ESM2-0, NorESM2-LM, NorESM2-MM, UKESM1-0-LL) that archived results for the SSP5-8.5 ( a ), SSP3-7.0 ( b ), SSP2-4.5 ( c ), and SSP1-2.6 ( d ) scenarios, computed assuming a constant volume mixing ratio for stratospheric H 2 O of 4.6 ppmv. The same temperature threshold offsets specified in Table 1 and Figs. 4 and 5 have been used. The grey solid line shows a 21-year running mean (±10 years) to the ensemble mean of ΔO 3 REG for each SSP, the grey shaded area represents a 21-year running mean of the range in ΔO 3 REG for exponents of 1 (upper boundary) and 1.4 (lower boundary) of the expression for OLP, and the grey dashed horizontal lines denoted the 1980 value of ΔO 3 REG . The right-hand ordinate shows the scale of the multi-model mean values of OLP, which are the initial quantities computed from the GCM output. Note, this right-hand ordinate does not correspond to the grey shaded area, since an exponent different from 1.2 was used. Full size image Stratospheric humidity is expected to rise due to an increased source from the oxidation of CH 4 and a warmer tropical tropopause, particularly for climate scenarios with high RF of climate towards the end of the century, which will lead to further increases in ΔO 3 REG and OLP. Figure 9 shows ensemble mean values of ΔO 3 REG and OLP for the GCMs also represented in Fig. 8 , allowing for variations in stratospheric H 2 O in addition to temperature. When the effect of rising H 2 O on the future occurrence of PSCs is considered, ΔO 3 REG and OLP at end of the century are higher than contemporary values of these quantities for the SSP5-8.5 and SSP3-7.0 simulations. 
This analysis suggests that despite a projected decline in stratospheric halogen loading, the potential for significant chemical loss of Arctic column ozone could not only persist until the end of the century but might actually exceed contemporary loss if the atmospheric abundance of GHGs follows either SSP5-8.5 or SSP3-7.0 (Fig. 9a, b ). The multi-model mean values of ΔO 3 REG and OLP at end of the century for SSP2-4.5 (Fig. 9c ) also lie above the 1980 levels. Both quantities drop below the 1980 level for SSP1-2.6 (Fig. 9d ), because the suppressed abundance of CH 4 towards the end of the century within this scenario leads to a decline in stratospheric H 2 O relative to today (Fig. 2d ). Fig. 9: Ensemble mean regressed column ozone loss and Ozone Loss Potential (OLP), variable H 2 O. Same as Fig. 8 , except OLP from the archived GCM output of each GCM has been computed using the time series for polar stratospheric H 2 O shown in Fig. 2d , which accounts for increasing stratospheric humidity due to both variable CH 4 and warming of the tropical tropopause. a SSP5-8.5, b SSP3-7.0, c SSP2-4.5, and d SSP1-2.6 scenarios. Full size image The multi-model ensemble values of ΔO 3 REG and OLP shown in Figs. 8 and 9 capture the general tendency of projections of stratospheric temperature within 20 GCMs, the result of an enormous computational effort by the climate modelling community. On the other hand, this averaging procedure masks the strong year to year variability in Arctic conditions conducive for major ozone depletion, as represented in Fig. 7m–p (for EC-Earth3) and in SI for other GCMs, and as noted by an analysis of a seven-member ensemble from the United Kingdom Chemistry and Aerosols (UM-UKCA) CCM 23 . 
Discussion There are a number of factors that affect the accuracy of lower stratospheric temperature within GCMs, such as the maximum altitude and vertical resolution 52 as well as model representation of planetary wave activity that transports energy from equatorial to poleward regions 53 . One important marker of the usefulness of a GCM to simulate stratospheric dynamics is whether the model generates an oscillation of the direction of the zonal wind in the tropical lower stratosphere with a period of about 28 months, known as the quasi-biennial oscillation (QBO) 53 . Our examination of the tropical zonal wind from the models suggests CMIP6 GCMs tend to provide a better representation of the QBO than was evident in CMIP5 GCMs (see “Methods”), consistent with the more formal analysis of Richter et al. 54 . We see little difference in our projections of column ozone loss for the Arctic stratosphere (ΔO 3 REG ) (Figs. 8 and 9 ) when the CMIP6 GCM output is examined in groups of models that provide a reasonable representation of the QBO versus other models (see “Methods”). Richter et al. 54 note that while the number of models with an internally generated QBO has increased substantially from CMIP5 to CMIP6, the multi-model mean amplitude for atmospheric levels below a pressure of 20 hPa is still much lower than observed. Given the importance of the QBO in stratospheric dynamics, substantial effort is being directed towards improving the representation of this process within GCMs 55 . Ideally, GCMs would include interactive chemistry, as there are numerous feedbacks and interactions between the photochemical processes that regulate stratospheric ozone and the dynamical and radiative drivers of PFP. Four of the 20 CMIP6 GCMs considered above have fully interactive chemistry: the other 16 models use prescribed fields of ozone. 
The temporal evolution of OLP found using results from the four GCMs with interactive chemistry is about 20–25% lower at end of century than that found for the other 16 GCMs; nonetheless, ΔO 3 REG remains close to the contemporary value until the end of century for the SSP3-7.0 and SSP5-8.5 simulations conducted using these interactive GCMs (see “Methods”). Finally, CCMs that have been used to assess the evolution of Arctic ozone have interactive chemistry with vertically resolved stratospheres and better spatial resolution than most of the CMIP6 GCMs 56 . These CCMs tend to exhibit a more realistic representation of planetary wave activity and are capable of representing the impact of the intensification of the Brewer Dobson Circulation (BDC) and upper stratospheric cooling on ozone, two factors that result in the projection of future increases in Arctic column ozone during winter and spring 56 . However, the multi-model mean of CCMs used to project the future evolution of Arctic column ozone significantly underestimates prior observed ozone depletion, particularly during cold winters with extensive PSC activity 56 . Values of ΔO 3 REG shown in Figs. 8 and 9 represent the seasonal loss of column ozone that may occur for various GHG scenarios, rather than resulting column ozone. Future levels of Arctic column ozone during late winter and early spring are expected to increase due to factors such as intensification of the BDC, upper stratospheric cooling, as well as possible changes in planetary and gravity wave activity that exert a strong influence on the abundance of column ozone within the Arctic vortex during its formation in early winter and dynamically induced increases during winter 22 , 23 , 56 . Langematz et al. 22 project maximum V PSC to occur around 2060 with a subsequent decline due to enhanced dynamical warming of the Arctic vortex in February and March, based on simulations conducted with their CCM. 
Finally, future levels of N 2 O are expected to rise 41 , leading to higher levels of HNO 3 and hence more favourable conditions for the formation and existence of PSCs 9 . Future total column ozone during spring will reflect a balance between the initial abundance, dynamical transport, and chemical loss that is driven by a large number of factors. The strong dependence of the ensemble mean value of OLP towards the end of the century on radiative forcing of climate suggests that large, seasonal loss of column ozone in the Arctic could persist for much longer than is commonly appreciated 56 . If stratospheric H 2 O rises as projected in Fig. 2d and GHGs follow a trajectory similar to either SSP5-8.5 or SSP3-7.0, chemical loss of Arctic ozone could even be larger by end of the century than has occurred in the past. Consequently, anthropogenic climate change has the potential to partially counteract the positive effects of the Montreal Protocol in protecting the Arctic ozone layer. Methods Computation of PFP The temperature at which nitric acid trihydrate (NAT) becomes thermodynamically stable, T NAT , is governed by the vapour pressure of nitric acid (HNO 3 ) and water (H 2 O) 9 . Here, we use a constant volume mixing ratio of stratospheric H 2 O equal to 4.6 parts per million (ppmv) and a profile of HNO 3 , both based on satellite observations, to find T NAT . We compute T NAT using the saturation vapour pressure of H 2 O and HNO 3 over NAT measured by Hanson and Mauersberger 9 . The value of 4.6 ppmv for H 2 O, applied at all pressure levels, is consistent with observations reported by the U.S. National Aeronautics and Space Administration Microwave Limb Sounder instrument for the lower stratosphere of the Arctic 57 .
The specified mixing ratio profile of HNO 3 , which varies as a function of pressure, is based on measurements acquired in the Arctic during January 1979 by the Limb Infrared Monitor of the Stratosphere (LIMS) on board Nimbus 7 (ref. 58 ). The quantity V PSC represents the volume of air for which temperature is less than T NAT , evaluated between potential temperatures of 400 and 700 K. The formation of PSCs in the Arctic stratosphere also depends on factors such as cooling rate, the degree of super-saturation, the chemical composition of pre-existing nuclei, as well as the surface coating of condensed particles 10 , 11 , 12 , 59 . During cold Arctic winters, the profile of HNO 3 will be altered by the sedimentation of nitrate-bearing PSCs, termed denitrification 11 , 12 , 59 , 60 . Nonetheless, our approach captures the primary factor that drives the chemical loss of Arctic O 3 : that is, temperatures low enough to allow for the existence of PSCs. As described in the main paper and detailed below, we arrive at remarkably similar conclusions based upon consideration of the temperature at which chlorine is activated on aerosols 6 , 32 , rather than T NAT , because these two temperature thresholds are similar. Our analysis requires definition of the area and volume of the Arctic polar vortex, denoted A VORTEX and V VORTEX . The horizontal boundary of the vortex is based on the value of 36 s −1 for normalized potential vorticity (nPV), which is found from the horizontal wind and temperature fields and then scaled to account for the steep altitude dependence of PV, as described in section 3.3 of Rex et al. 26 . Other studies utilize the maximum gradient in PV to define the boundary of the polar vortex 61 .
We use nPV = 36 s −1 to define the vortex boundary because on some days the gradient method introduces a level of complexity, due to the existence of multiple maximum gradients of nearly equal magnitude separated by a considerable distance, which requires human judgement. We have examined maps of nPV and temperature plotted for 1 February of the years 1960–2100, in increments of every 10 years, for all 26 CMIP6 GCMs that archived results for SSP5-8.5. These maps show that the nPV = 36 s −1 boundary for the Arctic vortex is not greatly affected by climate change until the end of the century; maps for the four CMIP6 GCMs highlighted in Fig. 5 of the paper are shown in Supplementary Fig. 1 . Since PV from four reanalyses that span many decades and model output from 53 GCM simulations that span more than a century and a half are examined, it is preferable to implement a method that requires no human intervention. The next step for the computation of PFP involves calculation of the area over which temperature is below the threshold for the existence of PSCs, A PSC , as well as A VORTEX . The area for which T < T NAT and the area enclosed by the nPV = 36 s −1 contour are found on various potential temperature ( θ ) surfaces for each time step of the analysis, which are evaluated to yield A PSC ( θ , t ) and A VORTEX ( θ , t ). Next, V PSC ( t ) and V VORTEX ( t ) are computed for each time step by evaluating: $${V}_{{\rm{PSC}}}\left(t\right)=\int_{400\ {\rm{K}}}^{{700\ {\rm{K}}}}c\left(\theta \right){A}_{{\rm{PSC}}}\left(\theta ,t\right)\,d\theta \,$$ (2) $${V}_{{\rm{VORTEX}}}\left(t\right)=\int_{400\ {\rm{K}}}^{{700\ {\rm{K}}}}c\left(\theta \right){A}_{{\rm{VORTEX}}}\left(\theta ,t\right)\,d\theta$$ (3) where c ( θ ) is a factor that converts intervals of potential temperature to geometric altitude (numerical values provided in a data repository).
The next step in the calculation of PFP is to integrate the ratio of V PSC ( t ) to V VORTEX ( t ) over the Arctic ozone loss season of each winter: $${\rm{PFP}}\left({\rm{yr}}\right)=\int_{1\ {\rm{Nov}}}^{{30\ {\rm{Apr}}}}\frac{{V}_{{\rm{PSC}}}\left(t\right)}{{V}_{{\rm{VORTEX}}}\left(t\right)}{dt}$$ (4) 1 November (prior year) and 30 April (specified year) are used as limits of integration because these dates encompass the time period of possible PSC activity among reanalysis and GCM-based temperature fields. A grid for θ from 400 to 700 K, in 5 K increments, is used for the computation of V PSC from each reanalysis data set, all of which are provided at 6 h time steps. At each time step the value of the ratio V PSC / V VORTEX is capped at unity, because in rare instances the volume for PSC temperatures is larger than the volume of the vortex defined using the 36 s −1 boundary. The GCM output is generally available on a daily basis, although some modelling groups have archived output every 6 h; details are provided in Supplementary Table 1 . The models that archive output every 6 h provide high model vertical resolution fields on the native model grid, whereas the daily output is generally provided for only a limited number of pressure levels (i.e., 100, 50, and 10 hPa). In cases where the output for the SSP1-2.6, SSP2-4.5, and SSP3-7.0 scenarios is available only in low resolution (daily), we use low resolution for the SSP5-8.5 scenario from the corresponding GCM run, even if a higher resolution is available for SSP5-8.5. Values of V PSC ( t ) and V VORTEX ( t ) found using Eqs. ( 2 ) and ( 3 ) as well as the ratio of these terms are shown in Supplementary Fig. 2 . The unusual behaviour of Arctic winter 2020, such as record high values for V PSC in March and V VORTEX in March and April, is readily apparent. V PSC ( t ) and V VORTEX ( t ) are used in Eq. ( 4 ) to determine PFP. 
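For concreteness, the computation in Eqs. ( 2 )–( 4 ) can be sketched in a few lines of code. This is a minimal illustration only, not the code used in this study; the variable names (a_psc, a_vortex, c_theta) and the regular grids are our assumptions.

```python
def pfp(a_psc, a_vortex, c_theta, d_theta, dt_days):
    """PFP in days, following Eqs. (2)-(4).

    a_psc, a_vortex : per time step, a list of areas on the 400-700 K
                      potential-temperature grid (A_PSC and A_VORTEX)
    c_theta         : conversion factors c(theta) from potential-
                      temperature intervals to geometric altitude
    d_theta         : theta grid spacing (5 K in the text)
    dt_days         : time step in days (0.25 for 6-hourly fields)
    """
    total = 0.0
    for ap_t, av_t in zip(a_psc, a_vortex):
        # Eqs. (2) and (3): vertical integrals of c(theta) * A(theta, t)
        v_psc = sum(c * a for c, a in zip(c_theta, ap_t)) * d_theta
        v_vortex = sum(c * a for c, a in zip(c_theta, av_t)) * d_theta
        # Eq. (4): integrate the ratio in time, capped at unity
        ratio = min(v_psc / v_vortex, 1.0) if v_vortex > 0 else 0.0
        total += ratio * dt_days
    return total
```

Summing the capped ratio over all time steps from 1 November to 30 April yields PFP in days; for 6-hourly reanalysis fields dt_days is 0.25.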
All reanalyses and GCM fields are analysed on the native horizontal resolution of the product. Finally, the 1σ uncertainty of PFP shown in Fig. 1 is based on perturbation of the reanalysis temperature field by ±1 K; this magnitude of the offset is based on our analysis of the approximate 1σ standard deviation about the mean of stratospheric temperature from the four data centres, over the modern satellite era. In the main article, we estimate PFP using the JRA-55 and ERA5/ERA5.1/ERA5 BE (preliminary version) reanalysis products over 1965–2020, as well as 1980–2020. Meteorological data in the Arctic stratosphere acquired prior to 1979 mainly rely on radiosonde measurements, and 1965 marked the beginning of regular radiosonde coverage of the Arctic stratosphere. Luers and Eskridge 62 quantified the bias in temperature reported by ten of the most common radiosondes used throughout the world since 1960, for use in climate studies. The JRA-55 reanalysis makes use of the Radiosonde Observation Correction using Reanalysis (RAOBCORE) version 1.4 (ref. 63 ) bias correction procedure for radiosonde temperature until the end of 2006, and RAOBCORE version 1.5 (ref. 64 ) thereafter. As an important check on the temporal integrity of the reanalyses prior to 1979, in Supplementary Fig. 3 we show an update to the radiosonde temperature time series acquired at Sodankylä, Finland, for each winter since 1965 (ref. 65 ). This figure shows the time evolution of the percentage of observations of temperature < −77.9 °C at 50 hPa over the months of December (prior year) and January, February, and March (indicated year) from regular radiosonde launches from Sodankylä. Supplementary Fig. 3 supports our conclusion, shown in Fig. 3d of the main article, that conditions conducive for the existence of PSCs tended to be less common between 1965 and 1979, compared to the past few decades. 
In the main article, we discuss an adjustment, applied to output from the CMIP5 and CMIP6 GCMs, of the threshold temperature for the existence of PSCs, such that the magnitude of the LM in PFP matches the observed magnitude over the modern satellite record. Details of the specific GCMs 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 , 78 , 79 , 80 , 81 , 82 , 83 , 84 , 85 , 86 , 87 , 88 , 89 , 90 , 91 , 92 , 93 , 94 , 95 , 96 , 97 , 98 , 99 , 100 , 101 , 102 , 103 , 104 , 105 , 106 , 107 are given in the Supplement. We compute PFP from these GCMs in a similar way to that applied to the computation of PFP from meteorological data, except for the application of a temperature offset to account for either a warm or a cold bias. The offsets for T NAT used for CMIP6 GCMs are given in Table 1 . These offsets have been determined based on the criterion that a trend line fit to the LM of PFP (PFP LM ) from the GCM over 1980–2020 using the ISA selection procedure (described below) should have a value in year 2000 (mid-point of the data record) that lies closest to the value of the fit to PFP LM data from ERA5/ERA5.1 in year 2000, among all possible 1 K incremental offsets to T NAT (including no offset) ranging from −9 to +9 K. For CMIP6, 19 of the 26 GCMs required a positive temperature offset for the PSC threshold (Table 1 ), indicating that temperature conditions computed within these GCMs tend to be warmer than climatology, particularly for winters with cold, isolated Arctic vortices. Supplementary Fig. 4 shows comparisons of PFP for each CMIP6 GCM, with and without application of this temperature offset. Supplementary Table 2 is similar to Table 1 of the main article, except values and statistical analysis of S PFP−LM and Δ S PFP−LM are shown without application of any adjustment for the PSC temperature threshold. It is evident from Supplementary Table 2 and Supplementary Fig. 
4 that the main thesis of our study, namely that the coldest winters in the Arctic stratosphere are getting colder due to rising GHGs, is apparent in GCM simulations with and without this adjustment. We have chosen to show estimates of S PFP−LM upon application of a threshold correction in the main article because this is a more realistic metric to examine within the models, particularly those GCMs that have very warm biases and thus exhibit unrealistically small values of PFP.

Trend detection procedures

We utilise several procedures to assess the trend in LM of PFP. First, we describe the ISA, which we apply to the 41-year time series from ERA5/ERA5.1. Following the computation of PFP for all Arctic winters, all of the data are fit using a linear least-squares regression line (Supplementary Fig. 5a ). We then compute the vertical distance (i.e., the difference in PFP) between the fit line and each data point. The point (in blue) with the largest distance below the line, the warmest winter relative to the current trend line, is omitted from the subsequent analysis. The remaining data points are then fit with another linear least-squares regression line (Supplementary Fig. 5b ). The same procedure of finding and removing the point (blue) with the greatest distance below the fit line is repeated, leading to Supplementary Fig. 5c . The procedure is repeated until one-quarter of the points (termed the upper quartile relative to the trend line) remain; Supplementary Fig. 5d–f shows results of iterations 28, 29, and 30. The slope ( S PFP−LM ) of 4.50 d decade −1 and 1σ uncertainty (Δ S PFP−LM ) of 0.19 d decade −1 given for the least-squares fit of data shown in Supplementary Fig. 5f are the same as those shown in Fig. 3a . Next, we describe the Maximum in Interval Method (MIM) for assessing trends in PFP. Rex et al. 13 applied this selection procedure to their analysis of V PSC . 
They quantified the slope in the maximum values of V PSC that had occurred over successive 5-year long, independent time intervals. Their analysis considered 37 years of data spanning winters of 1966–2003, from which eight values of V PSC were selected. Supplementary Fig. 6b shows the resulting selections of LM (red solid points), which yields values of S PFP−LM and Δ S PFP−LM of 4.24 ± 0.34 d decade −1 for the trend in LM from the ERA5/ERA5.1 time series of PFP. Clearly, the results are quite similar to the value of S PFP−LM found using the ISA procedure, even though some of the data points selected as LM by these two techniques differ (Supplementary Fig. 6a and b ). Our development of the ISA selection procedure, rather than MIM, was also driven by our analysis of GCM output that shows steadily rising values of PFP until the end of this century when models are driven by either RCP 8.5 or SSP5-8.5 GHG scenarios, which for some models are interspersed with gaps >5 years for LM in PFP. The time interval of the MIM procedure could have been altered, but rather we offer the ISA procedure as a more robust method for the selection of LM of PFP. Supplementary Fig. 6c illustrates the value above sigma (VAS) selection procedure used by Rieder and Polvani 108 to address trends in V PSC . For VAS, one first computes the mean and standard deviation about the mean (σ) using all values of the PFP time series. Next, the slope in PFP is found using only those data points that lie 1σ above the mean. The VAS selection yields seven selected points, resulting in a slope of 3.06 ± 1.51 d decade −1 for a fit to these selected points. The selection of PFP from Arctic winter 2018 and lack of selection of any data points prior to 1995 by the VAS selection procedure illustrates the problem with this method: by design, only the highest values are selected. 
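The iterative selection described above can be sketched in pure Python. This is a minimal sketch under our own naming (lstsq, isa_select), not the code used in this study.

```python
def lstsq(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def isa_select(years, pfp):
    """Iteratively drop the point farthest below the fit line until the
    upper quartile relative to the trend line remains."""
    pts = list(zip(years, pfp))
    keep = max(1, len(pts) // 4)  # upper quartile
    while len(pts) > keep:
        s, b = lstsq([p[0] for p in pts], [p[1] for p in pts])
        # largest distance below the line = warmest winter -> omit it
        pts.remove(min(pts, key=lambda p: p[1] - (s * p[0] + b)))
    return pts
```

The slope of a final least-squares fit through the points returned by isa_select corresponds to S PFP−LM.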
To test the hypothesis that the LM of a quantity has risen over time, one should apply a time-varying statistical method for the selection of points. A static selection such as VAS is not an appropriate means to assess whether the coldest winters are getting colder because VAS tends to select only the highest values, rather than the LM, from the time series of PFP. In order to further assess the selection of PFP LM by the ISA, MIM, and VAS trend detection procedures, a set of MC simulations was conducted for a dataset with an imposed, positive trend in PFP. For this set of MC simulations, one million time series of PFP were generated for a 41-year long record (matching the time period 1980–2020), each with PFP distributed between a lower bound of 0 and an upper bound that starts at 13.6 d (first winter) and rises with a slope of 4.59 d decade −1 . Each PFP data point is uniformly randomly distributed between the fixed lower bound and the time-varying upper bound, which were chosen to match the lower and upper limits of PFP from ERA5 in a statistical fashion. Supplementary Table 3 summarizes the results of this first set of MC simulations. This table gives the mean value of the slope \((\overline{{S}_{{\rm{PFP}}-{\rm{LM}}}})\) and 1σ uncertainty \((\overline{\varDelta {S}_{{\rm{PFP}}-{\rm{LM}}}})\) of the fits to the maxima in PFP of these one million randomly generated time series. The table also provides the mean number \((\overline{k})\) and minimum number ( k MIN ) of LM points from which the slopes and uncertainties are computed. Use of the ISA approach yields a value for \(\overline{{S}_{{\rm{PFP}}-{\rm{LM}}}}\) of 4.50 d decade −1 upon selection of the upper quartile of LM points relative to the trend line. The fact that this value of \(\overline{{S}_{{\rm{PFP}}-{\rm{LM}}}}\) lies within 2% of the design slope of the upper bound attests to the accuracy of the ISA approach. 
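One realisation of this synthetic experiment, together with the maximum-in-interval selection, can be sketched as follows. This is our own sketch; the paper's design uses one million series, and the final interval absorbs the leftover year (6 years for a 41-year record), as mimicked here.

```python
import random

def synthetic_pfp(n_years=41, lower=0.0, upper0=13.6, slope=0.459, seed=0):
    """One synthetic PFP series: uniform between the fixed lower bound
    and an upper bound rising at `slope` d/yr (i.e., 4.59 d/decade)."""
    rng = random.Random(seed)
    return [rng.uniform(lower, upper0 + slope * t) for t in range(n_years)]

def mim_select(years, pfp, width=5):
    """Maximum of PFP in successive `width`-year intervals; the final
    interval absorbs any leftover years (6 years for a 41-year record)."""
    pts = list(zip(years, pfp))
    n_blocks = len(pts) // width
    sel = []
    for b in range(n_blocks):
        lo = b * width
        hi = (b + 1) * width if b < n_blocks - 1 else len(pts)
        sel.append(max(pts[lo:hi], key=lambda p: p[1]))
    return sel
```

Applying mim_select to a 41-year series returns eight selected points, matching the eight values of V PSC chosen by Rex et al. 13.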
The MIM selection procedure with 5-year intervals (the last interval covers 6 years) results in the selection of eight points from which S PFP−LM is computed, for each of the million cases. The MIM approach results in a value for \(\overline{{S}_{{\rm{PFP}}-{\rm{LM}}}}\) of 3.95 d decade −1 , which is 14% lower than the upper bound of the experimental design. Numerous values of S PFP−LM from the MIM ensemble are greater than the upper bound design value of 4.59 d decade −1 . Nonetheless, on average, the MIM approach tends to underestimate the true value of the prescribed upper bound of the experimental design, due to gaps in the true LM of PFP that sometimes exceed five years. Finally, for the VAS approach, the number of selected points can often be low, which is reflected in the value of \(\overline{k}\) given for the VAS entries in Supplementary Table 3 . Therefore, we have imposed criteria that VAS must select either a minimum of three, five, or seven points from each of the million artificial time series. For a final test of VAS, we have imposed a requirement that ten points (that is, the ten largest values of PFP) must be used for the computation of S PFP−LM for each time series. Values of \(\overline{{S}_{{\rm{PFP}}-{\rm{LM}}}}\) returned by VAS range from 2.08 to 2.30 d decade −1 , a factor of two less than the upper bound of the experimental design, because as noted above the VAS procedure selects the highest values rather than LM. As such, the ISA selection procedure provides a more accurate representation of the design of the underlying model than the MIM approach and a much more accurate representation than that provided by the VAS selection procedure. Statistical significance The fitting uncertainty (Δ S PFP−LM ) in the regression lines for PFP LM is not a true measure of the significance of the trend ( S PFP−LM ), because Δ S PFP−LM does not consider the selection process for obtaining LM of PFP. 
Therefore, we assess the statistical significance of S PFP−LM and Δ S PFP−LM using another set of MC simulations. In these MC simulations, we work with actual data for PFP from either a reanalysis or GCM to assure the basis set of our randomly generated time series are identical to the PFP time series. The time series for PFP shown in Fig. 3a consists of 41 data points, which could be arranged in more than 3 × 10 49 possible combinations. We use a random number generator to place these 41 PFP data points into 10 million combinations. The ISA selection algorithm is applied to each of the 10 million combinations of PFP, resulting in a selection of the upper quartile (that is, 10 and usually 38 points for the reanalyses and GCMs, respectively) relative to the trend line, following the same algorithm used to select the PFP LM shown in the main article. The corresponding slope ( S PFP−LM ) and uncertainty (Δ S PFP−LM ) is found for each of these combinations. The p-values given in Table 1 for S PFP−LM are equal to the probability that the slope of these random fits exceeds the slope determined from the data. In other words, 18% of the randomly generated combinations of PFP for the ERA5/ERA5.1 basis set (over the 1980–2020 time period) yield a value for S PFP−LM larger than 4.50 d decade −1 . However, for the vast majority of the time series that yields a value of S PFP−LM larger than 4.50 d decade −1 , the value of Δ S PFP−LM associated with the fit is larger than the ±0.19 d decade −1 uncertainty found from the ERA5/ERA5.1 time series. High slopes with large uncertainty are usually dominated either by several low values of PFP LM at the start of the time series of the selected points or a couple of high values of PFP LM towards the end of the time series. As explained below, very few of the randomly generated time-series yield a high value of S PFP−LM in combination with a low value of Δ S PFP−LM . 
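At reduced scale, this shuffling test can be sketched as below. The function names and the small default number of permutations are ours; the paper evaluates 10 million combinations.

```python
import math
import random

def fit_with_err(x, y):
    """Least-squares slope, intercept, and 1-sigma slope uncertainty."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    sse = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    err = math.sqrt(sse / (n - 2) / sxx) if n > 2 else float("inf")
    return slope, intercept, err

def isa_upper_quartile(x, y):
    """Upper-quartile points relative to an iteratively refit trend line."""
    pts = list(zip(x, y))
    keep = max(3, len(pts) // 4)  # 10 points for a 41-year record
    while len(pts) > keep:
        s, b, _ = fit_with_err([p[0] for p in pts], [p[1] for p in pts])
        pts.remove(min(pts, key=lambda p: p[1] - (s * p[0] + b)))
    return pts

def p_values(years, pfp, n_perm=2000, seed=1):
    """Fraction of shuffled series whose selected-point slope (and
    slope/uncertainty ratio) exceeds the observed values."""
    sel = isa_upper_quartile(years, pfp)
    s_obs, _, e_obs = fit_with_err([p[0] for p in sel], [p[1] for p in sel])
    ratio_obs = s_obs / e_obs if e_obs > 0 else float("inf")
    rng = random.Random(seed)
    shuffled = list(pfp)
    hits_s = hits_ratio = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # one random rearrangement of the PFP values
        sel = isa_upper_quartile(years, shuffled)
        s, _, e = fit_with_err([p[0] for p in sel], [p[1] for p in sel])
        if s > s_obs:
            hits_s += 1
        if e > 0 and s / e > ratio_obs:
            hits_ratio += 1
    return hits_s / n_perm, hits_ratio / n_perm
```

The two returned fractions play the role of the p-values quoted for S PFP−LM and S PFP−LM /Δ S PFP−LM.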
We therefore examine the quantity S PFP−LM /Δ S PFP−LM as a measure of the statistical significance of both the temporal rise in PFP LM as well as the uncertainty in this rise. Of the randomly generated time series, 99.992% yield a value of S PFP−LM /Δ S PFP−LM that is smaller than the actual value of 23.6 (4.50 d decade −1 divided by 0.19 d decade −1 ). Consequently, a p-value of 8 × 10 −5 is associated with the entry for S PFP−LM /Δ S PFP−LM based upon ERA5/ERA5.1 data in Table 1 , and we state in the main article that the value of S PFP−LM and the associated uncertainty are statistically significant at better than the 2σ confidence level. While the shapes of the probability density functions of S PFP−LM and S PFP−LM /Δ S PFP−LM are not strictly Gaussian, the fall-offs of their tails are Gaussian-like (i.e., kurtosis close to 3; more specifically, the kurtosis for S PFP−LM is 2.1 and for S PFP−LM /Δ S PFP−LM is 3.4). Therefore, we are comfortable assigning better than 2σ confidence to S PFP−LM /Δ S PFP−LM , since 8 × 10 −5 is so much less than 0.05, the 2σ confidence marker for a strictly Gaussian distribution. We have similarly estimated the statistical likelihood of achieving the reported values of S PFP−LM and S PFP−LM /Δ S PFP−LM from the 150-year time series of PFP from each CMIP6 GCM simulation constrained by SSP5-8.5, again using 10 million of the possible combinations of PFP from each basis set. The vast majority of the resulting p-values indicate statistical significance at close to or better than the 2σ level of confidence for both GCM-based values of S PFP−LM as well as S PFP−LM /Δ S PFP−LM (Table 1 ).

Vortex boundary

The vortex boundary used throughout our study is based on the value of 36 s −1 for nPV. This definition of the vortex boundary is commonly used in other studies of Arctic ozone, because nPV = 36 s −1 tends to be closely associated with the maximum horizontal gradient of potential vorticity 20 , 26 , 109 . 
To check whether other definitions of the vortex boundary would materially alter the results, Supplementary Fig. 7 shows trends in S PFP−LM found by the ISA algorithm applied to data from ERA5/ERA5.1 combined with ERA5 BE (preliminary version) from 1965 to 2020 for four alternate definitions of the vortex boundary, along with the resulting trends and p-values for the quantity S PFP−LM /Δ S PFP−LM . For each alternate vortex boundary definition, the resulting trends in S PFP−LM are positive and highly statistically significant. The numerical values for PFP do vary based on how the boundary is specified and differ from those shown in Fig. 3b of the main paper, due largely to the use of the volume of the Arctic vortex in the denominator of the definition of PFP (Eq. 4 ).

SSU & TOVS versus AMSU & ATOVS

In the paper, we state that similar results are obtained for trends in PFP (differences within respective uncertainties) when considering temperature from the SSU and TOVS space-borne systems versus the AMSU and ATOVS systems. The transition occurred in the years 1998–1999 (ref. 40 ). Supplementary Fig. 8 shows that similar results for trends in PFP LM (differences within respective uncertainties) are found when considering data obtained only prior to and only after this transition.

Aerosol reactivity potential

The main article states: we arrive at remarkably similar conclusions based upon consideration of the temperature at which chlorine is activated on aerosols 6 , 32 , rather than T NAT , because these two temperature thresholds are so similar. The term aerosol reactivity potential (ARP) is similar to PFP, except in Eq. ( 1 ) the quantity T NAT is replaced by T ACL , which represents the temperature at which chlorine is activated. Values of T ACL are computed as a function of H 2 O and sulphate surface area density at 210 K using Eq. ( 1 ) and information in the caption of Fig. 5 of Drdla and Müller 6 . 
We use potential temperature as the vertical coordinate and the values of coefficients given in Table 1 to find T ACL . The entire analysis is then repeated (i.e., analogues of A PSC and V PSC , termed A ACL and V ACL , are computed as in Eq. ( 2 )), resulting in ARP being computed from the analogue of Eq. ( 4 ) with V ACL rather than V PSC . Supplementary Fig. 9 shows measured and modelled ΔO 3 as a function of ARP (panel a) and OLP found using ARP rather than PFP (panel b). Supplementary Fig. 10 shows trends in ARP from the four reanalysis data centres used in Fig. 3 . The numerical values for the slope of the LM of ARP ( S ARP−LM ) differ by only a small amount (typically 10%) compared to those given for S PFP−LM in the main article. Finally, Supplementary Fig. 11 shows the time series of ARP and the local maximum in ARP selected using ISA, for the 4 GCMs highlighted in Fig. 5 . The results shown in Supplementary Figs. 9 – 11 are quite similar to those shown in Figs. 1 , 3 and 5 of the main article because T NAT is so similar to T ACL . In the actual Arctic stratosphere, denitrification (the removal of HNO 3 by the physical sedimentation of PSCs) will prolong ozone loss 60 and alter T NAT due to suppression of gas-phase HNO 3 (ref. 57 ). However, the volume of air for which chlorine is activated by heterogeneous chemistry is governed most strongly by temperature. The close visual relation between Figs. 1 , 3 and 5 and Supplementary Figs. 9 , 10 , and 11 supports the validity of the definition of OLP used in the main paper, which does not explicitly represent denitrification for the computation of T NAT .

Stratospheric H 2 O

Figure 2 contains our projections of stratospheric H 2 O accounting for contributions from the oxidation of CH 4 (Fig. 2b ), warming of the tropical tropopause (Fig. 2c ), and the sum of both forcings (Fig. 2d ). 
The effect of oxidation of CH 4 on stratospheric H 2 O is based upon analysis of satellite observations of CH 4 obtained by the HALOE instrument in the Arctic polar vortex, as shown in Figure 12 of Müller et al. 51 for April 1993. In the Arctic stratosphere, between about 450 and 600 K potential temperature, the HALOE measurement of CH 4 exhibits a near-constant (with respect to altitude) value of ∼ 0.5 ppmv. The age of air in the Arctic lower stratosphere (i.e., the mean transit time from the tropical tropopause to the polar, lower stratosphere) tends to be about 6 years 30 . Hence, the appropriate comparison for surface conditions is the global mean abundance of CH 4 in January 1987, which was 1.639 ppmv according to . Consequently, we infer that about 70% of the available CH 4 (at the time this air parcel entered the stratosphere) has been converted to H 2 O, based on the simple calculation fraction = (1.639 ppmv − 0.5 ppmv)/(1.639 ppmv) ≈ 0.70. The time series for H 2 O shown in Fig. 2b is found from: $$\varDelta {{\rm{H}}}_{2}{\rm{O}}({\rm{yr}})=2\times 0.7\times {{{\rm{CH}}}_{4}}^{{\rm{SURFACE}}}({\rm{yr}}-6)$$ (5) $${{\rm{H}}}_{2}{\rm{O}}({\rm{yr}})=\varDelta {{\rm{H}}}_{2}{\rm{O}}({\rm{yr}})+2.306\,{\rm{ppmv}}$$ (6) where the leading 2 in Eq. ( 5 ) accounts for the production of two H 2 O molecules upon loss of every CH 4 molecule, the factor of 0.7 and the 6-year lag have been explained just above, and the constant value of 2.306 ppmv is used to force polar stratospheric H 2 O to equal 4.6 ppmv in year 1990. The historical and future surface CH 4 time series that underlie Fig. 2b have been obtained from the various SSP scenarios 41 . Since the numerical value of the 0.7 term in Eq. ( 5 ) depends on stratospheric OH (mainly), stratospheric Cl (second order), and the strength of the BDC, this conversion factor could change over time. Our approach is simple, yet it captures the primary, first-order effect of changing CH 4 on polar stratospheric H 2 O. 
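Eqs. ( 5 ) and ( 6 ) reduce to a one-line function; the sketch below (with our own function name) reproduces the 4.6 ppmv anchor for 1990 from the 1.639 ppmv surface value quoted above.

```python
def polar_h2o(ch4_surface_6yr_earlier_ppmv):
    """Polar stratospheric H2O in ppmv from Eqs. (5) and (6):
    two H2O per CH4 oxidised, 70% of CH4 converted, plus the
    2.306 ppmv constant that anchors H2O to 4.6 ppmv in 1990."""
    delta_h2o = 2.0 * 0.7 * ch4_surface_6yr_earlier_ppmv  # Eq. (5)
    return delta_h2o + 2.306                              # Eq. (6)

# Surface CH4 of 1.639 ppmv six years earlier gives ~4.60 ppmv
print(polar_h2o(1.639))
```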
The satellite-based data record for H 2 O that affords coverage of the polar regions starts in 1984, but trends are difficult to discern due to offsets between retrievals from various instruments that are commonly larger than the expected increase in polar H 2 O since 1984 (ref. 110 ). The projections for the effect of warming of the tropical tropopause shown in Fig. 2c are based on the analysis of output from CMIP6 GCMs shown in Figure 15 of Keeble et al. 43 . They document results from ten CMIP6 GCMs, for the four SSP scenarios shown in Fig. 2c , plus a few additional SSPs. We have computed a multi-model mean from the time series for nine of the ten GCMs, neglecting results from the UKESM1-0-LL GCM, because the results from this GCM appear to be an outlier (large future rise in stratospheric H 2 O) compared to results from the other nine GCMs. We then apply a time-invariant, constant offset to this time series such that stratospheric H 2 O equals 4.60 ppmv in 1990. A few of the GCMs did not archive output for all four of the SSP scenarios used in our paper; in this case, we simply averaged output from all available GCMs. These ten CMIP6 GCMs tend, on average, to underestimate observed H 2 O in the tropical lower stratosphere 110 by nearly 1 ppmv from 1984 to the present, as shown in the upper panel of Figure 12 of Keeble et al. 43 . The abundance of H 2 O in the tropical lower stratosphere is governed by thermodynamics, whereas the abundance of H 2 O in the polar stratosphere is driven by this process as well as the oxidation of CH 4 . The forecast of rising polar stratospheric H 2 O shown in Fig. 2c is consistent with a recent theoretical analysis of the future evolution of the height and temperature of the tropical tropopause associated with global warming 50 .

ATLAS chemical transport model

Simulations are performed with the ATLAS global Lagrangian Chemistry and Transport Model (CTM) 27 , 109 . 
Model runs are driven by meteorological data from the ERA5 reanalysis 34 . Descent rates are calculated directly from the heating rates provided by the ERA5 reanalysis. From the two different options provided by ECMWF, we use the total (all sky) heating rates and not the clear sky heating rates. The vertical range of the model domain is 350–1900 K and the horizontal resolution is 150 km. The run for winter 2020 starts on 1 September 2019 and ends on 1 May 2020, with the first 30 days consisting of model spin up. Additional runs with a similar setup for the Arctic winters of 2005, 2010, and 2011 are performed. Model values of O 3 , H 2 O, HCl, N 2 O, HNO 3 and CO are initialized from the measurements obtained by the MLS instrument for the particular year (data obtained from ), and ClONO 2 is initialized from a climatology provided by the ACE-FTS instrument at . Initialization of CH 4 , NO x and Br y are as described in Wohltmann et al. 109 . Reaction rates and absorption cross sections are from the 2015 NASA Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies compendium . A common deficiency of CTMs is a pronounced discrepancy between measured and modelled HCl mixing ratios in the Antarctic polar vortex, as described in section 6.1 of Wohltmann et al. 109 . Therefore, a temperature offset of −3 K was used for the calculation of the Henry constant of HCl, which improves this discrepancy. Two additional ATLAS runs were started with the meteorological data of 2019/2020 and with scaling factors for chlorine and bromine relative to 2020, intended to simulate conditions for 2060 and 2100, respectively (Fig. 2a ). The scaling factors for chlorine were 0.667 and 0.455 for 2060 and 2100, respectively, and the scaling factors for bromine were 0.778 and 0.694 for these two years. These scaling factors are based on the contributions of chlorine and bromine to polar EESC, found as described in the caption of Fig. 2 . 
The main article states: we use an exponent of 1.2 for EESC because this choice leads to the largest value of r 2 for the six ATLAS runs shown in Fig. 1b . Supplementary Fig. 12 illustrates the value of r 2 found as a function of the exponent η in the expression: $$\frac{{{\rm{EESC}}({\rm{yr}})}^{{\rm{\eta }}}}{{{\rm{EESC}}}_{{\rm{MAX}}}^{{\rm{\eta }}}}\times {\rm{PFP}}({\rm{yr}})$$ (7) The ATLAS runs for winters 2005, 2010, 2011, 2020, 2060, and 2100 exhibit a well-defined maximum in r 2 at η = 1.2, due to the large variation of EESC over these years. Conversely, the ozonesonde determinations of ΔO 3 cannot be used to constrain η because EESC varies by only ~15% from 1993 to 2020. The ozonesonde data are quite valuable for showing the near-linear dependence of ΔO 3 with PFP (Fig. 1a ). Values of r 2 as a function of η , for the expression EESC η ×ARP, are also shown in Supplementary Fig. 12 . The simulation of ΔO 3 by the ATLAS model also exhibits a maximum near η of 1.2 when T ACL is used rather than T NAT , reinforcing the statement in the main article: remarkably similar conclusions based upon consideration of the temperature at which chlorine is activated on aerosols 6 , 32 , rather than T NAT . Exponent for EESC In the main article, we assess the uncertainty in ΔO 3 REG using lower and upper limits of 1 and 1.4 as the exponent for EESC in the expression for OLP. The lower limit of 1 corresponds to a linear dependence of chemical loss of Arctic O 3 on EESC, based upon the work of Douglass et al. 111 who showed that ΔO 3 for the Arctic vortex varies linearly with EESC for fixed values of V PSC , for values of EESC spanning 1990–2016. The upper limit of 1.4 was chosen because r 2 has the same value for η = 1 and η = 1.4 in Supplementary Fig. 12, and also because Jiang et al. 
112 showed that the chemical loss of Antarctic ozone varies as a function of chlorine loading to the power of 1.4 for 1980–1990, a period of rapid rise in the chlorine component of EESC.

General circulation models (GCMs) and the QBO of zonal wind

This paper relies extensively on archived GCM output. The computation of PFP is based upon analysis of horizontally and vertically resolved fields of temperature and pressure from 26 CMIP6 GCM simulations constrained by SSP5-8.5 projections of GHGs and 27 CMIP5 GCM runs constrained by RCP 8.5. Supplementary Fig. 13 shows the time series of PFP from CMIP5 GCMs in a manner analogous to Fig. 4 of the main article, which provides results for CMIP6 GCMs; Supplementary Table 4 provides tabular information regarding S PFP−LM , Δ S PFP−LM , the temperature threshold offset for the existence of PSCs, and p -values for CMIP5 GCMs in a manner analogous to Table 1 . The modelling centre and literature reference for each of these GCM simulations are given in Supplementary Table 1 . On the CMIP5 archive, model output is stored using a nomenclature of rLiMpN, where r refers to realization, i refers to initialization method, p refers to physics version, and L, M, and N are integers used to distinguish results from different runs of a particular GCM. Based upon file availability, we have used r1i1p1 output for all GCM runs except for r6i1p1 from CCSM4 for both the historical and RCP 8.5 simulations, and r6i1p1 for the historical and r2i1p1 for the RCP 8.5 runs from GISS-E2-H as well as GISS-E2-R. For CMIP6 output, the nomenclature rLiMpNfO is used, where r, i, and p are the same as described above, f refers to the forcing index, and O is a fourth integer. 
In this study, all output is from r1i1p1f1 files except for the use of r1i1p1f2 for historical and SSP runs from the CNRM-CM6-1, CNRM-CM6-1-HR, CNRM-ESM2-1, MIROC-ES2L, and UKESM1-0-LL GCMs, and the use of r1i1p1f3 for the historical and SSP runs from the HadGEM3-GC31-LL and HadGEM3-GC31-MM GCMs. Figure 7 of the paper shows the effect of time-dependent stratospheric H 2 O on the time series of PFP and OLP from the EC-Earth3 GCM. Supplementary Fig. 14 shows the effect of variable H 2 O on PFP and OLP from the other three GCMs that appear in Fig. 5 . These other GCMs exhibit similar behaviour to the results from EC-Earth3 illustrated in Fig. 7 , supporting the robustness of the time series for PFP and OLP across numerous GCMs. In the main article, we state that examination of the tropical zonal wind from the GCMs indicates that the CMIP6 models tend to provide a better representation of the QBO than was evident in output from CMIP5 GCMs. This feature of the GCMs is illustrated in Supplementary Figs. 15 (reanalysis data) and 16 (GCMs). The model output shown in Supplementary Fig. 16 was mainly based upon archived monthly mean zonal wind fields from each GCM, complemented above 10 hPa by monthly means computed from daily/six-hourly data where needed; data for each panel are shown up to the highest altitude of each GCM. As can be seen from this figure, the representation of the QBO is considerably more realistic within the CMIP6 GCMs than the CMIP5 models. Supplementary Fig. 17 is similar to Fig. 9 , except trends are shown for ΔO 3 REG and OLP from the 20 CMIP6 GCMs that submitted results for all four SSPs to the CMIP6 archive, grouped into those that either: (a) exhibit a realistic QBO based upon our cursory examination or (b) do not exhibit a rendering of the QBO. There is little difference in the behaviour of ΔO 3 REG and OLP between these two groupings of the CMIP6 GCMs. 
As noted in the main article, a more quantitative analysis of the representation of the QBO in these models reveals deficiencies in the mean amplitude below 20 hPa 54 and substantial effort is currently being directed towards improving the representation of the QBO within GCMs 55 . Further considerations In the main article we state: the temporal evolution of ΔO 3 REG and OLP found using results from the four GCMs with interactive chemistry is about 20–25% lower at the end of the century than that found for the other 16 CMIP6 GCMs; nonetheless, ΔO 3 REG remains close to the contemporary value until the end of the century for the SSP3-7.0 and SSP5-8.5 simulations conducted using these interactive GCMs. This finding is illustrated by Supplementary Fig. 18 , similar to Fig. 9 except results are shown only for the four CMIP6 GCMs with fully interactive stratospheric chemistry. Supplementary Fig. 19 is also similar to Fig. 9 , except trends are shown for the quantity: $$\frac{\mathrm{EESC}(\mathrm{yr})^{\eta}}{\mathrm{EESC}_{\mathrm{MAX}}^{\eta}}\times \mathrm{ARP}(\mathrm{yr})$$ (8) computed from CMIP6 GCM output. This figure reinforces the notion that remarkably similar conclusions are found upon consideration of the temperature at which chlorine is activated, rather than the PSC existence temperature. Finally, Supplementary Fig. 20 shows results similar to Figs. 8a and 9a , in this case illustrating how ΔO 3 REG and OLP vary as a function of time for a multi-model mean of the 27 CMIP5 GCMs that archived results for RCP 8.5, the 26 CMIP6 GCMs that recorded output for SSP5-8.5, and a grand multi-model ensemble of all 53 GCM runs conducted using an end-of-century RF of climate equal to 8.5 W m −2 . 
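Equation (8) simply weights ARP(yr) by the normalized chlorine loading raised to the power η. A minimal numerical sketch; the input values are illustrative, and η = 1.4 merely echoes the chlorine-loading power law for Antarctic loss quoted earlier, not necessarily the exponent used in the paper:

```python
def eesc_scaled_arp(eesc_yr, eesc_max, arp_yr, eta=1.4):
    """Equation (8): scale the ozone-loss proxy ARP(yr) by the
    normalized chlorine loading, (EESC(yr)/EESC_MAX)**eta.
    eta = 1.4 is an illustrative default; the inputs here are
    hypothetical, not values from the paper."""
    return (eesc_yr / eesc_max) ** eta * arp_yr
```

When EESC(yr) equals EESC_MAX the weight is 1 and the expression reduces to ARP itself; smaller chlorine loadings damp the projected loss.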
Supplementary Figures 17 to 20 provide further evidence that the future rise in GHGs has the potential to cause a significant cooling of the Arctic stratosphere leading to conditions conducive to large, seasonal loss of Arctic O 3 , particularly with future levels of stratospheric H 2 O as shown in Fig. 2d . Data availability The data that support the findings of this study are available in Zenodo with the identifier . ERA5/ERA5.1/ERA5 BE (preliminary version) data are available at (ERA5) as well as (ERA5 BE prelim). CFSR and CFSv2 data are provided by NOAA’s National Centers for Environmental Prediction and are available at (CFSR) and (CFSv2). MERRA-2 data are provided by the Global Modeling and Assimilation Office at NASA Goddard Space Flight Center and are available at ( ). The Japanese 55-year Reanalysis (JRA-55) project was carried out by the Japan Meteorological Agency and the data are available at . That dataset was collected and provided under the Data Integration and Analysis System (DIAS, Project No. JPMXD0716808999), which has been developed and operated by the Ministry of Education, Culture, Sports, Science and Technology. CMIP5 and CMIP6 GCM output are provided by the World Climate Research Programme’s Working Group on Coupled Modelling and are available at and . Code availability Code relating to this study is available from the corresponding author on request.
There is a race going on high in the atmosphere above the Arctic, and the ozone layer that protects Earth from damaging ultraviolet (UV) radiation will lose the race if greenhouse gas emissions aren't reduced quickly enough. A new study from an international team of scientists, including University of Maryland Professor Ross Salawitch, shows that extremely low winter temperatures high in the atmosphere over the Arctic are becoming more frequent and more extreme because of climate patterns associated with global warming. The study also shows that those extreme low temperatures are causing reactions among chemicals humans pumped into the air decades ago, leading to greater ozone losses. The new findings call into question the commonly held assumption that ozone loss would grind to a halt in just a few decades following the 2010 global ban on the production of ozone-depleting chemicals called chlorofluorocarbons (CFCs) and halons. The study, which was jointly conducted by UMD, the Alfred Wegener Institute's Helmholtz Centre for Polar and Marine Research, and the Finnish Meteorological Institute, was published in the journal Nature Communications on June 23, 2021. "We're in a kind of race between the slow and steady decline in CFCs, which take 50 to 100 years to go away, and climate change, which is causing polar vortex temperature extremes to become colder at a rapid pace," said Ross Salawitch, who is a professor in the UMD Department of Atmospheric and Oceanic Science, the Department of Chemistry and Biochemistry, and the Earth System Science Interdisciplinary Center. "The increasingly cold temperatures create conditions that promote ozone depletion by CFCs. So, even though these compounds are slowly going away, Arctic ozone depletion is on the rise as the climate changes." New data from the study showed the lowest Arctic polar vortex temperatures and the highest ozone losses on record in 2020, beating the previous records set in 2011. 
The polar vortex is a relatively self-contained, low-pressure system that forms in the stratosphere (at an altitude of about 12 to 50 kilometers, or 7.5 to 31 miles) over the Arctic every autumn and stays for varying durations throughout the winter to spring. The pattern of warm and cold winter temperatures in the polar vortex is very irregular, so not every winter is extremely cold. But the trend toward more frequent and more extreme low temperatures in the polar vortex concerns the researchers, because those conditions promote the formation of clouds, which in turn drive ozone loss in the polar stratosphere. Most of the chlorine and a significant amount of the bromine in the stratosphere come from the breakdown of CFCs, halons and other ozone-depleting substances. Normally, within the Arctic polar vortex the chlorine is non-reactive, but clouds provide the right conditions for the chlorine to change form and react with bromine and sunlight to destroy ozone. Despite the drastic reduction in industrial production of CFCs and halons since the Montreal Protocol in 1987 and the global ban that followed in 2010, these long-lasting compounds are still abundant in the atmosphere. According to the World Meteorological Organization, atmospheric chlorine and bromine produced by humans are not expected to fall below 50% of their highest levels until the end of this century. To determine what this situation means for the future, the researchers projected ozone loss out to the year 2100 based on the long-term temperature trend in the polar vortex and the expected decline in chlorine and bromine compounds. They based their predictions on the output from 53 top climate models used by the Intergovernmental Panel on Climate Change. "All but one of the climate models we looked at show that exceptionally cold winters in the polar vortex will get colder over time," Salawitch said. "And the more greenhouse gas emissions there are, the steeper the trend, which means greater ozone depletion." 
Combining these projections with analyses of meteorological data from the past 56 years, the researchers confirmed that the Arctic is already experiencing a significant trend toward lower stratospheric temperatures and associated increases in ozone losses. What's more, their observations reveal that these trends are occurring at a rate consistent with the fastest climate models. "We have been saying that a train is coming for a number of years now," said Salawitch, pointing to research papers he published in 2004 and 2006 that showed extreme winters in the Arctic were becoming colder. "We've now seen the train whizzing by with record ozone loss in 2011 and now in 2020. So, this paper is really a wake-up call that something is happening in the atmosphere that's really important for ozone, and it looks like greenhouse gases are driving it." Salawitch and his colleagues do not yet fully understand how increasing greenhouse gas emissions and the associated changes to global climate are causing the extreme cold winters in the stratospheric layer of the polar vortex. But some of the underlying mechanisms are understood. Global warming occurs in part because greenhouse gases trap heat closer to Earth's surface, which allows cooling of the upper layers of the stratosphere, where the ozone layer is located. Warming at the surface causes changes to prevailing wind patterns, and the researchers suggest that these changes also produce lower temperatures in the polar vortex. The researchers also note that recent years have seen a rapid increase in methane, a more powerful greenhouse gas than carbon dioxide, in the lower atmosphere. As this gas travels to the stratosphere, it increases humidity, which also leads to conditions that promote ozone-destroying chemical reactions in the Arctic. 
Because ozone filters much of the sun's potentially harmful UV radiation, a depleted ozone layer over the Arctic can result in more UV radiation reaching the surface of the Earth over Europe, North America and Asia when the polar vortex dips south. But there is hope for avoiding future ozone depletion, according to the researchers. Their study shows that substantial reductions in greenhouse gas emissions over the coming decades could lead to a steady decline in conditions that favor large ozone loss in the Arctic stratosphere. The research paper, "Climate change favours large seasonal loss of Arctic ozone," by Peter von der Gathen, Rigel Kivi, Ingo Wohltmann, Ross J. Salawitch and Markus Rex, was published in the journal Nature Communications on June 23, 2021.
10.1038/s41467-021-24089-6
Medicine
New targets for CAR-T cell therapy against acute myeloid leukemia through AI-assisted analysis
Gottschlich, A. et al. Single-cell transcriptomic atlas-guided development of CAR-T cells for the treatment of acute myeloid leukemia, Nature Biotechnology (2023). DOI: 10.1038/s41587-023-01684-0. www.nature.com/articles/s41587-023-01684-0 Journal information: Nature Biotechnology
https://dx.doi.org/10.1038/s41587-023-01684-0
https://medicalxpress.com/news/2023-03-car-t-cell-therapy-acute-myeloid.html
Abstract Chimeric antigen receptor T cells (CAR-T cells) have emerged as a powerful treatment option for individuals with B cell malignancies but have yet to achieve success in treating acute myeloid leukemia (AML) due to a lack of safe targets. Here we leveraged an atlas of publicly available RNA-sequencing data of over 500,000 single cells from 15 individuals with AML and tissue from 9 healthy individuals for prediction of target antigens that are expressed on malignant cells but lacking on healthy cells, including T cells. Aided by this high-resolution, single-cell expression approach, we computationally identify colony-stimulating factor 1 receptor and cluster of differentiation 86 as targets for CAR-T cell therapy in AML. Functional validation of these established CAR-T cells shows robust in vitro and in vivo efficacy in cell line- and human-derived AML models with minimal off-target toxicity toward relevant healthy human tissues. This provides a strong rationale for further clinical development. Main Chimeric antigen receptor T cells (CAR-T cells) are human-derived effector cells that are genetically engineered to therapeutically target a specific epitope on malignant cells 1 . CAR-T cells targeting the B cell lineage antigens cluster of differentiation 19 (CD19) or B cell maturation antigen (BCMA) have shown clinical efficacy in heavily pretreated individuals suffering from different B cell malignancies, such as B cell lymphoma, B cell acute lymphoblastic leukemia and multiple myeloma 2 , 3 , 4 . However, CAR-T cells targeting non-B cell-associated epitopes have yet to show similar response rates 5 . For instance, in myeloid malignancies, such as acute myeloid leukemia (AML), common target structures are often coexpressed on vital tissues, such as endothelial cells or hematopoietic stem and progenitor cells (HSPCs), increasing the risk for on-target off-tumor toxicity 6 , 7 . 
Identifying safe target structures is thus pivotal to translating the vast potential of CAR-T cell therapy to myeloid neoplasms. AML is the most common acute leukemia in adults, and its molecular heterogeneity has complicated the successful development of new therapeutic agents 8 . Despite upfront curative intent in most individuals with combinatorial chemotherapy, disease relapse is frequent, occurring in over 50% of treated individuals 9 . After relapse, allogeneic hematopoietic stem cell transplantation (allo-HSCT) remains the only curative approach, but even then, long-term survival probabilities are below 20%. Therefore, innovative treatment options represent a high unmet medical need. Currently, CAR-T cells targeting the AML-associated antigens CD33 and interleukin-3 receptor-α (IL3RA, CD123) are undergoing clinical investigation. Due to preclinical evidence of off-tumor toxicity toward HSPCs, most clinical trials are evaluating the potential of anti-CD123 or anti-CD33 CAR-T cells as a bridge-to-transplant regimen before allo-HSCT. Early reports of these trials have shown only limited therapeutic efficacy 10 , 11 , 12 . Yet, more complete results of these clinical studies in AML are eagerly awaited. Meanwhile, other targets, such as CD70, C-type lectin-like molecule-1, FMS-like tyrosine kinase-3 (FLT3), CD44 variant 6 (CD44v6), sialic acid-binding Ig-like lectin-6 (Siglec-6) or CD117, have been tested in preclinical studies as alternative CAR targets 13 , 14 , 15 , 16 , 17 . However, clinical validation is pending, and the expression profiles of most of these targets raise at least some uncertainty regarding their clinical safety and efficacy. Newly developed CAR-T cells are often directed at target structures that have already been used for antibody therapy. By contrast, unbiased de novo target screenings for CAR-T cell therapy have rarely been conducted 18 . 
In addition, until recently, off-tumor antigen projections could only leverage bulk sequencing data, missing detailed information about cell-type-specific target antigen expression patterns. Conveniently, the revolution in single-cell technologies in the last decade has generated massive single-cell expression datasets that provide precise information about the transcriptomic anatomy of healthy and malignant cells 19 , a mostly untapped resource for therapeutic development, at least in the context of de novo antigen predictions and CAR-T cell development. These advancements allow in-depth on- and off-tumor antigen prediction 20 , offering unique insights into healthy and malignant cells at an unmatched resolution. We thus developed a single-cell RNA-sequencing (scRNA-seq)-based approach specifically tailored to identify promising antigens for CAR-T cell therapy on a discovery AML cohort of 15 individuals 21 . We generated a transcriptomic atlas from publicly available datasets, consisting of over 28,000 healthy and malignant bone marrow cells from these individuals and over 500,000 healthy cells from nine of the most vital human tissues. We screened these data for cell surface antigens expressed on malignant cells with minimal coexpression on healthy cells, including T cells. With rigorous cutoffs, we identified two unrecognized targets for CAR-T cells in AML: colony-stimulating factor 1 receptor (CSF1R) and CD86. We developed CAR-T cells against both targets and tested their efficacy in vitro and in vivo in cell lines and human-derived models, including primary AML blasts. We assessed the safety of these CAR-T cells in vitro using advanced primary cell cultures for target-expressing cell types, demonstrating a better discriminatory capacity than established anti-CD33 CAR-T cells. In addition, we used several in vivo models to mitigate safety concerns. 
Our results illustrate the translational potential of an unbiased scRNA-seq-based screening approach and lay the basis for clinical development of our CAR candidates. Results Development of scRNA-seq-based screening algorithm We created an unbiased scRNA-seq-based discovery approach for identification of CAR targets. To ensure CAR efficacy, a suitable candidate is (1) overexpressed in malignant cells and (2) located on the cell surface. In terms of CAR safety, the candidate should (3) not be expressed on T cells and (4) show minimal expression across vital, healthy tissues (Fig. 1a ). Applying our approach to AML, we used publicly available scRNA-seq data from 15 individuals with AML 21 . From these, a total of 28,404 sequenced healthy and malignant bone marrow cells passed quality control (Fig. 1b,c ; see Methods for a detailed description of quality control steps). For maximal CAR efficacy, we sought to identify candidates with higher expression on malignant HSPC-like cells (herein termed hematopoietic stem cell (HSC)-like and progenitor (Prog)-like) than on healthy cells. Differential gene expression analyses between malignant and healthy HSPCs revealed 96 genes that were strongly overexpressed in HSPC-like cells and were used for further downstream analyses (Extended Data Fig. 1a ). Fig. 1: A scRNA-seq-based screening approach identifies CSF1R and CD86 as potential CAR targets in AML. a , Workflow of computational CAR target antigen identification by stepwise evaluation against a set of criteria for an ideal and effective CAR target antigen. The decreasing numbers of screened AML target genes are shown on the bottom. b , c , UMAP showing 28,404 healthy and malignant cells from data of 15 previously published individuals with AML harboring 15 different mutations 21 . Normalized gene expression values were log transformed. Colors highlight the different cell types ( b ) and condition ( c ). 
Cell annotations are provided; NK cells, natural killer cells; GMP, granulocyte–monocyte progenitors; ProMono, promonocytes; EarlyEry, early erythrocytes; ProB cells, pro-B cells; Mono, monocytes; cDC, conventional dendritic cells; pDC, plasmacytoid dendritic cells; LateEry, late erythrocytes. d , Summary of databases used to identify cell surface coding genes. e , Quantification of T cell expression of newly identified targets. Red crosses indicate targets with high expression on T cells, which were excluded from further analyses. Green check marks indicate no significant expression on T cells. f , Harmonization of 11 scRNA-seq datasets from nine healthy human tissues into a COOTA consisting of 544,764 cells. A detailed summary of all used datasets is provided in Extended Data Fig. 1b . Targets highly expressed in non-immune cell lineages or on cell types in direct proximity to infused T cells (critical cell clusters: arterial, capillary, venous, endothelial and smooth muscle cells) were excluded from further analysis. g , Volcano plot showing the remaining two target antigens with their respective FDR-adjusted log 10 ( P value) and log 2 (fold change) values from differential expression analysis between malignant HSPC-like cells and healthy HSPCs using a t -test with overestimated variance. Dashed lines indicate applied thresholds at a log 2 (fold change) of 2 and P value of 0.01. Full size image To identify candidates accessible for CAR-T cells on the target cell surface, we used OmniPath 22 , a large-scale molecular database, to integrate data from multiple resources 23 , 24 , 25 , 26 into a comprehensive human surface gene library of 4,924 genes (Fig. 1d ). Of the 96 genes overexpressed in HSPC-like cells, 36 were present in this library. Genes that passed all previous filters but showed high expression on T cells (for example, CD52 and CRIP1 ) were excluded from further analysis (Fig. 1e ). 
To minimize on-target off-tumor effects, we processed and harmonized 11 scRNA-seq datasets from nine healthy human tissues (brain, lung, lymph nodes, heart, skin, liver, kidney, colon and esophagus) into a massive cross-organ off-target transcriptomic atlas (COOTA) consisting of over 500,000 single healthy cells (Fig. 1f ) 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 . A detailed summary of all datasets used for COOTA is provided in Extended Data Fig. 1b,c . Targets highly expressed in vital non-immune cell lineages or on cell types of tissues in direct proximity to infused T cells (that is, endothelium, arteries, veins, bronchial vessels, capillary and smooth muscle cells) were excluded from further analyses (Fig. 1f ). Using this stringent and rigorous approach, 12 potential candidates for CAR development remained. Interestingly, most of the described CAR targets for AML ( n = 20) failed the thresholds of our analyses at different levels (Extended Data Fig. 1d ). For example, prototypic AML antigens CD33 and CD123 did not fulfill our strict criteria of overexpression in malignant HSPCs (see Methods for applied thresholds), most likely due to expression of both antigens on healthy HSPCs. In addition, CD123 had high expression levels across endothelial and various lung cell types (see Fig. 2d for detailed analysis). Fig. 2: CSF1R and CD86 are preferentially expressed on malignant HSPC-like cells compared to healthy HSPCs, and off-tumor expression is restricted to infiltrating or tissue-resident immune cells. a , Expression of target and reference genes ( CD123 and CD33 ) in single healthy and malignant cell types. Normalized expression values were log transformed and scaled to unit variance; Cand., candidates; Ref., references. b , Expression of CSF1R and CD86 target genes in malignant (HSC-like and Prog-like; left) and healthy (HSC and Prog; right) stem cells. 
For visualization purposes, normalized expression values of healthy HSPCs and a random subsample of malignant HSPCs were log transformed and scaled to unit variance. Each peak corresponds to a cell, and peak height indicates expression intensity. c , Expression of CSF1R and CD86 target genes in healthy and malignant cells from 15 individuals with AML. Normalized gene expression values were log transformed and visualized in a UMAP embedding. d , Single-cell COOTA screening for target ( CSF1R and CD86 ) and reference ( CD123 and CD33 ) genes. The single-cell transcriptomic atlas consists of a total of 544,764 sequenced cells from nine different organs. Each field represents the mean expression value per cluster. Blank fields indicate cell types not present in a study. e , Representative flow cytometry images of target gene expression on a panel of six different AML cell lines or NALM-6 control cells. Staining for target antigens was performed at least twice; MFI, mean fluorescence intensity. f , Expression of target antigens on human immune cell populations quantified by flow cytometry. Data are shown as mean ± s.e.m. from four different donors; CM, classical monocytes; IM, intermediate monocytes; NM, non-classical monocytes. Full size image To further optimize the safety profile of newly developed CAR-T cells, we reasoned that, if targeted therapies for any of the 12 identified candidates have already been approved by the Food and Drug Administration (FDA), the risk for unexpected, severe on-target off-tumor toxicities of newly developed CAR-T cells will be minimized. In addition, this could shorten the length of time and decrease regulatory hurdles for translation of newly developed CAR-T cells into clinical routines, as safety of target-directed therapies was previously demonstrated. 
Thus, we used an accessible database of all monitored FDA-approved drugs that contains information on the interactions, pharmacology and chemical structures of drugs and drug targets 38 . We identified two targets, CD86 and CSF1R, which have already undergone clinical investigation (Fig. 1g ). To the best of our knowledge, neither anti-CD86 nor anti-CSF1R CAR-T cells have been implicated for CAR-T cell therapy in AML. We thus decided to further investigate their potential. Both antigens were highly expressed across malignant cells in 100% of the individuals with AML with captured malignant blasts (11 of 15; Extended Data Fig. 2a,b ), despite the heterogeneous molecular profile of the participant collective (see van Galen et al. 21 for participant characteristics). To ensure the validity of our analyses and to better reflect the cytogenetic diversity of AML as a disease, we next sought to further increase the size of our cohort. Thus, we obtained a second publicly available scRNA-seq dataset of five additional individuals with AML 39 (Extended Data Fig. 2c ). For the cross-validation of our computational target identification approach, we used scANVI, a semisupervised variational autoencoder 40 , to map the data from Petti et al. 39 onto a newly generated reference map of van Galen et al. 21 (Extended Data Fig. 2e ). In line with the results above, CSF1R and CD86 were preferentially expressed in malignant cells compared to healthy hematopoietic cells (Extended Data Fig. 2d ). Next, after extending our target identification approach to these five additional individuals with AML (Fig. 1a ), both CSF1R and CD86 were again identified as suitable target antigens for CAR therapy in this second AML cohort (Extended Data Fig. 2f,g ). In summary, using two independent single-cell AML cohorts consisting of a total of 20 individuals, we identified CSF1R and CD86 as potential CAR targets for AML therapy. 
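The screening cascade described above (overexpression in malignant HSPC-like cells, surface localization, absence from T cells, no expression in critical COOTA clusters, then cross-referencing against FDA-approved drug targets) can be sketched as a sequence of filters. The gene records below are toy entries; only the log2(fold change) > 2 and P < 0.01 cutoffs are taken from the text:

```python
# Toy per-gene records from the malignant-vs-healthy HSPC comparison,
# plus annotation flags; the entries are invented for illustration.
genes = [
    {"name": "CSF1R",  "lfc": 3.1, "p": 1e-6, "surface": True,
     "t_cell": False, "critical_tissue": False, "fda_target": True},
    {"name": "CD86",   "lfc": 2.4, "p": 1e-4, "surface": True,
     "t_cell": False, "critical_tissue": False, "fda_target": True},
    {"name": "CD52",   "lfc": 2.9, "p": 1e-5, "surface": True,
     "t_cell": True,  "critical_tissue": False, "fda_target": True},
    {"name": "GENE_X", "lfc": 1.2, "p": 0.2,  "surface": True,
     "t_cell": False, "critical_tissue": False, "fda_target": False},
]

def car_target_candidates(records, lfc_min=2.0, p_max=0.01):
    """Apply the screening criteria in sequence, then keep only targets
    with an FDA-approved targeted therapy on record."""
    hits = [g for g in records
            if g["lfc"] > lfc_min and g["p"] < p_max  # (1) overexpressed
            and g["surface"]                          # (2) cell surface
            and not g["t_cell"]                       # (3) absent from T cells
            and not g["critical_tissue"]]             # (4) COOTA-safe
    return [g["name"] for g in hits if g["fda_target"]]
```

In this toy cohort, CD52 fails at step (3), mirroring its exclusion in the text, and only the two FDA-vetted survivors remain.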
On- and off-tumor expression analysis of CSF1R and CD86 Next, we benchmarked the two target antigens CSF1R and CD86 against the reference genes CD123 and CD33 to ease interpretation of receptor expression on a transcriptomic level (Fig. 2a–c ). CSF1R was expressed in all six malignant cell clusters but was most highly expressed in monocyte-like or conventional dendritic cell-like clusters. CD86 was most strongly expressed in monocyte-like, promonocyte-like and conventional dendritic cell-like clusters (Fig. 2a ). In terms of expression in malignant HSPC clusters, CSF1R expression was higher than that of CD86 , albeit lower than that of the CD123 and CD33 reference genes (Fig. 2a,b ). By contrast, CD123 and CD33 were detected in healthy HSCs and progenitors, while both CSF1R and CD86 were only minimally expressed among these cells (Fig. 2b ). Visualized in a uniform manifold approximation and projection (UMAP) embedding, the expression profiles of CSF1R and CD86 were very comparable to those of the CD123 and CD33 reference genes (Fig. 2c ). COOTA analysis revealed target antigen expression mainly in immune cells of myeloid origin (monocytes, macrophages and dendritic cells), similar to the peripheral expression profile of CD33 (Fig. 2d ). CSF1R and CD86 were not highly expressed on epithelial or stromal cells (Fig. 2d , top). In organ-specific cell clusters (Fig. 2d , bottom), expression was restricted to microglia cells in the brain, as described in the literature 41 . We next sought to assess expression of the target antigens on a protein level. We performed primary screening using a panel of six different human AML cell lines (THP-1, Mv4-11, OCI-AML-3, PL-21, MOLM-13 and U937) and B cell malignant NALM-6 cells as negative-staining control cells (Fig. 2e ). CSF1R and CD86 were detected on all screened AML cell lines. CD123 and CD33 expression was measured as a reference (Fig. 2e ). 
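Each field of a COOTA-style heat map such as Fig. 2d is the mean normalized expression of a gene within one (tissue, cell type) cluster. A stdlib-only sketch of that aggregation, with invented cell values:

```python
from collections import defaultdict

# Toy normalized expression of one gene per cell, tagged with its
# tissue and cell-type cluster; the values are invented.
cells = [
    ("brain", "microglia",  1.8), ("brain", "microglia",  2.2),
    ("brain", "neuron",     0.0), ("lung",  "macrophage", 1.5),
    ("lung",  "epithelial", 0.1), ("lung",  "epithelial", 0.0),
]

def cluster_means(cell_rows):
    """Mean expression per (tissue, cell type) cluster, i.e. one heat
    map field per cluster."""
    sums, counts = defaultdict(float), defaultdict(int)
    for tissue, ctype, x in cell_rows:
        sums[(tissue, ctype)] += x
        counts[(tissue, ctype)] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

Clusters whose mean exceeds a safety threshold in non-immune or vessel-adjacent cell types would then be flagged, as in the exclusion step described for COOTA.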
Given the similar expression profile on mature, healthy immune cells of our targets to CD33, we decided to use CD33 as the main control for all subsequent experiments. To validate the transcriptomic profiles predicted by COOTA, we assessed receptor expression of each candidate antigen on peripheral blood immune cells from healthy donors using multicolor flow cytometry (Fig. 2f ). In accordance with our transcriptomic prediction, expression of CSF1R and CD86 was mainly restricted to monocytic cell populations with no expression on granulocytes or T cells (Fig. 2f ). Anti-mouse CSF1R CAR-T cells (mCSF1R CART) do not cause toxicity in mice Despite the stringent thresholds set by our approach and our in-depth off-tumor antigen projection, expression patterns of CSF1R and CD86 were still broader than those of candidates in clinical use (CD19 and BCMA), which are almost entirely confined to B cells or B cell subsets 11 . Therefore, we first tested the safety of the developed anti-target CAR-T cells in fully immunocompetent syngeneic mouse models. To ensure similar target expression in mice and humans, we compared expression of candidates in different organs using available bulk sequencing data (Fig. 3a ). CSF1R showed higher expression in organs of both mice and humans, while CD86 was only detected in the spleen. Also, in line with our COOTA prediction, CSF1R is known to be expressed on microglia 42 , raising additional safety concerns. scRNA-seq analysis of archived mouse brain tissue 27 confirmed expression of Csf1r in microglia and similar expression patterns in tissue-resident myeloid cells (Fig. 3b ). Fig. 3: mCSF1R CART do not cause toxicity in mice. a , Target expression (transcripts per million) across organs in humans (top) or mice (bottom) quantified using bulk RNA-seq. b , Target expression in single mouse brain cells. A UMAP embedding of sequenced brain cells is shown on the left. 
Each peak corresponds to a cell, and peak height indicates expression intensity. Normalized, log-transformed antigen expression per cell type is shown on the right. c , Construct expression on transduced primary mouse T cells. d , Activation of mCSF1R or mEpCAM CART after incubation with plate-bound mCSF1R measured by flow cytometry. e , mCSF1R or mEpCAM CART cocultured with J774A.1-Luc + for 48 h. Cell lysis was quantified by BLI (left). Secretion of mouse IFNγ (mIFNγ) was quantified by enzyme-linked immunosorbent assay (ELISA; right). Data in d and e are shown as mean ± s.e.m of three independent experiments. For data in e (right), statistical significance was calculated by unpaired t- test. f , Treatment schedule for in vivo toxicity assessment of mCSF1R CART. g , Weight curves of mice treated with 3 × 10 6 mCSF1R CART ( n = 9) or mCherry T cells ( n = 11); 6 × 10 6 mEpCAM CART ( n = 5) were transferred as a toxicity control. Error bars indicate s.e.m.; NS, not significant. h , Quantification of mCherry + T cells of the parent population (top; parent population: CD3 + CD8 + cells) or CD11b + cells (bottom) by flow cytometry. Data are shown as mean ± s.e.m. of n = 6 mice. The shown statistical significance applies to day 7. i , Serum cytokine levels 1 d (d1) or 7 d (d7) after ACT. Cytokine levels were measured with LEGENDplex; n = 3 mice. Statistically significant increases in serum cytokine levels (mEpCAM versus mCSF1R CART or mCherry T cell-treated mice) occurred at day 7; IFNγ ( P = 0.0371), CXCL9 ( P = 0.0096) and CXCL10 ( P < 0.0001); TNFα, tumor necrosis factor-α; VEGF, vascular endothelial growth factor; GM-CSF, granulocyte–macrophage colony-stimulating factor. j , Treatment regimen to assess neurological toxicity in CX3CR1–GFP reporter mice. k , Weight curves of mice after i.c. (2 × 10 5 ) or i.v. (3 × 10 6 ) injection of mCSF1R CART or mCherry T cells. l , m , Quantification of transferred T cells ( l ) or microglia ( m ) by TPLSM. 
The indicated P values in m apply to comparisons between all groups. n , Mean body volume of microglia. The indicated P values apply to comparisons between mCSF1R CART i.c. and mCherry T cells i.c. o , Representative maximum intensity projection of microglia or macrophages (green) in CX3CR1–GFP mice after i.c. injection of mCSF1R CART (red, top) or mCherry T cells (red, bottom). White arrowheads indicate microglia and macrophages with higher mean density and mean body volume of microglia; depth from brain surface, 0–100 µm. Data in k , m and n are shown as mean ± s.e.m.; mCSF1R CART: i.c. n = 5 and i.v. n = 4; mCherry T cells: i.c./i.v. n = 2. Data in l are shown as mean ± s.e.m.; mCSF1R CART: i.c. n = 3 and i.v. n = 4; mCherry T cells: i.c. one control mouse. For all data, if not otherwise indicated, statistical significance was calculated by two-way analysis of variance (ANOVA) with a Sidak multiple-comparison correction. Full size image Given the above, we decided to use CSF1R to model potential off-target toxicity in mice. We sequenced an mCSF1R antibody-producing hybridoma and designed second-generation mCSF1R CART (Extended Data Fig. 3a ). Mouse anti-EpCAM CAR-T cells (mEpCAM CART) or mCherry-transduced T cells were used as negative controls for all experiments (Extended Data Fig. 3a ). mCSF1R CAR construct could be efficiently transduced into primary mouse T cells (Fig. 3c ). mCSF1R CART were dose-dependently activated through Fc-immobilized recombinant mouse CSF1R protein, as seen by upregulation of the activation marker CD69 (Fig. 3d , left) and cell surface exposure of degranulation marker CD107a (Fig. 3d , right) compared to mEpCAM CART. To further validate functionality of the developed mCSF1R CART, we investigated killing capacity toward mCSF1R-expressing cell lines. Therefore, we selected the mouse reticulum cell sarcoma cell line J774A.1, which expresses mCSF1R 43 . 
Using flow cytometry, we verified expression of mCSF1R on J774A.1 cells, while mEpCAM was not detected (Extended Data Fig. 3b ). Coculture of mCSF1R or mEpCAM CART with J774A.1 tumor cells demonstrated efficient lysis of J774A.1 cells by mCSF1R CART (Fig. 3e , left). As a marker of selective activation, high amounts of interferon-γ (IFNγ) were secreted by mCSF1R CART (Fig. 3e , right). Next, we used in vivo experiments to assess the risk of on-target toxicities. Initially, mCSF1R CART or control cells injected intravenously (i.v.) into healthy C57BL/6 mice showed only limited engraftment (Extended Data Fig. 3c–e ). To enhance persistence of the T cells, mice were next preconditioned using whole-body irradiation (WBI; 5 Gy) 5 d before adoptive cell transfer (ACT) of mCSF1R CART (Fig. 3f ). High counts of mEpCAM CART were used as positive controls, while mCherry T cells were used as negative controls. Following transfer of T cells, body weight, a sensitive surrogate for toxicity in mice, did not measurably change in mCSF1R CART-treated animals (Fig. 3g ). In comparison, as described in the literature 44 , mEpCAM CART-treated mice rapidly lost weight 1 week after ACT (Fig. 3g ). On day 7, when mEpCAM CART-treated mice reached the predefined experimental endpoint criteria, organs were collected for subsequent analyses. Remaining mCSF1R CART- or mCherry T cell-treated mice were killed 2 weeks after ACT, and organ-derived cell suspensions were analyzed by flow cytometry. We detected higher percentages of mCSF1R CART than mCherry-transduced T cells in all organs, indicative of better persistence (or antigen-dependent proliferation) of mCSF1R CART (Fig. 3h , top). We observed lower numbers of tissue-resident CD11b + cells in the kidney, liver and lung but not in other analyzed organs (Fig. 3h , bottom), most likely due to on-target effects of mCSF1R CART.
Multiplex serum cytokine measurements revealed no differences in cytokine levels on day 7 after ACT between mCSF1R CART- and mCherry T cell- or PBS-treated mice (Fig. 3i ). By contrast, high levels of proinflammatory cytokines, such as IFNγ, CXCL9 or CXCL10, were detected in the sera of mice that received mEpCAM CART (Fig. 3i ). Similarly, serum levels of clinically used markers of organ damage (for example, urea, bilirubin and liver enzymes) were elevated in mice treated with mEpCAM CART but not in mice that received mCSF1R CART or mCherry T cells (Extended Data Fig. 3g ). Finally, we performed histopathological analysis of organs with known high expression of CSF1R. mCSF1R CART-treated mice did not exhibit any signs of organ damage in hematoxylin and eosin-stained lungs, livers or spleens (Extended Data Fig. 3h ). Notably, as previously reported 44 , lungs of mEpCAM CART-treated mice showed thickening of the alveolar epithelium, indicative of on-target off-tumor toxicities of the transferred mEpCAM CART (Extended Data Fig. 3h ). To investigate the homing and killing potential of CAR-T cells in the brain, we made use of CX3CR1–GFP reporter mice, which enable direct visualization of CAR-T cell–microglia interactions by two-photon laser scanning microscopy (TPLSM). After implantation of cranial windows, mCSF1R CART or mCherry-transduced T cells were injected either i.v. or intracranially (i.c.) into CX3CR1–GFP reporter mice (Fig. 3j–o ). T cell–microglia interactions, changes in microglia morphology and overall microglia counts were monitored using TPLSM for a total of 28 d (Fig. 3l–o ). Again, we observed no changes in weight or behavior across all treatment groups (Fig. 3k ). We detected high T cell numbers following i.c. injection of mCSF1R CART or mCherry T cells (Fig. 3l,o ). These numbers gradually declined over the course of 28 d, regardless of whether mice received mCSF1R CART or mCherry T cells.
At day 28, no transferred T cells could be detected in any group (Fig. 3l,o ). Furthermore, microglia numbers did not substantially differ between any of the groups (Fig. 3m,o ). Following i.c. injection of T cells, the mean body volume of microglia increased, most likely due to activation of the cells 45 (Fig. 3n,o ). This activation was most pronounced in mice injected with mCSF1R CART (Fig. 3n,o ). However, by day 28, signs of microglia activation had diminished in all groups (Fig. 3n,o ). After i.v. injection of T cells, we detected neither mCSF1R CART nor mCherry control T cells in the brain and observed no signs of microglia activation or depletion (Fig. 3k–o and Extended Data Fig. 3i ). Our results suggest that, despite expression of target antigens on tissue-resident immune cells in different organs and on microglia, there were no relevant safety signals that would prevent further therapeutic development. Anti-human CAR-T cells exhibit high potency in AML xenograft models Having established the safety of CSF1R targeting in various syngeneic mouse models, we next aimed to validate the targets in human models. We cloned an anti-human CSF1R-binding single-chain fragment variable (scFv) into the preexisting anti-mCSF1R CAR backbone, which allows direct cross-comparison of activation thresholds between the anti-mouse and anti-human CAR-T cells. In addition, we created two fully human anti-CSF1R CAR constructs harboring either a CD8 or CD28 hinge domain (hCSF1R CART 1–3; Extended Data Fig. 4a , left). First, we extensively cross-compared the functionality of the different anti-hCSF1R CAR constructs. All constructs could be efficiently introduced into primary human T cells (Extended Data Fig. 4b ) and were dose-dependently activated by recombinant plate-bound hCSF1R protein (Extended Data Fig. 4c ). CAR products efficiently lysed all six human AML cell lines tested but not antigen-negative NALM-6 cells (Extended Data Fig. 4d ).
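In these coculture assays, killing is read out as loss of BLI signal from luciferase-positive target cells relative to target-only control wells. A minimal sketch of that normalization (a common convention for luciferase-based killing assays; the paper does not spell out its exact formula, and the luminescence values below are purely illustrative):

```python
def specific_lysis(sample_bli, target_only_bli):
    """Percent specific lysis from a BLI killing assay.

    Luminescence is proportional to the number of viable luciferase+
    target cells, so killing appears as signal loss versus a
    target-only control well.
    """
    if target_only_bli <= 0:
        raise ValueError("control signal must be positive")
    return max(0.0, (1.0 - sample_bli / target_only_bli) * 100.0)

# Illustrative E:T titration (arbitrary luminescence units).
control = 1_000_000
for et, signal in [("2:1", 150_000), ("0.5:1", 400_000), ("0.1:1", 900_000)]:
    print(et, round(specific_lysis(signal, control), 1))
```

Higher E:T ratios leave less residual signal and hence higher computed lysis; samples brighter than the control are clamped to 0% rather than reported as negative lysis.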
Constructs harboring CD8 hinge domains showed a tendency for higher lytic potency at lower effector-to-target (E:T) cell ratios (Extended Data Fig. 4d ). To evaluate antigen-specific proliferation, we cocultured CSF1R CAR-T cells with AML cell lines for 4 or 7 d. All CSF1R CAR-T cells showed antigen-specific, time-dependent proliferation (Extended Data Fig. 4e ). Absolute quantification of T cell numbers revealed a more robust expansion of CD8 hinge-based anti-CSF1R CAR constructs (Extended Data Fig. 4f ). All CSF1R CAR-T cells secreted high amounts of IFNγ after coculture with THP-1, Mv4-11 or OCI-AML-3 AML cell lines but not when cocultured with NALM-6 control cells (Extended Data Fig. 4g ). Building on these results, we decided to further proceed with CSF1R CAR-T cells harboring a CD8 hinge domain (hCSF1R CART 1, herein named hCSF1R CART). Constructs for human CD86 CAR-T cells (CD86 CART) and human CD33 CAR-T control cells (CD33 CART) were similarly designed (Extended Data Fig. 4a , right). All CAR-T cell products could be efficiently introduced into primary human T cells (Fig. 4a ). To validate functionality of CD86 CART and to compare sensitivity thresholds of both newly developed therapeutics, both CAR-T cells were incubated with their respective plate-bound antigens (Fig. 4b ). Activation of CD86 CART was already observed at very low concentrations of target protein (0.01 µg ml –1 ). In comparison, hCSF1R CART required concentrations of 1 µg ml –1 or higher (Fig. 4b ). We cocultured all CAR-T cells with AML cell lines and assessed both specific lysis of AML cells and antigen-dependent proliferation (Fig. 4c,d ). hCSF1R and CD86 CART efficiently lysed all six AML cell lines, comparable to CD33 CART (Fig. 4c ), and proliferated to a similar extent (Fig. 4d ). CD19 CAR-T cells (CD19 CART) were used as control-transduced cells. Fig. 4: Anti-target CAR-T cells are functional and efficiently lyse AML cell lines in vitro and in vivo. 
a , Representative flow cytometric images of construct expression on primary human T cells. b , Activation of hCSF1R or CD86 CART after incubation with plate-bound hCSF1R or hCD86 protein was quantified by flow cytometry. Data are shown as mean ± s.e.m. of three different donors. c , hCSF1R or CD86 CART were cocultured with luciferase-positive target antigen-expressing AML tumor cell lines or antigen-negative NALM-6 control cells expressing luciferase for 48 h at the indicated E:T ratios. CD33 and CD19 CART (CTRL-transduced) were used as positive or negative controls, respectively. Cell lysis was quantified by BLI. Data are shown as mean ± s.e.m. of three different donors. d , Dye-labeled CAR-T cells were cocultured with the above-indicated cell lines for 7 d at an E:T ratio of 0.5:1 to assess proliferation. One representative image of three different donors is shown. e , Diagram of the treatment scheme used for in vivo experiments. f – h , BLI images ( f ), survival curves ( g ) and quantification of tumor burden ( h ) in Mv4-11 tumor-bearing mice after treatment with different CAR-T cells; n = 5 mice per group. i – k , BLI images ( i ), survival curves ( j ) and quantification of tumor burden ( k ) of THP-1-bearing mice treated with hCSF1R CART or control-transduced T cells; n = 10 mice per group. Shown are pooled data ± s.e.m. from two independent experiments. Red crosses in f and i indicate mice that succumbed to disease. For all experiments, statistical significance was calculated by two-way ANOVA with a Sidak multiple-comparison correction. For Kaplan–Meier curves, statistical significance was calculated with a log-rank test. Full size image To prove in vivo efficacy of the newly developed CAR-T cells, we injected NOD- scid Il2rg null (NSG) mice with a lethal dose of Mv4-11 AML cells and treated them with hCSF1R, CD86, CD33 or CD19 (control) CART (Fig. 4e ). We monitored tumor progression using bioluminescence imaging (BLI).
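The Kaplan–Meier comparisons here rely on the log-rank test. As a hedged, stdlib-only sketch of the standard two-group log-rank statistic (the published analyses were run with established statistics software; the survival data below are invented for illustration):

```python
from math import erfc, sqrt

def logrank(times1, events1, times2, events2):
    """Two-group log-rank test; returns (chi-square, p) with 1 d.f.

    times*: follow-up time per subject; events*: 1 = event (death),
    0 = censored (for example, alive at the end of observation).
    """
    data = ([(t, e, 0) for t, e in zip(times1, events1)] +
            [(t, e, 1) for t, e in zip(times2, events2)])
    o_minus_e = variance = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):
        # Numbers at risk at time t and observed deaths at t, per group.
        n1 = sum(1 for ti, _, g in data if ti >= t and g == 0)
        n2 = sum(1 for ti, _, g in data if ti >= t and g == 1)
        d1 = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 0)
        d2 = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 1)
        n, d = n1 + n2, d1 + d2
        o_minus_e += d1 - d * n1 / n  # observed - expected events, group 1
        if n > 1:
            variance += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    chi2 = o_minus_e ** 2 / variance
    return chi2, erfc(sqrt(chi2 / 2))  # survival function of chi2, 1 d.f.

# Invented example: controls die early, treated mice censored at day 80.
chi2, p = logrank([20, 22, 25, 28, 30], [1] * 5, [80] * 5, [0] * 5)
```

With the toy data above, early control deaths against uniformly censored treated animals yield a large chi-square and a small p value, mirroring the pattern of the survival curves described in the text.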
Both hCSF1R and CD86 CART eliminated Mv4-11 tumor burden in vivo (Fig. 4f–h ). To provide in vivo proof of efficacy in a second AML model, we injected a lethal dose of THP-1 cells i.v. into NSG mice and treated them with CSF1R, CD33 or control CART (Fig. 4e ). Again, hCSF1R CART efficiently controlled experimental leukemia, with complete remission (CR) rates similar to those of CD33 CART (hCSF1R CART: CR in seven of ten; CD33 CART: CR in eight of ten) and overall survival of up to 80 d after tumor cell injection (Fig. 4i–k ). In summary, we were able to demonstrate both in vitro and in vivo efficacy of the newly developed hCSF1R and CD86 CART toward a large panel of human AML cell lines. hCSF1R and CD86 CART are effective in primary human models We next assessed receptor expression in primary AML samples. Until now, CSF1R expression on primary AML blasts was thought to be restricted to ‘AML-supportive cells’ or only to mature leukemic cells 46 . Indeed, when analyzing surface CSF1R expression on frozen bone marrow samples immediately after thawing, we could not detect any measurable receptor expression by flow cytometry (Fig. 5a,b ). However, when primary AML cells were cocultured on MS-5 mouse bone marrow stromal cells (Extended Data Fig. 5b ), we observed a strong, time-dependent increase of CSF1R expression (Fig. 5a,b ). We hypothesized that these discrepancies in measurable surface CSF1R expression were most likely due to receptor downmodulation during the freezing and thawing process. To test this, we analyzed receptor expression on AML cell lines after freeze–thaw cycles. Similar to the results seen on primary AML blasts, CSF1R was undetectable directly after thawing but regained high expression after 24 to 48 h of culture (Extended Data Fig. 5a ). To further exclude any cell culture artifacts, we analyzed surface receptor expression on primary AML blasts after culturing in cytokine-rich medium 47 (Extended Data Fig. 5c ).
Again, CSF1R was highly expressed on malignant primary AML blasts after culture (Extended Data Fig. 5d ). We also confirmed expression of CD86 on primary AML blasts (Fig. 5c ). Fig. 5: CSF1R and CD86 are readily detected on primary AML samples, and hCSF1R CART show efficient lysis of primary AML samples in vitro and in vivo. a , Expression of CSF1R following thawing of primary AML samples over 72 h. Each line represents one individual. b , Representative histograms of CSF1R (colored) expression on primary AML samples over time in comparison to isotype control (gray). c , Expression of CD86 on primary AML samples. Each dot represents one individual. Left: percentage of CD86 + cells gated to isotype. Right: representative histograms of four different individuals. Data are shown as mean ± s.e.m. of 11 different primary AML samples. d , hCSF1R, CD86 or CD33 CART or untransduced T cells (UT) were cocultured with primary AML samples for 72 h. Specific lysis was assessed by flow cytometry. Data are shown as mean ± s.e.m. of seven different primary AML samples. Indicated P values apply to an E:T ratio of 0.5:1. e , hCSF1R CAR construct transduced into T cells of individuals with AML. Left: transduction efficiency of human AML-derived CAR-T cells. Right: representative flow cytometry image. f , Human-derived CAR-T cells or untransduced T cells were cocultured with primary AML samples from the same donor. Experiments were performed as outlined in d . Data in e and f are shown as mean ± s.e.m. of three different autologous donors. g , Summary of treatment scheme used for in vivo experiments. h – j , BLI images ( h ), survival curves ( i ) and BLI quantification of tumor burden ( j ) of PDX-573 tumor-bearing mice injected with 6 × 10 6 hCSF1R, CD33 CART or control-transduced T cells ( n = 5 mice per group). P values in j were calculated at week 8. White crosses in h indicate censored mice, while red crosses indicate mice that succumbed to disease. 
k – m , BLI images ( k ), survival curves ( l ) and BLI quantification of tumor burden ( m ) of PDX-388 tumor-bearing mice injected with 6 × 10 6 hCSF1R, CD86 or CD33 CART, control-transduced T cells or PBS ( n = 3–10 mice per group). CD86 CART treatment was performed separately. For all experiments, statistical significance was calculated by two-way ANOVA with a Sidak multiple-comparison correction. For Kaplan–Meier curves, statistical significance was calculated with a log-rank test. Full size image Our single-cell gene expression analysis revealed lower expression of CSF1R and CD86 in malignant HSPCs than of the CD123 and CD33 reference genes. We therefore analyzed protein expression of CSF1R and CD86 on malignant HSPC-like cells (Extended Data Fig. 5e–g ). Both CSF1R and CD86 were expressed on malignant HSPC-like cells, with no differences in expression between these cell types (Extended Data Fig. 5f,g ), illustrating conserved expression of the target antigens on these cells. Next, we cocultured primary AML samples with CAR-T cells and determined specific lysis by flow cytometry. hCSF1R and CD86 CART specifically lysed primary AML samples comparably to CD33 CART at low E:T ratios (Fig. 5d ). To reflect the genetic heterogeneity of AML, seven different primary AML specimens with differing cytogenetics were used for in vitro assays. To probe whether the new anti-target CARs can be introduced into T cells derived from individuals with AML, we transduced the anti-hCSF1R CAR construct into T cells from these individuals (Fig. 5e ). Human-derived hCSF1R CART were then cocultured with autologous primary AML blasts, resulting in potent lysis of primary samples (Fig. 5f ). To prove efficacy of hCSF1R and CD86 CART in more relevant in vivo models, we transplanted cytogenetically distinct human-derived xenograft (PDX) models 48 into mice and treated them with the respective CAR-T cells.
First, we selected PDX-573, a model derived from an individual with relapsed AML with high-risk cytogenetics (European LeukemiaNet 2017, adverse prognosis; see Extended Data Table 1 for detailed characteristics). Three weeks after transplantation, we injected hCSF1R, CD86, CD33 or CD19 CART (Fig. 5g–j ). All CAR-T cells were highly effective, inhibiting tumor outgrowth in all treated mice (five of five; Fig. 5h–j ). Next, we tested the efficacy of hCSF1R, CD86 and CD33 CART in PDX-388, derived from an individual with AML at initial diagnosis with KMT2A rearrangement (European LeukemiaNet 2017, adverse prognosis; Fig. 5k–m ). Notably, expression of CSF1R on PDX-388 samples mimicked the above-described pattern: following thawing of the cells, CSF1R was not expressed on PDX-388 cells but was detectable after at least 24 h of in vitro culture (Extended Data Fig. 5h ) and also in vivo in bone marrow sections of control-treated PDX-388 mice (Extended Data Fig. 5i ). hCSF1R and CD86 CART induced sustained remission in all treated mice over a period of 85 d (CR in ten of ten for CSF1R CART and three of three for CD86 CART; Fig. 5k–m ). Interestingly, in this model, CD33 CART completely failed to control tumor growth (CR in zero of ten mice). We excluded manufacturing failure of CD33 CART in vitro (Extended Data Fig. 5j ). Furthermore, in a separate cohort, we verified that CD33 CART were present in the circulation of treated mice (Extended Data Fig. 5k , left) and expressed the CAR on the cell surface (Extended Data Fig. 5k , right). Ex vivo flow cytometric measurement of CD33 on PDX-388 blasts revealed a strong decrease of CD33 surface expression on PDX cells of mice treated with CD33 CART compared to CD19 CART (Extended Data Fig. 5l,m ). Failure of CD33 CART to control tumor burden was thus most likely due to downregulation of surface CD33 expression on PDX-388 blasts. However, the underlying biological mechanism remains to be characterized.
To unambiguously validate the potential of hCSF1R CART in vivo, we used a third PDX model (PDX-372; Extended Data Fig. 6a–e ) and a third cell line xenograft model (OCI-AML3; Extended Data Fig. 6f–h ). PDX-372 samples were again derived from an individual with relapsed AML with high-risk cytogenetics and TP53 mutation (Extended Data Table 1 ). In addition, to create a more challenging model, we transferred reduced numbers of CAR-T cells into PDX-372-bearing mice (Extended Data Fig. 6a ). hCSF1R CART stunted AML growth in three of five mice, and BLI signals did not differ between hCSF1R and CD33 CART-treated mice (Extended Data Fig. 6b,c ). As previously described for PDX-388, immunohistochemical analysis revealed high expression of CSF1R on PDX cells in vivo (Extended Data Fig. 6e ). hCSF1R CART transferred into OCI-AML3 tumor-bearing mice were similarly effective (Extended Data Fig. 6f–h ). To gain a better understanding of the expression patterns of CSF1R and CD86 in the complex molecular landscape of AML and of potentially differing expression patterns in different AML subtypes, we used a published large-scale dataset (the Leukemia MILE study) and analyzed the expression of CSF1R and CD86 compared to the CD123 and CD33 reference genes. Similar to CD33 , CSF1R and CD86 were broadly expressed in different subtypes, with highest expression observed in KMT2A::MLLT3 (MLL::AF9), t(15;17) and inv(16)-mutated AML (Extended Data Fig. 6i ). Given the comprehensive panel of in vitro and in vivo models used throughout our studies, we next sought to investigate whether an antigen expression threshold for effective CAR-T cell therapy in AML could be determined. However, for none of the tested antigens did we observe a correlation between antigen site density measured by flow cytometry and the lysis capacity of CAR-T cells (Extended Data Fig. 6j ).
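The site density-versus-lysis comparison is a correlation analysis over a handful of antigen/cell line pairs. For such small, non-normally distributed datasets a rank correlation is the usual choice; the text does not state which coefficient was used, so the Spearman sketch below (stdlib only, with invented toy data) is an assumption rather than the paper's actual analysis:

```python
from math import sqrt

def average_ranks(xs):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / sqrt(var_x * var_y)

# Toy data: hypothetical antigen site densities vs percent lysis.
sites = [1200, 3500, 9000, 25000, 60000]
lysis = [62.0, 70.5, 58.0, 66.0, 61.5]  # no monotone trend
rho = spearman_rho(sites, lysis)
```

A rho near zero, as with the toy values above, corresponds to the paper's observation that lysis capacity does not track antigen site density.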
In summary, using three different, cytogenetically distinct PDX models and three cell line xenograft models, we were able to provide strong evidence of functionality of the newly developed anti-target CAR-T cells in vitro and in vivo. Toxicity analyses of hCSF1R and CD86 CART Having verified target expression on malignant AML cells, we next evaluated target antigen expression on CD34 + HSPCs. Using flow cytometry, we demonstrated lower expression of CSF1R and CD86 than CD33 on healthy HSPCs (Fig. 6a,b ). To directly assess toxicity toward HSPCs, we cocultured enriched bone marrow-derived CD34 + cells with hCSF1R, CD86 and CD33 CART or untransduced T cells for 24 h (Fig. 6c ). CD34 + HSPCs were exclusively lysed by CD33 CART (Fig. 6c , left). CD33 CART also secreted more IFNγ into the coculture supernatant than hCSF1R or CD86 CART (Fig. 6c , right). To further validate these results, we performed conventional colony-forming unit (c.f.u.) assays. Colony counts of c.f.u.-E and burst-forming unit (BFU)-E were higher when HSPCs were cocultured with hCSF1R CART than with CD33 CART, indicative of better survival of stem cells in the presence of hCSF1R CART (Fig. 6d ). Importantly, colony counts of HSPCs cocultured with either hCSF1R CART or untransduced T cells did not differ (Fig. 6d ). Fig. 6: hCSF1R CART show better discriminatory capacity toward healthy human hematopoietic cells than CD33 CART. a , Target expression on magnetic-activated cell sorting-enriched, bone marrow-derived CD34 + HSPCs. Data are shown as mean ± s.e.m. of two to three independent, pooled HSPC donors. b , Representative flow cytometric image of target expression on HSPCs. c , CSF1R, CD86 or CD33 CART or untransduced T cells were cocultured with HSPCs for 24 h at an E:T ratio of 2:1. Left: lysis of HSPCs was quantified by flow cytometry. Right: IFNγ secretion was measured by ELISA; hIFNγ, human IFNγ.
d , CSF1R and CD33 CART or untransduced T cells were cocultured with HSPCs for 24 h at an E:T ratio of 2:1, and a c.f.u. assay was performed. Colony count was quantified after 14 d. Data in c and d are shown as mean ± s.e.m. from three ( c ) or four ( d ) different donors. e , CSF1R expression on HD samples. Left: percentage of CSF1R + cells gated to isotype. Right: representative histograms of CSF1R expression on HD. f , Quantified target expression on HD. Left: percentage of positive cells gated to isotype. Right: representative flow cytometric image. Data in e and f are shown as mean ± s.e.m. from three different donors. g , hCSF1R and CD33 CART or untransduced T cells were cocultured with HD for 72 h at the indicated E:T ratios. Left: off-tumor lysis of CAR-T cells assessed by flow cytometry. Right: activation of T cells quantified by IFNγ secretion. Data are shown as mean ± s.e.m. from 11 different samples. h , i , Quantification of log-transformed normalized target expression in 13,067 single human brain cells ( h ). Each peak corresponds to a cell, and peak height indicates expression intensity. A UMAP plot illustrating the expression patterns of CSF1R , CD86 and CD33 in human brain cells is shown ( i ). j , Phenotype of human iMGL. k , Representative histograms of CSF1R and CD33 expression on iMGL. l , hCSF1R CART, CD33 CART or untransduced T cells were cocultured with iMGL for 24 h at the indicated E:T ratios. Left: lysis of iMGL was quantified by flow cytometry. Right: T cell activation was quantified by ELISA. Data are shown as mean ± s.e.m. from five T cell donors. For all experiments, statistical significance was calculated by two-way ANOVA with a Sidak multiple-comparison correction. Full size image Next, we analyzed expression of target antigens on samples of healthy human bone marrow donors (HD samples; Fig. 6e,f ). Again, surface CSF1R expression could only be detected after at least 24 h of culture (Fig. 
6e ), but its expression remained lower than that of CD86 or CD33 (Fig. 6f ). Coculture of hCSF1R or CD33 CART or untransduced T cells with HD samples revealed higher lysis of HD samples (Fig. 6g , left) and increased secretion of IFNγ (Fig. 6g , right) by CD33 CART. Neither lysis of HD samples nor IFNγ secretion differed between hCSF1R CART and untransduced T cells (Fig. 6g ). scRNA-seq analysis of single human brain cells confirmed expression of CSF1R in microglia (Fig. 6h,i ). At the single-cell level, CSF1R showed higher expression in microglia than CD86 or CD33 (Fig. 6h,i ). To model toxicity of CAR-T cells toward human microglia, we generated induced pluripotent stem cell (iPSC)-derived human microglia-like cells (iMGLs) 49 , 50 and verified their phenotype (Fig. 6j ). Both CSF1R and CD33 were highly expressed on iMGLs (Fig. 6k ). Cocultures of human iMGLs with CSF1R CART, CD33 CART or untransduced T cells demonstrated lysis of human iMGLs by both CAR-T cell products at a high E:T ratio of 1:1 (Fig. 6l , left). At a more physiological E:T ratio (0.2:1), neither CSF1R nor CD33 CART lysed human iMGLs, consistent with our in vivo data (Fig. 6l , left). IFNγ release mirrored the results obtained from flow cytometric analyses (Fig. 6l , right). In summary, our data suggest that our newly developed CAR-T cells spare healthy hematopoiesis better than CD33 CART and indicate that microglia might not be a relevant off-tumor target of anti-CSF1R CAR-T cells. Discussion We developed an unbiased scRNA-seq approach for de novo target identification and in-depth, high-resolution off-tumor mapping across multiple tissues that is specifically tailored to predict potential candidates for CAR-T cell therapy. Applying our approach to AML, we identified two target antigens: CSF1R and CD86.
Extensive validation revealed broad expression on AML blasts, strong and durable treatment responses of the newly developed CAR-T cells in vitro and in vivo, and minimal toxicities toward relevant healthy cells and tissues. For primary target screening, we leveraged single-cell sequencing data from 15 primary AML specimens with differing cytogenetic properties 21 . In addition, we validated the obtained results in an independent cohort of five additional individuals with AML 39 . The top hits of the present study were reliably found to be overexpressed in large bulk-sequencing AML cohorts ( n = 615). Given the highly complex molecular landscape of AML, rare AML subtypes might still not be fully represented in our analyses. Despite this limitation, our study clearly demonstrates the translational potential of unbiased, scRNA-seq-based screening approaches and provides proof of principle of the whole spectrum of scRNA-seq-guided drug development, spanning from computational target identification to preclinical investigation of newly developed CAR-T cells. CSF1R has previously been implicated as a target for small-molecule inhibition in AML 46 . However, its expression was thought to be restricted to a small subset of AML-supportive cells in certain individuals, while the majority of human blasts were regarded as antigen negative 51 . Using various techniques, including transcriptomic analysis, flow cytometry, immunohistochemistry and comprehensive functional investigation of CSF1R-directed CAR therapy, we were able to confirm high CSF1R expression on AML blasts. These ambiguities in reported CSF1R expression on malignant AML blasts encourage the use of unbiased, RNA-based screening algorithms for target identification and prioritization, as methodological or biological confounders can easily mask protein expression analyses.
Nevertheless, it is crucial to bear in mind that scRNA-seq-centered strategies come with their own limitations (for example, the zero or dropout problem of single-cell gene expression 52 ) and in any case require protein validation. CD86 is expressed on malignant AML blasts, and high receptor expression is associated with shortened overall survival of individuals with AML 53 , 54 , but, to the best of our knowledge, CD86 has never been explored as a target for (immuno)therapy of cancer. The expression of CD86 is not limited to AML and has also been reported in numerous B cell malignancies 55 . As such, the use of CD86 CART promises not only treatment options for AML but also applications for a variety of other hematological diseases, such as multiple myeloma 56 and childhood B cell precursor acute lymphoblastic leukemia 57 . Nevertheless, CD86 is also expressed on healthy macrophages and dendritic cells 58 , 59 , 60 , and targeting it might increase the risk of immunosuppression and ensuing severe infection. However, CTLA-4 fusion proteins, such as abatacept (targeting both CD80 and CD86), have received approval by the FDA and are clinically used for the treatment of autoinflammatory disorders 61 . In clinical studies, abatacept was generally well tolerated 61 . For both CSF1R and CD86, the measured antigen site densities were rather low, especially compared to the high expression of CD33. Yet, despite our extensive functional validation, we did not observe marked differences between CSF1R or CD86 CART and established CD33 CART. Along these lines, we did not observe a correlation between lysis capacity of CAR-T cells and site density of the respective target antigen. To a certain extent, these findings are in line with recent reports for anti-mesothelin CAR-T cells in solid tumors 62 .
Several factors, such as affinity and binding properties of the scFv used and conformation of the target antigen, can positively or negatively influence these CAR–tumor cell interactions. Ultimately, while high target antigen expression undoubtedly increases killing efficacy, our data suggest that, in some cases, functional cross-comparison might help to identify promising target antigens despite antigen expression that appears rather low at first glance. Similar to previous results in AML 18 , we were not able to identify target antigens with expression limited to a single immune cell lineage, as is the case for CD19 or BCMA in B cell malignancies. However, expression of our prime candidates is limited to immune cells of myeloid origin (monocytes, tissue-resident macrophages and dendritic cells), with minimal detection on stem or progenitor cells. Thus, our candidates could bear the advantage of clinical application without the risk of severe bone marrow toxicity, which is a current concern of AML-targeted treatments 10 . It should be noted, however, that to date, the clinical consequences of off-tumor gene expression on HSPCs remain unclear. Along these lines, precise projection of off-tumor antigen expression is one of the central objectives of our single-cell approach, because unwanted toxicity may be inferred from high transcriptomic off-target antigen expression 20 , 63 . Yet, as outlined above, the risk of severe adverse effects caused by off-tumor activity of CAR-T cells is not fully understood, and different outcomes have been reported 64 . As such, the latest trials evaluating the safety of CD123 CAR-T cells did not show sustained cytopenia 64 . However, in most anti-CD123 CAR-T cell trials currently being conducted, participants eventually received allo-HSCT, which presumably eradicated the CAR-T cells.
Of note, the development of fatal cytokine release syndrome and capillary leak syndrome following CD123 CAR-T cell infusion, potentially due to off-tumor expression of CD123 on small vessels, has been reported 7 . Altogether, current clinical evidence does not allow a clear definition of the critical cell types and expression thresholds that should preclude the development of CAR-T cells against a given target to avoid unmanageable toxicities. In any case, in the long run, detailed knowledge of off-tumor expression will allow vigilant monitoring of ‘high-risk off-tumor organs’ in clinical trials and might enable rapid side effect-mitigating treatments. Similarly, clinical lessons from anti-CD19 or anti-BCMA CAR-T cell therapy show that lineage-restricted expression patterns are highly desirable, providing further strong evidence for the use of single-cell technologies for de novo target identification, as these technologies might aid the search for unrecognized target antigens with minimal off-tumor expression in healthy tissues. Many of the currently investigated CAR targets in AML failed our thresholds for overexpression on malignant HSPCs relative to their healthy counterparts. In this respect, our data partly contradict previously published results 13 , 65 . Sauer et al., for example, used immunohistochemistry to demonstrate higher expression of CD70 in bone marrow biopsies of individuals with AML than in samples from healthy donors 13 . This discrepancy is most likely due to our restrictive analyses, in which we chose stringent cutoff criteria to ensure maximal safety of identified target antigens. Dynamic adjustment of these thresholds might yield different results, and many of the previously identified target antigens (for example, CD123, CD33, CD70, FLT3, C-type lectin-like molecule-1 and CD44v6) will most likely help improve clinical care of individuals with refractory or relapsed AML.
Nonetheless, our data clearly demonstrate the value of CSF1R and CD86 as targets for CAR-T cell therapy in AML, and, especially considering the complex molecular landscape of AML and its highly diverse subsets, these targets are expected to be valuable additions to the immunotherapeutic repertoire in AML. Unsurprisingly, CSF1R was expressed on microglia, which share a common monocytic precursor, as is also known for CD33 (ref. 66 ). Clinical investigation of the only CSF1R-directed monoclonal antibody to date did not reveal neurotoxicity as a concern when depleting CSF1R + cells from the periphery 67 . However, given the different mode of action of cellular- versus antibody-based therapies, these results might not be directly transferable to anti-CSF1R CAR-T cell therapy. In addition, CAR-T cells are known to be able to cross the blood–brain barrier even at steady state 68 , 69 , and peak levels of proinflammatory cytokines further increase permeability of this tightly regulated barrier 70 , 71 . Because of these considerations, we rigorously tested the possibility of neurotoxicity in numerous models. These models included fully syngeneic mouse models in which we implanted large quantities of CAR-T cells directly into mouse brains. Yet, we did not observe any signs of neurotoxicity. Nevertheless, future clinical validations will need to include well-designed protocols to vigilantly detect any signs of neurotoxicity. Our results highlight the potential of using unbiased, high-resolution, single-cell transcriptomic data for target selection and drug development. Leveraging these data and the appropriate high-dimensional analyses as standard operating procedures promises to improve the safety and efficacy of newly engineered CAR-T cells and enables identification of new target structures for targeted immunotherapy in malignant disorders. 
Methods Single-cell transcriptome analysis All preprocessing and analysis steps of scRNA-seq data were run in Python 3 using Scanpy 72 v.1.4.6 to 1.9.1 and anndata 73 v.0.7.1 to 0.8.0 unless otherwise stated. All scRNA-seq figures were plotted using matplotlib and seaborn. Preprocessing publicly available scRNA-seq data of healthy and AML cells For healthy donors and individuals with AML, we obtained raw, annotated count data from two resources. (1) The data from van Galen et al. 21 were downloaded from Gene Expression Omnibus ( GSE116256 ). Here, we excluded individual AML916, as it had a mixed AML phenotype expressing markers of stem cell, myeloid, T and B lineages. (2) For the scRNA-seq data of Petti et al. 39 , we obtained raw count data from . For the data from GSE116256 , barcodes were filtered for each sample for high-quality cells based on the total distributions of unique molecular identifier counts and genes. For exact threshold values, see the provided code documentation on GitHub. Cells with a fraction of mitochondria-encoded genes over 20% were excluded. Barcodes that could not be confidently assigned to either healthy or tumor cells were discarded. Genes detected in less than 20 cells were excluded from further analyses. The resulting count matrix was used for normalization. Unique molecular identifier counts of each cell were normalized using the SCRAN algorithm as implemented in the R-based scran package 74 , 75 . Briefly, size factors were estimated by preliminary clustering of the data using the Louvain algorithm implemented in Scanpy (tl.louvain) with a resolution of 0.5 before running ComputeSumFactors (min.mean = 0.1). The estimated size factors were then used for cell normalization. Finally, the data were log transformed (log (count + 1)). 
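The cell- and gene-level filters and the size-factor normalization described above can be sketched in plain Python. This is an illustrative re-implementation, not the paper's code: the actual pipeline operates on Scanpy/anndata matrices and uses scran-estimated size factors, while the thresholds below mirror the ones stated in the text.

```python
import math

def qc_filter(counts, gene_names, mito_prefix="MT-", max_mito_frac=0.20,
              min_cells_per_gene=20):
    """Drop cells whose mitochondrial fraction exceeds `max_mito_frac`,
    then drop genes detected in fewer than `min_cells_per_gene` cells.
    `counts` is a cells x genes list of lists of raw UMI counts."""
    is_mito = [g.startswith(mito_prefix) for g in gene_names]
    kept_cells = []
    for cell in counts:
        total = sum(cell)
        mito = sum(c for c, m in zip(cell, is_mito) if m)
        if total > 0 and mito / total <= max_mito_frac:
            kept_cells.append(cell)
    # keep genes detected (count > 0) in at least `min_cells_per_gene` cells
    n_detected = [sum(1 for cell in kept_cells if cell[j] > 0)
                  for j in range(len(gene_names))]
    kept = [j for j, n in enumerate(n_detected) if n >= min_cells_per_gene]
    return ([[cell[j] for j in kept] for cell in kept_cells],
            [gene_names[j] for j in kept])

def normalize_log(counts, size_factors):
    """Divide each cell by its (e.g. scran-estimated) size factor,
    then log transform: log(count + 1)."""
    return [[math.log(c / sf + 1.0) for c in cell]
            for cell, sf in zip(counts, size_factors)]
```

In practice these operations run on sparse matrices; the list-of-lists form here only makes the logic explicit.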
Feature selection and visualization in a low-dimensional embedding The top 4,000 variable genes were identified based on normalized dispersion, as described previously 76 , using Scanpy’s pp.highly_variable_genes with flavor = cell_ranger. Briefly, genes were ordered along their mean expression in several bins and selected according to their highest variance-to-mean ratio. To efficiently capture the underlying data structure in two dimensions, principal-component analysis dimension reduction was performed by computing 15 principal components on highly variable genes using Scanpy’s pp.pca. To account for technical batches, harmony 77 was used to integrate data from the respective individuals. Next, a neighborhood graph was computed on the first 50 harmony-adjusted principal components using Scanpy’s pp.neighbors with 15 neighbors. For two-dimensional visualization, the neighborhood graph was embedded via UMAP 78 by running Scanpy’s tl.umap with an effective minimum distance between embedded points of 0.5. Differential gene expression of AML HSPCs and T cell marker gene identification Enriched gene expression in T cells was identified by comparing the mean expression of healthy T cells to the mean expression of all other healthy cell types using a t -test with overestimated variance, as implemented in Scanpy’s tl.rank_genes_groups function. Testing was performed on the log-transformed normalized data to account for differences in sequencing depth between samples. Upregulated genes with false-discovery rate (FDR)-adjusted P values of ≤0.01 and a log (fold change) of >0 were considered for target antigen filtering. Characteristic gene signatures of AML HSPCs were identified by performing separate differential expression analyses of AML HSC-like and HPC-like cells against their healthy equivalents, respectively. 
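The FDR-adjusted P values referred to above are commonly produced with the Benjamini–Hochberg procedure (this is also what Scanpy reports as adjusted P values). A minimal stand-alone version of that adjustment, for illustration rather than the Scanpy internals:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment.
    Adjusted p for the i-th smallest p value: min over j >= i of p_(j) * n / j."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices sorted by p value
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):  # walk from the largest p value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted
```

Genes would then be kept when their adjusted value falls at or below the 0.01 cutoff stated in the text.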
Genes that were expressed in at least 2% of all cells with log (fold change) values of >2 and FDR-adjusted P values of ≤0.01 were defined as enriched marker genes. Harmonizing public databases to obtain surface protein-coding genes To obtain genes encoding proteins on the cell surface, we used OmniPath 22 (a large-scale molecular database) to access data from (1) the mass spectrometry-based cell surface protein atlas (CSPA) 23 , (2) CellPhoneDB 25 (a repository of curated receptors, ligands and their interactions), (3) the machine learning-based in silico human surfaceome 24 and (4) the human protein atlas (HPA) 26 v20.1 ( ). Permissive integration of all datasets was critical, as cell surface expression showed strong variability between databases. Consequently, the union of these databases was used for all subsequent analyses. Targets of FDA-approved drugs To identify genes that encode druggable proteins, we used DrugBank 38 , a database containing information on the interactions, pharmacology and chemical structures of drugs and drug targets. We defined druggable genes as targets with known pharmacological action of FDA-approved drugs. COOTA To quantify off-target effects, we analyzed and combined a total of 11 scRNA-seq datasets across 9 healthy tissues 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 . Raw annotated scRNA-seq data from the respective studies 27 , 29 , 31 , 33 , 34 , 35 , 36 , 37 were obtained using the Python-based data repository sfaira 79 . To quantitatively analyze the expression of possible CAR-T cell therapy targets across healthy tissues, comparable preprocessing steps were performed for each dataset separately, which involved removing low-quality cells and lowly expressed genes (see provided code documentation on GitHub for exact thresholds), normalizing cell counts using scran, selecting highly variable genes based on normalized dispersion and visualizing the cells in a two-dimensional UMAP embedding as described above. 
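Conceptually, the surfaceome harmonization and the subsequent target filtering reduce to set operations: take the permissive union of the surface-protein databases, intersect it with the AML-enriched marker genes, and subtract T cell markers and genes flagged in critical off-target clusters. A schematic with made-up gene sets (the real inputs come from OmniPath, the CSPA, CellPhoneDB, the in silico surfaceome and the HPA):

```python
def candidate_targets(aml_markers, surface_dbs, t_cell_genes, critical_genes):
    """AML-enriched genes that encode surface proteins (permissive union of
    databases) and are absent from T cells and critical off-target clusters."""
    surfaceome = set().union(*surface_dbs)
    return sorted((set(aml_markers) & surfaceome)
                  - set(t_cell_genes) - set(critical_genes))

# toy example with invented membership
cspa = {"CSF1R", "CD86", "CD3E"}   # mass-spectrometry surfaceome (made up)
in_silico = {"CSF1R", "FLT3"}      # predicted surfaceome (made up)
hits = candidate_targets(
    aml_markers=["CSF1R", "CD86", "FLT3", "MPO"],  # MPO: not on the surface
    surface_dbs=[cspa, in_silico],
    t_cell_genes={"CD3E"},
    critical_genes={"FLT3"},       # e.g. flagged in a critical cell cluster
)
```

The union (rather than intersection) of databases reflects the permissive integration described above.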
For the lung datasets of Travaglini, Madissoon and Reyfman 30 , 31 , 32 , we used publicly available data with cell annotations derived from a study integrating multiple scRNA-seq datasets 80 . To account for technical batches across the respective samples, batch-balanced k -nearest neighbors 81 were calculated for the datasets of Travaglini, Madissoon, Reyfman, Ramachandran, James, Cheng and Han 30 , 31 , 32 , 34 , 35 , 36 , 37 . Finally, processed and annotated count matrices were concatenated on union variables using Scanpy’s concatenate with join = outer, and the resulting matrix was used for target antigen filtering. Genes were excluded if they were expressed in over 2% of cells of a critical cell cluster (endothelial, arterial, bronchial, capillary, venous and smooth muscle cells). A single-gene-based count matrix of barcodes × datasets was created and used for plotting mean expression values across cell types. Reference mapping and label transfer Raw count data from Petti et al. 39 were filtered to obtain high-quality cells. Cells with less than ten expressed genes and genes detected in less than three cells were excluded from further analyses. Barcodes with a fraction of mitochondria-encoded genes of over 10% and a fraction of ribosomal genes of over 50% were excluded. For reference mapping and label transfer, we used scANVI 40 , a semisupervised variational autoencoder model, to leverage the cell-type knowledge from the data from GSE116256 (ref. 21 ) to infer the states of cells from Petti et al. 39 . Briefly, we trained the scVI model on the reference data with two hidden layers and a dropout rate of 0.2 (for the exact parameters, see the provided code documentation on GitHub). Next, we initialized the scANVI model from the pretrained scVI model before training the scANVI model for 20 epochs with 100 samples per label. Afterward, we created a new query model instance before training the query data with a weight_decay of 0 for 100 epochs. 
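The exclusion rule above, under which a gene fails if more than 2% of cells in any critical cluster express it, can be written as a small helper; cluster names and counts here are illustrative:

```python
def fails_offtarget_filter(expr_by_cluster, critical_clusters, max_fraction=0.02):
    """Return True if the gene is expressed (count > 0) in more than
    `max_fraction` of the cells of any critical cluster, in which case it
    would be excluded as a target.
    `expr_by_cluster` maps cluster name -> list of per-cell counts for one gene."""
    for cluster in critical_clusters:
        cells = expr_by_cluster[cluster]
        if sum(1 for c in cells if c > 0) / len(cells) > max_fraction:
            return True
    return False
```

In the pipeline this check would run per gene over the concatenated off-target matrix, with the critical clusters listed in the text (endothelial, arterial, bronchial, capillary, venous and smooth muscle cells).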
The latent representation and the label predictions were obtained using the get_latent_representation() and predict() functions, respectively. Finally, we computed a neighborhood graph with 15 neighbors using the scANVI representation before embedding the graph using UMAP as mentioned earlier. Bulk expression data The data used for the bulk analyses of CSF1R and CD86 expression in human and mouse tissues were obtained from the GTEx Portal 82 ( E-MTAB-5214 ) and Merkin et al. 83 ( E-MTAB-2801 ) via the Expression Atlas 84 ( ). Cell lines Human AML cell lines THP-1, MV4-11, OCI-AML-3, PL-21, MOLM13, U937 and NALM-6 were purchased from ATCC. All cell lines were cultured in RPMI containing 20% fetal bovine serum (FBS), 2 mM l -glutamine, 100 U ml –1 penicillin and 100 µg ml –1 streptomycin. The mouse J774A.1 cell line was provided by P. Düwell (Institute of Innate Immunity, University Hospital, Bonn). Cells were cultured in DMEM containing 10% FBS, 2 mM l -glutamine, 100 U ml –1 penicillin and 100 µg ml –1 streptomycin. All cells were grown at 37 °C in a humidified incubator with 5% CO 2 . Short tandem repeat profiling was used to verify the identity of human cell lines. Cells tested negative for Mycoplasma contamination by PCR. All cell lines were lentivirally transduced with a pCDH-EF1a-eFly-eGFP plasmid 85 . After transduction, enhanced green fluorescent protein-positive cells were single-cell sorted using a BD FACSAria III cell sorter, and expression of firefly luciferase (fLuc) was verified using a Bio-Glo luciferase assay system. Cells were frozen in medium containing 90% fetal calf serum and 10% DMSO and stored at −80 °C or in liquid nitrogen for long-term storage. Generation of PDX cells was previously described 48 . The anti-mouse c-FMS (CD115)-producing hybridoma RCB4486 was acquired from the RIKEN BioResource Research Center 86 . 
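The scANVI reference mapping described above relies on a semisupervised variational autoencoder, which cannot be reproduced in a few lines. Purely as a conceptual stand-in, label transfer from a reference to a query in a shared low-dimensional space can be illustrated by a majority vote over nearest reference neighbors; all data and names below are invented and this is not the scANVI algorithm:

```python
from collections import Counter

def knn_label_transfer(reference, ref_labels, query, k=3):
    """Assign each query cell the majority label among its k nearest reference
    cells (squared Euclidean distance in the shared embedding)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    predictions = []
    for q in query:
        nearest = sorted(range(len(reference)),
                         key=lambda i: dist2(reference[i], q))[:k]
        predictions.append(Counter(ref_labels[i] for i in nearest).most_common(1)[0][0])
    return predictions
```

The model-based approach additionally corrects batch effects and yields calibrated uncertainties, which a plain nearest-neighbor vote does not.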
AML blast isolation and culture Primary AML blasts were obtained from the bone marrow or peripheral blood of individuals with AML, and healthy donor control samples (HD) were obtained, after written informed consent was acquired in accordance with the Declaration of Helsinki and approval by the Institutional Review Board of the Ludwig-Maximilians-Universität (LMU) or the Ethics Committee of the Medical Association of Hamburg. Bone marrow aspirates were enriched for AML blasts either through density centrifugation or lysis of red blood cells using osmotic gradient solutions and frozen in liquid nitrogen. Before T cell-based assays, bone marrow aspirates were thawed, and T cells were depleted using a CD3 + selection kit (StemCell Technologies). Primary AML samples were either cultured in IMDM basal medium supplemented with 15% BIT 9500 serum substitute and β-mercaptoethanol (10 −4 M), 100 ng ml –1 stem cell factor, 50 ng ml –1 FLT3 ligand, 20 ng ml –1 granulocyte colony-stimulating factor, 20 ng ml –1 IL-3, 1 µM UM729 and 500 nM SR1 (ref. 29 ) or alternatively in α-MEM supplemented with 12.5% horse serum, 12.5% fetal calf serum, 1% penicillin/streptomycin, 1% l -glutamine, granulocyte colony-stimulating factor, IL-3, thrombopoietin and β-mercaptoethanol on irradiated MS-5 cells (mouse bone marrow stromal cells) for coculture experiments 87 , 88 , 89 . Cocultures with primary AML samples were performed after a 3-d preculture of the thawed AML blasts 89 . Flow cytometry Flow cytometric analysis was performed using a BD FACSCanto, a BD LSRFortessa II or a Beckman Coulter CytoFLEX. All staining steps for identified AML target antigens were conducted on ice. Cells were centrifuged at 200–400 g for 5 min at 4 °C in a precooled centrifuge. For staining of primary AML blasts and AML cell lines, a maximum of 10 6 cells was counted and transferred to a U-bottom 96-well plate. 
Cells were washed twice with ice-cold PBS containing 2% FBS and incubated for 15 min with 5 µl of human TrueStain FcX. CSF1R was stained for 30 min in the dark either using unconjugated anti-human M-CSF-R/CD115 (R&D Systems, clone 61701) or mouse IgG1 isotype control (R&D Systems, clone 11711), followed by secondary staining with AlexaFluor 647 rat anti-mouse IgG (H + L; Jackson ImmunoResearch), or alternatively after incubation with biotinylated recombinant CSF-1 protein (Sino Biological) followed by secondary staining with streptavidin-APC. Staining for CD86 was conducted with anti-human CD86 (clone IT2.2). Dead cells were excluded after staining with a fixable viability dye (eFluor 780, eBioscience) in all experiments. Quantification of absolute cell counts was performed using CountBright Absolute Counting Beads (Thermo Fisher Scientific). Primary AML blasts were identified using anti-human CD45 (clone HI30) and anti-human CD33 (clone P67.6; WM53, Invitrogen/eBioscience). Staining with anti-human CD34 (clone 561) and anti-CD38 (clone HB-7) was included for gating of leukemia-initiating cells. CAR expression was quantified using anti-c-Myc-FITC (clone SH1-26E7.1.6; Miltenyi Biotec). CAR activation was measured using anti-mouse or anti-human CD69 (mouse clone H1.2F3, human clone FN50), CD107a (mouse clone 1D4B, BD Biosciences; human clone H4A3) and PD-1 (human clone EH12.2H7). Anti-CD2 (human clone RPA-2.10), anti-CD3 (mouse clone 145-2C11, human clones UCHT1 and HIT3a), anti-CD4 (mouse clone GK1.5, human clone OKT4) and anti-CD8 (mouse clone 53-6.7, human clones SK1 and HIT8a) were used to gate on T cells. Gating on tissue-resident immune cells was performed using anti-mouse CD45 (clone 30-F11) and anti-mouse/human CD11b (clone M1/70). Preparation of organs for flow cytometric analysis was performed as recently described 90 . All antibodies and reagents were purchased from BioLegend unless otherwise specified. 
Absolute quantification of antigen density (molecules per cell) was performed using BD Biosciences Quantibrite phycoerythrin beads according to the manufacturer’s instructions. Flow cytometric stainings were performed at an antibody dilution of 1:50; for absolute quantification of molecules per cell, a dilution of 1:20 was used. Generation of CAR constructs All CAR constructs contained Myc tags to readily detect CAR expression. The anti-CSF1R scFv was designed based on the patented sequence of the heavy and light chain variable domains of the anti-CSF1R clone 2F11-e7 (ref. 91 ). The anti-human CD86 scFv was derived from 3D1 (ref. 92 ). Anti-CD19 CAR-T cells were designed based on anti-CD19-CAR-FMC63-28Z CAR-T cells 93 . The h-P67.7 scFv was used for anti-CD33 CAR-T cells. The mouse anti-CSF1R scFv (clone AFS98) was derived from the anti-mouse c-FMS-producing hybridoma described earlier. Mouse CAR constructs contained an mCherry fluorescent tag separated from the CAR construct via a 2A self-cleaving peptide sequence. Constructs were either created using conventional cloning techniques or were codon optimized and cloned into pMP71 retroviral vectors using commercial cloning services (Twist Bioscience). Retroviral mouse and human T cell transduction For virus production, retroviral pMP71 vectors carrying the sequence of the relevant receptor were stably expressed in the packaging cell lines 293Vec-Galv, 293Vec-Eco and 293Vec-RD114 (refs. 90 , 94 , 95 ). Human T cells were isolated from healthy donor peripheral blood mononuclear cells using density gradient centrifugation, enriched for T cells using anti-CD3 microbeads (Miltenyi Biotec) and stimulated for 48 h before retroviral transduction with human T-Activator CD3/CD28 Dynabeads (Life Technologies). 
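The Quantibrite-based quantification mentioned above typically works by fitting a log–log standard curve of bead fluorescence against the known number of PE molecules per bead population, then reading a cell's fluorescence off that curve (assuming 1:1 PE-to-antibody labeling). A sketch with illustrative numbers, not the manufacturer's software:

```python
import math

def fit_loglog(bead_mfi, bead_pe_per_bead):
    """Least-squares fit of log10(PE molecules) = slope * log10(MFI) + intercept
    across the Quantibrite bead populations."""
    xs = [math.log10(v) for v in bead_mfi]
    ys = [math.log10(v) for v in bead_pe_per_bead]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def antigens_per_cell(cell_mfi, slope, intercept):
    """Convert a cell's PE fluorescence to bound antibodies (~antigens) per cell."""
    return 10 ** (slope * math.log10(cell_mfi) + intercept)
```

With a calibrated slope and intercept, the per-cell geometric mean fluorescence of the stained sample converts directly to an antigen density estimate.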
Human T cells were expanded in human T cell medium (hTCM) containing 2.5% human serum, 2 mM l -glutamine, 100 U ml –1 penicillin, 100 µg ml –1 streptomycin, 1% sodium pyruvate and 1% non-essential amino acids supplemented with recombinant human IL-2 (PeproTech and Novartis) and IL-15 (PeproTech and Miltenyi Biotec). Mouse T cells were derived from splenocytes, activated using anti-CD3 and anti-CD28 and transduced in mouse T cell medium (10% FBS, 2 mM l -glutamine, 100 U ml –1 penicillin, 100 µg ml –1 streptomycin, 1% sodium pyruvate and 0.5% HEPES) supplemented with IL-2 and mouse T-Activator CD3/CD28 Dynabeads (Life Technologies), as previously described. Following retroviral transduction, mouse T cells were expanded in medium containing human IL-15 (PeproTech). For all experiments comparing different CAR-T cells, transduction efficiency was adjusted to the lowest measured efficiency of the respective constructs. Animal experiments Animal experiments were approved by the local regulatory agency (Regierung von Oberbayern) and were performed in accordance with guidelines and regulations implemented by the Regierung von Oberbayern. Animals were housed in specific pathogen-free facilities. C57BL/6, BALB/c and NOD.Cg-Prkdc scid Il2rg tm1Wjl /SzJ mice were purchased from Janvier (St. Berthevin) or Charles River Laboratories or were bred at local facilities. CX3CR1–GFP reporter mice were bred at local facilities. Mice were held in facilities with a 12-h dark/12-h light cycle including a 30-min twilight phase at noise levels below 50 dBA. Air velocity was held below 0.2 m s –1 . Air humidity in the facilities was between 45 and 60%, and the average temperature was held between 20 and 22 °C. PDX models AML-573, AML-388 and AML-372 were genetically modified to express fLuc 48 . 
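Adjusting all products "to the lowest measured efficiency", as described above, is commonly achieved by diluting each CAR-T cell product with untransduced T cells until every condition carries the same CAR+ fraction; the exact procedure is not spelled out in the text, so the arithmetic below is an assumption:

```python
def dilute_to_target(total_cells, measured_efficiency, target_efficiency):
    """Split `total_cells` into transduced product and untransduced filler so
    that the mixture has a CAR+ fraction of `target_efficiency`."""
    if target_efficiency > measured_efficiency:
        raise ValueError("cannot raise the CAR+ fraction by dilution")
    product = total_cells * target_efficiency / measured_efficiency
    return product, total_cells - product
```

For example, a product transduced at 60% that must be compared against one at 30% would be mixed 1:1 with untransduced T cells.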
For BLI, mice were anesthetized using an isoflurane–oxygen mixture (1.5–2.5%) following intraperitoneal injection of BLI substrate (Xenolight d -luciferin potassium salt, PerkinElmer) into each mouse, according to the manufacturer’s protocol. An in vivo imaging system platform Lumina X5 (IVIS, PerkinElmer) was used to measure BLI signal. Xenograft models using THP-1, MV4-11 or OCI-AML3 cell lines or PDX models were established by i.v. injection. T cells were transferred at the indicated times and numbers. Mice that had to be removed from animal experiments due to non-tumor-related toxicities (for example, did not have measurable BLI signal at the exclusion timepoint) were censored. Censored mice are indicated in the respective Kaplan–Meier curves and are marked with a white cross in the BLI images. Surgical procedures and stereotactic implantation Preparation of chronic cranial windows using microsurgical implantation and stereotactic CAR-T cell injection was performed as previously described 95 . After the mouse was deeply anesthetized by intraperitoneal injection of midazolam (5 mg kg –1 ), medetomidin (0.05 mg kg –1 ) and fentanyl (0.5 mg kg –1 ), the skin was cut, and the periosteum was removed. After marking the cortical area of interest, a 5.5-mm circular part of the cranium was removed using a sterile carbon steel microdrill. Dura mater was separated from leptomeninges using forceps and removed to prevent dural fibrosis. Sterile round cover glasses and tailored rings were attached to the cranial bone with acrylic dental glue. To prevent postsurgical astroglial or microglial activation affecting tumor growth, stereotactic implantation of CAR-T cells was performed at least 2 weeks after cranial window implantation. For stereotactic implantation of CAR-T cells, 2 × 10 5 transduced T cells were resuspended in 1–2 µl of PBS and injected at predefined coordinates (1 mm lateral and 2 mm posterior to the bregma at an intraparenchymal depth of 1.5 mm). 
Perioperative care included daily recording of weight and neurological scores. TPLSM TPLSM was performed using a multiphoton TrimScope I system (LaVision Biotec) connected to an upright Olympus microscope equipped with a MaiTai laser (690 to 1,040 nm, Spectra Physics) and a ×20/0.95-NA water immersion objective (Olympus). Single images were acquired from different depths depending on different regions, with a z interval of 2 and 5 μm. An excitation wavelength of 920 nm was used with a resolution of 1,024 × 1,024 pixels and detected by photomultiplier tubes (G6780-20, Hamamatsu). Mice were anesthetized by isoflurane and maintained with a constant flow from 0.8 to 2.0% (as low as possible according to the physical condition of the mouse). After original images were acquired using Imspector Pro, Bitplane Imaris software was used for further analysis. To obtain high-quality images, brightness, contrast or color balance was regulated manually for the whole images. Immunohistochemistry hCSF1R staining in mouse xenograft bone marrow samples was performed using primary anti-human CSF1R (Cell Signaling, rabbit monoclonal, clone E4T8Z, 28917). Samples were formalin fixed, decalcified in EDTA and paraffin embedded. Heat-mediated epitope retrieval was accomplished using epitope retrieval solution, pH 8 (Novocastra, RE7116). Slides were incubated in primary antibody for 60 min at room temperature at a dilution of 1:180. Biotinylated secondary anti-rabbit IgG (Vector, BA-1000) and streptavidin–HRP reagent (Novocastra, RE 7104) were used for antibody detection. Finally, slides were stained with DAB+ (Agilent Technologies, K3468) and counterstained with hematoxylin Gill’s formula (Vector, H-3401). T cell stimulation assay using plate-bound recombinant protein Ninety-six-well, half area, flat-bottom, polystyrene plates (Corning) were coated overnight at 4 °C with Fc-tagged recombinant protein diluted in 100 µl of PBS at the indicated concentrations. 
The next day, plates were washed with PBS and blocked with 2% bovine serum albumin (BSA) dissolved in PBS for 30 min. After another washing step, 50,000 T cells resuspended in hTCM without cytokines were added. T cell activation was assessed by flow cytometry after 24 h of incubation. Cytotoxicity assays For coculture experiments, 30,000 to 50,000 human AML cells were plated in a flat-bottom, 96-well plate. Tumor cells were cocultured with T cells at the indicated E:T ratio for 48 h in hTCM without supplements unless otherwise specified. Killing was assessed using either a Bio-Glo luciferase assay system (Promega Corporation) according to the manufacturer’s protocol or flow cytometry. Specific lysis was calculated after normalization to control conditions. Cocultures of primary AML blasts or healthy bone marrow cells and CAR-T cells were performed under the conditions outlined above. Killing was quantified by flow cytometry. Proliferation assays Before cocultures (E:T ratio of 0.5:1), T cells were stained using a CellTrace Far Red cell proliferation kit (Thermo Fisher Scientific) according to the manufacturer’s instructions. Trace dilution was measured by flow cytometry at day 7. Cytokine measurements Cytokine levels in coculture supernatants were analyzed using IFNγ and IL-2 ELISAs (BD Biosciences) according to the manufacturer’s protocol. A LEGENDplex mouse cytokine release syndrome panel (BioLegend) was used to analyze serum cytokine levels in mice. Procedures were performed as described by the manufacturer. HSC cocultures Human CD34 + bone marrow- or cord blood-derived HSCs were acquired from Stemcell Technologies. Healthy human bone marrow samples (HD) were obtained from individuals undergoing hip replacement surgery at the University Hospital of the LMU, Munich. All cells were collected after informed consent was obtained, in accordance with the Declaration of Helsinki. HSCs were thawed in a prewarmed water bath at 37 °C. 
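For the luciferase-based cytotoxicity readout described above, a common way to express "specific lysis after normalization to control conditions" is the fractional loss of luminescence relative to tumor-only wells; the exact formula is not given in the text, so this is an assumption:

```python
def specific_lysis_percent(signal_coculture, signal_tumor_only):
    """Percent specific lysis from luciferase luminescence, normalized to
    tumor-only control wells (which define 0% lysis)."""
    return (1.0 - signal_coculture / signal_tumor_only) * 100.0
```

A coculture well retaining 20% of the tumor-only signal would thus be reported as 80% specific lysis.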
Directly after thawing, cells were expanded using StemSpan II medium (Stemcell Technologies) supplemented with serum-free nutrient supply and the small-molecule inhibitor UM729. Flow cytometric analysis of HSCs or of cocultures of HSCs and CAR-T cells was conducted after a total expansion phase of 7 d. Fresh expansion medium was added on day 3. HSCs (30,000) were cocultured with T cells at the indicated E:T ratios for 24 h before flow cytometric analysis. Cocultures of HD and T cells were performed similarly to the cocultures of primary human AML blasts (see earlier). Generation of iMGLs Human iMGLs were generated as previously described 49 , 50 . In brief, human iPSC lines were differentiated into hematopoietic progenitors using a STEMdiff hematopoietic kit (Stemcell Technologies). Following successful development into hematopoietic progenitors, cells were grown in serum-free iMGL differentiation medium containing CSF-1, IL-34 and transforming growth factor-β for at least 12 d. Cells were then collected and used for in vitro cocultures with CAR-T cells. Cocultures of iMGLs and CAR-T cells were performed as described above. Software and statistical analysis Flow cytometric data were obtained with BD FACSDiva or Beckman Coulter software. G*Power software v3.1 was used to calculate group sizes for animal experiments. Luminescence and absorbance were measured with a Mithras reader using MicroWin2000 software. Flow cytometric data were analyzed using FlowJo V10.3 to V10.8.1 software. ImageJ/Fiji and Imaris (Bitplane AG) were used for analysis of TPLSM images. Radiance calculation of BLI images was performed using Living Image 4.4 (PerkinElmer). All statistical analyses were performed using GraphPad Prism software V9.2.0 to V9.5.0. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. 
Data availability Data from publicly available scRNA-seq studies can be found via the following accession numbers or the provided links: GSE116256 (ref. 21 ), (ref. 39 ), (ref. 27 ), GSE134355 (ref. 37 ), (ref. 36 ), GSE131907 (ref. 28 ), GSE115469 (ref. 33 ), (ref. 31 ), GSE136103 (ref. 34 ), (ref. 32 ), (ref. 29 ), (ref. 30 ) and EGAS00001002927 (ref. 35 ). Data from publicly available bulk sequencing studies can be found via the accession numbers E-MTAB-5214 (ref. 82 ) and E-MTAB-2801 (ref. 83 ) via the Expression Atlas ( ). The surface gene library was obtained by integrating publicly available data 23 , 24 , 25 , 26 using OmniPath 22 ( ). Targets of FDA-approved drugs were obtained using DrugBank ( ). All reagents and biological material will be made available upon reasonable request to the authors given the agreement by the providing institution. Code availability Python scripts for replicating the figures from the scRNA-seq data as jupyter notebooks can be found in the GitHub repository at ref. 96 . Count matrices of processed scRNA-seq data will be made available upon reasonable request.
Unlike other forms of blood cancer, acute myeloid leukemia (AML) cannot currently be treated with CAR-T cell immunotherapy. The reason is the lack of specific molecular targets through which engineered immune cells could selectively recognize AML cells and so allow the immune system to attack the cancer. Two research teams, led by Professor Dr. Sebastian Kobold with Dr. Adrian Gottschlich from the Division of Clinical Pharmacology at LMU University Hospital Munich and by Dr. Carsten Marr with Moritz Thomas from the Institute of AI for Health at Helmholtz Munich, have now succeeded in discovering such targets. The results have been published in the journal Nature Biotechnology. AML, one of several forms of leukemia ("blood cancer"), is a treacherous disease. Five years after the initial diagnosis, only one-third of patients are still alive. Up to 85 percent of patients appear to be cured after intensive chemotherapy. However, in more than half of them the disease returns within one to two years because the chemotherapy has not destroyed all leukemia cells. In the event of a relapse, a stem cell transplant is the patient's only hope for a cure. But even then, the long-term probability of survival is less than 20 percent. New treatment options are therefore urgently needed. CAR-T cell therapy is an innovative treatment. CAR-T stands for "chimeric antigen receptor T cell." T cells are cells of the immune system. Cancer cells evade their attacks by using various molecular tricks, so that T cells no longer recognize their opponents, the cancer cells. For CAR-T cell therapy, T cells are first removed from the patient and then genetically engineered to produce a specific receptor protein (the CAR) on their surface. When these CAR-T cells are infused back into the patient's body, the CAR ensures that they recognize their target, in approved therapies the molecule CD19, on the patient's cancer cells and bind to them in a targeted manner. The cancer cells consequently die. 
New targets However, the approved CAR-T cells against CD19 are not suitable for AML, because CD19 is usually not present on the surface of AML cells. Clinical results with CAR-T cells directed against other surface molecules of AML cells have so far been sobering, according to the scientists: the CAR-T cells were unable to distinguish between healthy and malignant cells, with correspondingly significant side effects. The physician Sebastian Kobold and the physicist Carsten Marr, together with colleagues from the LMU University Hospital Munich and the Institute of AI for Health at Helmholtz Munich, therefore set out to find alternative molecules that would ideally be found exclusively on the surface of AML cells. With the help of extensive bioinformatic analyses and the integration of expression data from more than half a million individual cells, two candidates ultimately emerged from 25,000 potential cell surface molecules: CSF1R and CD86. "Such an analysis would not have been possible a few years ago, since the required single-cell data has only been generated very recently," says Marr, who led the AI-assisted analysis of the study at Helmholtz Munich. The researchers produced CAR-T cells in the laboratory of the LMU University Hospital Munich that precisely target these molecules. The cells were then tested on different AML models, including AML cells from patients. The results, according to Kobold, are promising: "On the one hand, these CAR-T cells are effective against AML, but on the other hand, they hardly destroy healthy cells." The study impressively demonstrates how the synergy of interdisciplinary research groups can lead to breakthroughs in health research that serve patients in the best possible way. The researchers' next goal: to develop GMP (good manufacturing practice)-capable processes for producing CAR-T cells that can then also be used in clinical trials with AML patients. 
This is to take place within the framework of the "Bavarian Cell Therapy Catalyst," which is supported by the Bavarian Research Foundation. Kobold expects the first tests with patients in two to three years.
10.1038/s41587-023-01684-0
Biology
White cheeks are more titillating
E. P. Badás et al, Colour change in a structural ornament is related to individual quality, parasites and mating patterns in the blue tit, The Science of Nature (2018). DOI: 10.1007/s00114-018-1539-z
http://dx.doi.org/10.1007/s00114-018-1539-z
https://phys.org/news/2018-02-white-cheeks-titillating.html
Abstract Carry-over effects refer to processes that occur in one season and influence fitness in the following. In birds, two costly activities, namely reproduction and moult, are restricted to a small time window, and sometimes overlap. Thus, colour in newly moulted feathers is likely to be affected by the costs of reproduction. Using models of bird vision we investigated male colour change in a free-living population of blue tits ( Cyanistes caeruleus ) in three sampling occasions: spring 1, winter and spring 2. We related crown, tail, breast and cheek feather colouration after the moult (winter) to the intensity of infections by blood parasites during reproduction (spring 1). In the following spring (spring 2), we explored mating patterns with respect to changes in feather colour (springs 1 vs. 2). Males that were less intensely infected by the malaria parasite Plasmodium while breeding showed purer white cheek feathers in winter, which may indicate higher feather quality. Increased brightness in the white cheek was associated with better body condition during reproduction. In the following season, males with brighter cheeks paired with females that had noticeably brighter cheek patches compared to the male’s previous mate. These results suggest that the conditions experienced during reproduction are likely to affect moult and thus feather colouration, at least in the white patch. High quality individuals may allocate resources efficiently during reproduction increasing future reproductive success through variation in mating patterns. Carry-over effects from reproduction might extend not only to the non-breeding phase, but also to the following breeding season.
Introduction A central tenet of life-history theory is that resources allocated to current reproduction are traded off against self-maintenance and future reproductive output (Stearns 1992; Metcalfe and Monaghan 2001). In birds, a growing body of research has investigated the mechanisms behind the costs of reproduction (reviewed in Harshman and Zera 2007; Blount et al. 2016), and the costs per se, which can come as a reduction in survival (Santos and Nakagawa 2012), for example, through accelerated ageing (Bize et al. 2009; Badás et al. 2015). Others have focused on the downstream effects of non-breeding season processes on reproduction, commonly known as ‘carry-over effects’ (Gunnarsson et al. 2006; Robb et al. 2008; Sorensen et al. 2009). However, few studies have explored the effects of reproduction on the subsequent non-reproductive season, when reproductive activities, in fact, could influence the outcome of the following breeding event (Dreiss and Roulin 2010; Harrison et al. 2011). Reproduction can exert changes in the individual’s post-breeding activities such as the moult. Moulting is an energetically demanding process (Griggio et al. 2009), because it encompasses physiological costs (i.e. altering multiple stress response pathways, Merino and Barbosa 1997) and metabolic costs (a 30% increase in metabolic rate) (Cyr et al. 2008). Many birds initiate the post-nuptial moult while still raising young (Jenni and Winkler 1994), but because these activities are highly demanding, they should be separated in time. Indeed, passerines that were already moulting while breeding had reduced fledgling success (Sanz 1999; Hemborg et al. 2001; Morales et al. 2007). Delayed reproduction can compromise the time allocated for moulting, thus reducing feather quality, as has been reported in starlings ( Sturnus vulgaris ) (Dawson et al. 2000).
Furthermore, Nilsson and Svensson (1996) showed that blue tits ( Cyanistes caeruleus ) that delayed moult had higher thermoregulatory costs in the following winter, and this resulted in reduced over-winter survival and breeding success the following year. Thus, the effects on feather synthesis become evident when reproductive effort exceeds what individuals were prepared to sustain. Additional information is needed on whether the individual’s status during reproduction has an important bearing on feather quality. Data on recently moulted birds in free-living populations are scarce because re-trapping the same individuals repeatedly is difficult (Dawson et al. 2000). The costs of reproduction on feather quality could be assessed through colouration, because colours are incorporated into new feathers during moult (Hill and McGraw 2006). In fact, because plumage colours are produced through different metabolic pathways depending on their nature (structural or pigmentary), they are subject to different constraints that may convey information to prospecting mates (Hill 2006a). For example, in eiders ( Somateria mollisima ), it has been suggested that reproductive females with reduced lymphocyte levels may suffer from infections in their following moult, which could reduce the reflectance of the white plumage bands (Hanssen et al. 2006). In blue tits, experimentally increasing reproductive effort produced changes in feather colouration in two ornaments in the year following manipulation: the yellow breast and the blue crown (Doutrelant et al. 2012). Other mechanisms, such as soiling (Fitzpatrick 1998) or feather degrading bacteria (Shawkey et al. 2007) can also explain changes in colour during the season, but to date, no study has evaluated these changes with respect to parasitic infections. Because reproduction can affect immunocompetence (Hanssen et al.
2003 ), it is common that bird populations in temperate regions suffer from chronic blood parasite infections with relapses during the breeding season (Valkiūnas 2005 ). The negative effects that these parasites exert on the host are well known (Merino et al. 2000 ; Martínez-de la Puente et al. 2010 ; Asghar et al. 2015 ); however, more studies are needed to explore the effects of parasitic infections during the breeding season on subsequent feather colouration. Strong immune responses (i.e. against parasitic infections) may have negative effects on the moult, decreasing the amount of resources available and resulting in a delayed onset of post-nuptial moult (Sanz et al. 2004 , but see Moreno et al. 2001 ). Indeed, experimental studies have shown that certain aspects of structural colouration can signal food stress (Siefferman and Hill 2005 ) or acute parasite infection during the moult (Doucet and Montgomerie 2003 ). Similarly, many studies have related dull carotenoid-based ornaments to high parasite loads (reviewed in Hill 2006b ), but others have failed to find such negative relationships (Seutin 1994 ; Fitze and Richner 2002 ). In this study, we investigated colour change in plumage patches that differ in their main mechanism of colour production (structural, pigmentary, or both) and related these changes to parasitic infections during reproduction and to other breeding parameters. In the blue tit, coherent scattering leads to the production of the structural blue plumage colours seen in the crown, while incoherent scattering is responsible for the achromatic white feathers found in the cheek (Prum 2006 ). Carotenoid pigments are obtained from food and deposited in feathers (McGraw et al. 2002 ) producing, among others, the yellow colouration seen in the blue tit’s breast feathers. Aside from the crown, cheek and breast, we also measured feather colouration in another patch for which less information is available in the literature, the blue base of the tail. 
This plumage patch has been described as sexually dichromatic in nestlings (Johnsen et al. 2003), and it is likely to be relevant for sexual selection in the blue tit. Studies investigating several ornaments simultaneously are increasing (Doucet and Montgomerie 2003; Hegyi et al. 2007; Galván 2010), but changes in feather colour in multiple ornaments are understudied. Furthermore, adult blue tits undergo a complete moult once a year; hence, the plumage achieved during moult will be carried until the end of the next breeding season (Nilsson and Svensson 1996). For this reason, the carry-over effects from reproduction on moult and feather colouration may have an effect on mating patterns in the following reproductive event. First, we aimed at relating individual quality during reproduction to change in male feather colour (spring 1 vs. winter). Individual quality was evaluated by measuring body condition and the intensity of infections by several blood parasites (avian malaria and malaria-like parasites) during the highly demanding nestling provisioning phase. In poor quality individuals, we expect reproductive costs to negatively affect the feather colouration obtained after the moult; therefore, these birds should show duller or less pure colour depending on the feather patch. High-quality individuals may be able to cope with the costs of reproduction and either (i) increase brightness/saturation or (ii) maintain brighter/more-saturated feather colours. Moreover, if performance in the previous reproductive event was high, an individual may gain higher quality partners in the following season (see Griggio et al. 2009), although this has been unexplored so far. On this basis, the second aim of this study was to investigate the effects of male colour change and breeding parameters on mating patterns in the consecutive breeding season (spring 2).
Using discrimination models that take into account, the blue tit’s photoreceptor spectral sensitivities (Endler and Mielke 2005 ; Stevens 2011 ), we evaluated the change in the mate’s feather colouration between seasons (spring 1 vs. spring 2). Methods Study site and sampling Data were collected during the 2013 (spring 1 and winter) and 2014 seasons (spring 2) on a free-ranging population of blue tits breeding in a deciduous forest of Pyrenean oak ( Quercus pyrenaica ), in the vicinity of Valsaín (Segovia), central Spain (40° 53′ N, 4° 01′ W, 1200 m.a.s.l.), where 300 wooden nestboxes have been in place since 1994 (Fargallo & Merino, 1999 ). Breeding birds in springs 1 and 2 were caught at the nestbox during chick provisioning (when nestlings were 3 days old, hatching date = day 0), while birds caught in winter (2013) were attracted to mist nets using blue tit specific playback calls. At every sampling occasion, 2–3 nets (24–36 m each) were set up for 1–2 h, and then they were moved to a different location within the vicinity of the deciduous forest. Bird captures took place in 6 days (dates: 1, 2, 3, 9, 10 and 24 of November 2013) during 4–5 h each day depending on climatic conditions. Unringed birds were individually marked with a numbered aluminium leg-ring. First-years were identified (if age not known from ringing records) by possession of distinctive, non-adult greater wing coverts (Svensson 1992 ). We also recorded tarsus length to the nearest 0.01 mm using callipers and weight to the nearest 0.1 g using an electronic balance. These measurements were used to calculate individual body mass, corrected by regression for body size (tarsus length) and time of day by using the equation from Senar ( 2002 ). At spring 1, we took a blood sample via the brachial vein. One drop of blood was stored on an FTA card (Whatman, UK) for molecular analyses (parasitological analyses, see below). 
We also measured feather colour reflectance on four different patches in males and females: breast, cheek, crown and base of the tail. First, we evaluated the change in colour for each patch before and after the moult (spring 1 vs. winter). Second, we explored the change in colour between winter (2013) and the following season (spring 2, 2014). Finally, for a subsample of males ( N = 13, see below), we described colour and luminance differences between seasons (spring 1 vs. spring 2) and compared this to colour change between their female partners (female pair from spring 1 vs. female pair from spring 2). We used these values in subsequent analyses (see below). Moult stage was recorded both at the breeding season (springs 1 and 2) and winter captures. One individual had already started moult in spring 1 (as of 28th of June). However, this male was not recaptured in winter. By the time they were recaptured in November of season 1, all individuals had already finished moulting. None of the birds used in this study had started moulting when captured at nestling age 3 in springs 1 or 2. Parasite quantification (spring, season 1) For all samples, DNA was extracted from blood using a standard ammonium-acetate protocol and stored at − 20 °C. This DNA solution was then purified using silica filters to obtain a higher quality DNA (NZYGel pure, NZYtech, Lda. -Genes and Enzymes). DNA samples were quantified by spectrophotometry and adjusted to the same concentration (10 ng/uL). We detected and quantified the following parasites using quantitative PCR (qPCR) with SYBR green (SYBR Selected Master Mix, Applied Biosystems) to amplify a fragment of the cytochrome b or 18S rRNA genes using a pair of species-specific primers for each parasite: Haemoproteus majoris haplotype cyan2, Plasmodium spp. haplotype cyan1, Lankesterella valsaininesis and Leucocytozoon spp. haplotypes leuA, leuA1 and leuB. The variable Leucocytozoon A includes haplotypes A and A1 (see Badás et al. 
2015 for more information on the primers used). Models of bird vision (seasons 1 and 2) Colour spectra were collected with the use of a spectrophotometer (Ocean Optics Inc., Dunedin, FL, USA) connected to an Ocean Optics fibre-optic reflection probe. The probe was made up of seven optical fibres that were illuminated by a Pulsed Xenon Light Source (Jaz-PX lamp), and it was inserted in a miniature black chamber that acted as holder and excluded ambient light. The equipment was calibrated with a flat white standard (Ocean Optics) prior to each patch measured. The probe was lifted between repeated measurements within a body region. Reflectance measurements from 300 to 700 nm were taken at 90° incidence and 3 mm from the feather surface over an illuminated circular area approximately 1 mm in diameter. Each spectrum was an average of three scans and was calculated relative to the reflectance produced by the white standard and a dark current. To model the UV-sensitive (UVS) blue tit visual system, we used their known photoreceptor spectral sensitivities (Hart et al. 2000) and calculated the relative quantum (photon) catch values for the four single cones, used in colour vision, and the double cones, used in luminance vision (Endler and Mielke 2005; Stevens et al. 2009). From this, we extracted hue, saturation, and luminance variables for each colour patch (Endler and Mielke 2005; Stevens et al. 2009). Although hue and saturation colour variables may not necessarily relate to colour perception in birds, avian visual models that incorporate the cone sensitivities of the bird’s retina and light conditions have proved to be the most widely used approach to model avian colour vision and colouration (Stoddard and Prum 2008; Kemp et al. 2015). Luminance refers to the perceived lightness of a patch (brightness); so, we simply used the double cone photon catch values.
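As a rough illustration of the quantum-catch step described above (not the authors' actual code: the spectra below are invented placeholders, whereas the study used measured reflectance over 300-700 nm, 'd65' irradiance and the published blue tit cone sensitivities of Hart et al. 2000), the calculation reduces to a weighted sum over wavelengths:

```python
import numpy as np

# 1-nm steps over the measured 300-700 nm range
wavelengths = np.arange(300, 701)

def quantum_catch(reflectance, irradiance, sensitivity):
    """Sum reflectance * irradiance * cone sensitivity across wavelengths."""
    return float(np.sum(reflectance * irradiance * sensitivity))

# Toy inputs: flat illuminant, Gaussian UVS-like cone, sloping reflectance
irradiance = np.ones_like(wavelengths, dtype=float)
uvs_cone = np.exp(-((wavelengths - 370.0) ** 2) / (2 * 30.0 ** 2))
reflectance = np.linspace(0.2, 0.8, wavelengths.size)

q_uv = quantum_catch(reflectance, irradiance, uvs_cone)
```

In practice one such catch is computed per cone class (UV, SW, MW, LW, and the double cone for luminance) and the catches are then standardised before colour-space analysis.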
Saturation refers to the amount of colour compared with white light, and it was obtained by plotting the standardised single cone catch data for each individual in avian tetrahedral colour space (Stevens et al. 2009) and calculating the distance from the centre of the colour space (following Endler and Mielke 2005). Values were generated using ‘d65’ irradiance levels (Badás et al. 2017). To calculate hue or colour type, we derived colour channels using ratios of the photon catch outputs for each patch. The four single cone types in bird vision are categorised according to the wavelengths that stimulate them most: ultra-short (UV), short (SW), medium (MW) or long (LW) (Cuthill 2006). Hue was then calculated as the ratio of cone catch values: ‘(LW + MW + UV) versus SW’ for the yellow breast feathers (Badás et al. 2017), ‘(SW + UV) versus (MW + LW)’ for the crown, and ‘(MW + UV) versus (SW + LW)’ for the tail. This approach is broadly inspired by the way that opponent colour channels work in vision in encoding antagonistic colour types (Osorio et al. 1999) and is based on recent work following the same methods (Evans et al. 2010; Komdeur et al. 2005; Spottiswoode and Stevens 2011; Stevens et al. 2014). Note that we are not suggesting that the ratio used here is actually present in avian vision, but rather that it offers a logical and intuitive way to describe variation in hue. Hue was not calculated for the white cheek because this is an achromatic plumage patch. Following calculation of photon catches, we determined colour contrasts using a model of visual discrimination that accurately predicts discrimination behaviour in observers (Vorobyev et al. 1998). By using the single cones, we extracted colour differences (Vorobyev et al. 1998), and using the double cones we obtained luminance (achromatic) differences (Siddiqi et al. 2004).
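The per-patch hue ratios just described can be written as a small helper. This is an illustrative sketch only; the standardised cone-catch values used below are hypothetical, not taken from the study:

```python
def hue_channels(q):
    """Opponent-style hue ratios from standardised single-cone catches:
    breast '(LW + MW + UV) vs SW', crown '(SW + UV) vs (MW + LW)',
    tail  '(MW + UV) vs (SW + LW)'."""
    return {
        "breast": (q["lw"] + q["mw"] + q["uv"]) / q["sw"],
        "crown": (q["sw"] + q["uv"]) / (q["mw"] + q["lw"]),
        "tail": (q["mw"] + q["uv"]) / (q["sw"] + q["lw"]),
    }

# Hypothetical standardised catches (sum to 1); no cheek entry, since the
# white cheek is achromatic and hue is not computed for it
hues = hue_channels({"uv": 0.15, "sw": 0.25, "mw": 0.30, "lw": 0.30})
```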
When modelling, we used the retinal single cone proportions of the blue tit available in the literature (long wave = 1.00, medium wave = 0.99, short wave = 0.71 and UVS = 0.37; Hart et al. 2000), and Weber fractions were set to 0.05 for all cones in both chromatic and achromatic contrasts. Some authors have suggested that the appropriate Weber fraction for the long wave cones may be 0.1 (Lind 2016), and thus we also computed chromatic and achromatic scores with this value. These models gave qualitatively the same results (shown in the Online Resource). Colour contrasts are expressed in ‘just noticeable differences’ (JND scores), where generally, a JND of less than 1.00 indicates that two stimuli are indistinguishable; values between 1.00 and 3.00 should be difficult to discriminate except under optimal viewing conditions, and larger values allow increasingly easy discrimination (Siddiqi et al. 2004). Then, for each individual and colour patch, by calculating chromatic and achromatic colour contrasts and reporting JND scores, we evaluated the change of colour between different periods. Statistical analyses All analyses were performed in R v.3.1.3 (R Foundation for Statistical Computing, Vienna). We used saturation and luminance (not hue) in the analyses for each colour patch. Yellow hue and saturation were highly correlated (spring: r = 0.87, p < 0.001; winter: r = 0.81, p < 0.001), and we chose to use saturation rather than hue as it most consistently reflects feather carotenoid content across species (Saks et al. 2003; McGraw and Gregory 2004). Hue and saturation were also correlated in the blue crown (spring: r = 0.99, p < 0.001; winter: r = 0.98, p < 0.001) and blue-green tail (spring: r = 0.91, p < 0.001; winter: r = 0.90, p < 0.001) plumage, and again, we used saturation rather than hue.
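The receptor-noise discrimination model can be sketched as follows for a tetrachromat, using the blue tit cone proportions and the 0.05 Weber fraction quoted above. This is a generic sketch of the Vorobyev-Osorio formulation, not the study's own code, and the two quantum-catch vectors are invented for illustration:

```python
import itertools
import math

# Receptor noise per cone: e_i = Weber fraction / sqrt(relative proportion),
# with the blue tit proportions from the text (LW 1.00, MW 0.99, SW 0.71, UV 0.37)
CONES = ("uv", "sw", "mw", "lw")
RATIOS = {"uv": 0.37, "sw": 0.71, "mw": 0.99, "lw": 1.00}
WEBER = 0.05
NOISE = {c: WEBER / math.sqrt(RATIOS[c]) for c in CONES}

def chromatic_jnd(qa, qb):
    """Chromatic contrast (in JNDs) between two single-cone quantum-catch dicts."""
    df = {c: math.log(qa[c] / qb[c]) for c in CONES}  # log (Fechner) signals
    num = 0.0
    for i, j in itertools.combinations(CONES, 2):
        # each cone pair weights the signal difference of the other two cones
        k, l = [c for c in CONES if c not in (i, j)]
        num += (NOISE[i] * NOISE[j]) ** 2 * (df[k] - df[l]) ** 2
    den = sum((NOISE[a] * NOISE[b] * NOISE[c]) ** 2
              for a, b, c in itertools.combinations(CONES, 3))
    return math.sqrt(num / den)

def achromatic_jnd(qa, qb, weber=WEBER):
    """Luminance contrast from double-cone catches (Siddiqi et al. 2004 style)."""
    return abs(math.log(qa / qb)) / weber

# Hypothetical catches for two feather patches
patch_a = {"uv": 0.20, "sw": 0.35, "mw": 0.50, "lw": 0.60}
patch_b = {"uv": 0.18, "sw": 0.36, "mw": 0.52, "lw": 0.58}
delta_s = chromatic_jnd(patch_a, patch_b)
```

Scores below about 1 JND would then be read as indistinguishable, 1-3 as hard to discriminate, and larger values as increasingly easy, as in the text.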
In winter, females had lower recapture probability than males (χ²₁ = 12.63, p < 0.001, N = 40), probably because the method of capture using species-specific playback calls might preferentially attract males. Thus, females were not included in the analyses (data on breeding parameters and parasite infection were available only for one female). From 78 breeding pairs in breeding season 1, we were able to recapture 21 males in winter (annual survival rates of adult blue tits are similar in other European populations, see Dhondt et al. 1998). Due to limited blood volumes for molecular analyses on four individuals and two individuals for which colour data could not be obtained because of measurement error, data on breeding parameters and parasitic infections during reproduction were available for 15 individuals that were included in the analyses. To explore the differences in colour between spring 1 and winter, we fitted a linear mixed model for each feather patch with one of the colour variables as the response variable. For each dependent variable, a set of biologically meaningful models was designed in order to explore variation in feather colour before and after the moult. All models included sampling occasion as a fixed factor and individual identity as a random factor in order to control for repeated measures on the same individual. Alternative models may also include a maximum of three of the following predictors (maximum number of parameters to be estimated k = 5 including intercept and one interaction at a time due to reduced sample size): age, date of winter sampling, hatching date, body mass, and parasite infection intensity by one of each blood parasite species (see parasitological analyses above). Parasite intensity variables were cubic root transformed and classified by quantiles in order to categorise data in two meaningful groups: low and high intensity of parasitic infections.
Models included the interaction between sample (spring or winter season 1) and one parasite species at a time due to limited sample size (we specifically tested for the interaction, as seen in Knowles et al. 2010 ). Different parasite species were analysed as competing hypotheses in which we explored whether parasite load by one malaria or malaria-like parasite may affect the moult on different feather patches. We chose to explore the effects of infection in feather colour change in different analyses for each species because these may have different virulence levels and thus different parasitaemia. Because we were interested in the change in feather colour with respect to individual status at the breeding season, winter body mass was not included in the analyses. Moreover, spring body mass was correlated to winter body mass ( t = 2.84, df = 17, correlation coefficient: r = 0.57, p = 0.01). Date of sampling was incorporated into the set of models to account for its effect on feather colouration and infection probability. Age was obtained from previous ringing records and coded into a three level score due to reduced sample size of older individuals: 1 = first-years ( N = 9), 2 = second-years ( N = 6), 3 = third-years and older ( N = 6). The final most parsimonious model was selected based on AIC (Akaike Information Criterion) via its corrected version for small sample sizes (AICc, Sugiura 1978 ). When the difference in AICc between two or more models is less than 10 AIC units ( Δ AIC < 10), they are thought to be reasonably well-fitted models (Bolker et al. 2009 ). When this was the case, results from models with similar support are presented in tables and ordered according to their AIC weight (AIC w , see Table S1 in the Online Resource). To quantify the relative importance of individual variables within the selected model, we calculated model weights (Johnson and Omland 2004 ). 
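The AICc-based selection and Akaike weights described above follow standard formulas (AICc = AIC + 2k(k+1)/(n − k − 1), Sugiura 1978). A minimal sketch, with invented log-likelihoods rather than values from the study:

```python
import math

def aicc(log_lik, k, n):
    """Second-order Akaike Information Criterion for k parameters, n samples."""
    return -2.0 * log_lik + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Normalise relative likelihoods exp(-0.5 * deltaAICc) to model weights."""
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

# Three hypothetical candidate models fitted to the N = 30 males
scores = [aicc(-40.2, 3, 30), aicc(-39.5, 4, 30), aicc(-39.4, 5, 30)]
weights = akaike_weights(scores)
```

Models within a small deltaAICc of the best one would be retained as competing models, and their weights used to rank the variables they contain.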
Due to reduced sample size, and in order to be conservative, we further confirmed model support by using estimates of significance between the selected versus the null model. These were obtained by parametric bootstrap procedures (‘PBmodcomp’ command from the R package pbkrtest following Halekoh and Højsgaard 2014, not shown). In addition to this, model parameters and 95% confidence intervals for the main effects in selected models are shown in Tables S1 and S2, and they were calculated from 1000 bootstrapped iterations derived with ‘bootMer’ (from the R package lme4, Bates et al. 2014). Finally, we explored whether changes in male colour variables between seasons 1 and 2 explained better performance in spring 2. Out of the 21 individuals captured between spring 1 and winter, 13 males were recaptured again in spring 2 (colour data were only available for 11 males due to measurement error). We calculated the rate of change in saturation and brightness between breeding seasons for patches where we had previously found significant change (white cheek): (C₂ − C₁)/C₁, where C₁ refers to colour in spring 1 and C₂ to colour in spring 2. Then, this change was related to (i) clutch size change between seasons 1–2, (ii) hatching date change between seasons 1–2 and (iii) the JND scores describing perceptible differences in colour between the female partner in season 1 versus the female partner in season 2. Four males paired with the same female in season 2, but we believe that excluding these males from the analysis was not necessary because the JND contrast between the same female could still provide information about changes in female feather colouration between seasons (i.e. higher quality females may change more between seasons if their feather colouration was duller before the moult). Thus, JND scores could still reflect individual quality given our premises. The relationship between male colour change and other breeding parameters (i.e.
number of fledglings) was not checked because a post-hatching experiment conducted in season 2 could potentially affect these parameters. We aimed to look at trends shared between colour variables by using Pearson correlations, instead of using more complex modelling due to reduced sample size. In relatively small samples like the present one, assumptions of normality are likely to be violated, whereas non-parametric bootstrapping allows us to compute new parameter estimates without making assumptions on the form of the population (Fox 2002). Thus, we further complemented the significant results obtained by Pearson correlations through robust regression models or with those calculated from 1000 bootstrapped iterations derived from resampling with the function ‘bootCase’ (from the R package car, Fox and Weisberg 2011). Effect sizes for all analyses are reported as Cohen’s d (Cohen 1988), which are generally interpreted as small (0.2), medium (0.5) and large (0.8). Results Colour change from spring to winter in season 1 Plumage saturation in the white cheek changed significantly before and after the moult depending on the infection by Plasmodium parasites ( t = − 2.34, p = 0.036, effect size ES = 0.57; Table 1, cheek saturation model 1). The overall trend was that individuals decreased saturation in their white cheek feathers after the moult. Note that, in contrast to saturation changes in other ornaments (i.e. blue crown or yellow breast feathers), more saturation in white feathers may indicate that this is a less pure feather patch (Badás et al. 2017). Indeed, in this study, males that were more intensely parasitized by Plasmodium during the breeding season decreased saturation significantly less (Fig. 1, for graphical purposes, we represent mean changes between sampling occasions).
Saturation in yellow-breast blue tit feathers increased after the moult, although this increase was marginally non-significant ( t = − 1.87, p = 0.08, effect size ES = 0.48; Table S1). Such marginal increase was unaffected by parasite loads or other parameters during the breeding season. No significant change was detected for the blue crown and the blue-green tail feathers before and after the moult (Table S1), but there was a trend that males that were more parasitized by Leucocytozoon A grew more saturated blue crown feathers after moulting ( t = − 1.9, p = 0.08, effect size ES = 0.56; Table S1; Fig. S1 a). Finally, males with more saturated blue-green tail feathers had higher body mass, irrespective of sampling occasion ( t = 2.31, p = 0.03, effect size ES = 0.54; Table S1). Table 1 Best-fitted models explaining colour change in blue tits using the Akaike’s second-order Information Criterion (AICc). When ∆AICc < 10 units, all competing models are shown (see main text). Variables included in each model are marked with an X. AIC w refers to the weight of each new variable included in the models showing similar support. Only models that were supported after significant parametric bootstrap against the null model are shown (explaining why in some cases ΣAIC w ≠ 1). A total of 30 males were included in this analysis. Fig. 1 Chromatic change between spring and winter in the white cheek in relation to Plasmodium infection during spring. Bars indicate standard errors. Patterns of achromatic change from spring 1 to winter were related to individual quality during spring 1. The increase in white cheek brightness after the moult was positively correlated with higher body mass during the breeding season ( t = − 2.74, p = 0.017, effect size ES = 0.72; Table 1, cheek brightness model 1; Fig. 2).
We also detected that males that were more parasitized by Haemoproteus tended to grow marginally brighter yellow breast feathers ( t = − 1.95, p = 0.08, effect size ES = 0.69; Table S2; Fig. S1 b) and marginally brighter blue-green feathers at the base of the tail ( t = − 1.89, p = 0.08, effect size ES = 0.7; Table S2; Fig. S1 c). Intense infections by the parasite Leucocytozoon B during spring 1 had a marginally non-significant negative effect in brightness in the blue-green base of the tail in winter ( t = 1.96, p = 0.07, effect size ES = 0.71; Table S2; Fig. S1 d). Finally, no significant change was detected for the blue crown brightness (Table S2). Mean JND contrasts between each individual before and after the moult are shown in Table S3. Fig. 2 Achromatic change in the white cheek in relation to spring body mass ( F 2,10 = 4.67, p value = 0.0382, R 2 = 38.54%, N = 14). Body mass (g) is expressed as the corrected body mass following Senar ( 2002 ). Regression line and ± 95% confidence intervals (shaded area) are shown. Note that confidence intervals were calculated from 1000 bootstrapped iterations that control for reduced sample size and presence of outliers (see main text). Male colour, female partner and breeding parameters in season 2 Because we found significant differences in cheek colour variables before and after moult (spring vs. winter season 1, see Table 1), we used these variables in subsequent analyses for season 2. There were no significant associations between (i) change in cheek saturation between seasons and breeding parameters in season 2 (only hatching date and clutch size, see methods section, all p values > 0.5), (ii) change in cheek saturation between seasons and JND scores for female differences between seasons (all p values > 0.5) or (iii) change in cheek brightness between seasons and breeding parameters in season 2 (only hatching date and clutch size, see methods section, all p values > 0.5).
Non-significant results are shown in Table S3 (Online Resource). However, the change in cheek brightness was significantly related to the JND scores describing female differences between seasons. The more pronounced the increase in male cheek brightness after the moult, the higher the JND scores describing female differences between seasons. That is, in season 2, males with brighter cheeks paired with females that were more different in their cheek luminance than the female pair they had in the previous season (bootstrapped estimate = 0.089, sup. 95% CI = 0.18, inf. 95% CI = 0.003, R 2 = 0.425, F 1,9 = 8.39, p value = 0.018, N = 11, effect size ES = 1.93, Fig. 3). Because JND scores only state the magnitude of the absolute differences between two colour spectra, we confirmed that the female pair in season 2 had indeed brighter cheek feathers than the same male’s partner in season 1 (robust regression model R 2 = 0.39, Chi-sq = 6.17, df = 1, p value = 0.013, N = 13, effect size ES = 1.9). Mean JND contrasts between each individual comparing feather colour in season 1 versus season 2 are shown in Table S3. Fig. 3 Change in male white cheek luminance between seasons in relation to female partners' JND scores ( F 1,9 = 8.39, p value = 0.018, R 2 = 42.5%, N = 11). Female JND scores were obtained as achromatic colour differences between the male's partner in season 1 versus the same male's partner in season 2. Regression line and ± 95% confidence intervals (shaded area) are shown. Discussion This study investigated feather colour change between seasons in relation to breeding characteristics from the previous reproductive season. We offered correlational evidence that, in blue tits, colour change in the white cheek was related to body mass and the intensity of Plasmodium infections while breeding.
Additionally, after controlling for individual differences prior to moult, we found that males with a more pronounced increase in white cheek brightness paired with brighter females in season 2 compared to the females they paired with in season 1. The change in structural white colouration in the blue tit was thus related to pair formation in the consecutive breeding season. However, no differences were found between seasons in plumage patches such as the blue crown or the blue-green tail. This is unexpected, especially because previous studies have shown that newly moulted feathers, for example in the blue crown, were brighter (Örnborg et al. 2002 ). Blue tit crown coloration may change within a given year (Örnborg et al. 2002 ) and also between years with individual age (Delhey et al. 2006 ). The apparent lack of colour change between seasons in structurally blue feathers in the present blue tit population could be explained by the time of year the spectral samples were taken. Here, the post-breeding reflectance spectra were not sampled soon after breeding (i.e. in summer) but after moult had been completed (in winter). It is possible that the observed feather colouration had already faded to resemble the colours of the previous spring. For example, the differences may have been greater if the samples had been collected immediately after the moult and long before the next breeding season (Delhey et al. 2010 ). In this study, we also found a relationship between the change in brightness in unpigmented feathers and body mass during the breeding season. Brightness in white feathers is produced by large, randomly organised air vacuoles in the barbules (Prum 2006 ), and these vacuoles are absent in less bright white feathers, as seen in the rock ptarmigan Lagopus mutus (Dyck 1976 ).
In the blue tit, the particular arrangement of barbules in the white cheek could be related to individual status during reproduction, because feathers are moulted immediately after the breeding season (Svensson and Nilsson 1995 ). Male blue tits that were heavier during the breeding season (spring 1) might have started the moult with more resources to allocate to plumage maintenance, which has been shown to increase feather brightness (Griggio et al. 2010 ). Our study adds to a growing body of evidence that white plumage reflectance may signal individual quality (Griggio et al. 2011 ; Zanollo et al. 2012 ; Ruiz-De-Castañeda et al. 2015 ). In addition, we offer, for the first time, correlational evidence that the conditions experienced during the breeding season may have an effect on mating patterns in the following season (but see Doucet et al. 2005 ). We found that male blue tits that developed brighter cheeks in spring 2 paired with brighter females when these were compared to the females they mated with in spring 1. This suggests that there may be assortative mating with respect to white plumage colouration in the blue tit. In the same species, this was found for the ultraviolet colouration of the crown (Mahr et al. 2012 ). Brighter male blue tits in the present population may have been able to attract better quality females in the following spring if, for example, they benefited from better body condition during winter. Accordingly, we found that brighter males were in better body condition during winter (spring and winter body mass were correlated). Passerines like the blue tit might signal body condition during winter because this may have great relevance for the subsequent breeding season by providing better access to food resources and a higher probability of establishing a territory in spring (Smith and Nilsson 1987 ).
Indeed, brighter achromatic patches and larger white ornaments have been related to higher female quality in pied flycatchers breeding in the same area (Cantarero et al. 2017 ; López-Arrabé et al. 2014 ). And in the barn owl ( Tyto alba ), adults that became whiter performed better than in the previous year (Dreiss and Roulin 2010 ). Unfortunately, in this study, we were unable to explore breeding success as a result of mating with more highly ornamented females in spring 2 because a post-hatching experiment was taking place during the 2014 season. Still, we present, for the first time, data on feather colour change after the moult that is associated with mating patterns in the consecutive season, further supported by a discrimination model that takes into account the birds’ visual system (but see Griffith 2000 ; Gustafsson et al. 1995 ). Another noteworthy point is that a single ornament may provide different information on the overall quality of an individual via different colour characteristics, in accordance with the ‘multiple messages hypothesis’ (Møller and Pomiankowski 1993 ). In this study, we found this pattern in the white cheek (via brightness and saturation). Male blue tits that grew more saturated white cheeks were more intensely infected by Plasmodium in the spring of season 1, while cheek brightness was related to body mass (see above). It is possible that greater saturation in white patches signals poorer individual quality, because white colours should be less saturated (Badás et al. 2017 ). Surprisingly, the relationships between feather colouration and parasite load by several malaria or malaria-like parasites yielded conflicting results. As opposed to what we found in white cheek feathers, other haemosporidian species were marginally related to an increase in feather colouration (see Tables S1 and S2 and Fig.
S1 : higher intensity of Haemoproteus tended to be associated with increased breast and tail brightness, and higher Leucocytozoon A parasite loads tended to be related to increased crown saturation after the moult). In contrast, male blue tits that were more intensely infected by Leucocytozoon B parasites developed marginally duller tail feathers. It seems that the effects of parasitic species on feather colour change could vary between seasons, because in a previous study during the 2012 breeding season in the same population (Badás et al. 2017 ), males that were more intensely infected with Haemoproteus (as opposed to Plasmodium in this study) had more saturated white cheeks. Two hypotheses can be proposed to explain why several parasite species were found to affect colouration differently in different reproductive seasons: (i) certain parasites could increase their level of virulence depending on environmental conditions (Møller et al. 2013 ) or (ii) infections by a certain parasite could be positively correlated with infections by another, undetected parasite that disrupts feather structure in the observed patch. For example, wild turkeys ( Meleagris gallopavo ) suffering from coccidiosis had reduced UV reflectance in a structural plumage patch (Hill et al. 2005 ). Although speculative at the moment, individuals suffering from avian malaria could be infected by other parasites such as coccidians ( Isospora sp.), which have been found to infect blue tits in our population (del Cerro, S., unpublished data). In fact, multiple infections with parasites other than haemosporidians are common in this blue tit population (Merino et al. 1997 ; del Cerro et al. 2010 ). Facilitation of secondary infections when individuals are already infected has been reported in humans (Nacher et al. 2002 ); and in birds, these could be driven by MHC alleles that alter the competitive interactions between malaria parasites (Loiseau et al. 2008 ).
However, we cannot exclude the possibility that the observed associations respond to complex interactions between immune system responses and feather synthesis during the moult (Sanz et al. 2004 ; Serra et al. 2007 ; Orledge et al. 2012 ), so marginally significant results should be interpreted cautiously. The challenge in future studies will be to distinguish empirically whether different ornaments are redundant or non-redundant by exploring the behaviour they elicit from a recipient (Partan and Marler 1999 ). Our results, although correlational, suggest that better performance during the reproductive season (i.e. higher body mass and/or less intense infections by blood parasites) may have important implications for the following breeding event. Blue tit males that were in better body condition at the highly demanding nestling provisioning stage were able to develop brighter white cheek feathers after the moult. This might have enabled them to find brighter females than those they paired with in the previous spring. Although limited to one between-year shift, we also offer the first correlational evidence that intense infections by Plasmodium during a costly reproductive stage might have consequences after the moult. A visual discrimination model confirmed that the colour differences could be perceived by conspecifics. This study sets the basis for further experimental studies on the carry-over effects of reproduction on ornamentation (but see Doutrelant et al. 2012 ) and mating patterns. Allocating resources efficiently during reproduction to immune defence and self-maintenance may increase the resources available for the moult and thus affect mating patterns in the following reproductive period.
Male blue tits with brighter white cheeks are healthier and more likely to mate with higher quality partners than their counterparts with duller cheek feathers. Having purer white cheeks also indicates that a blue tit was better able to overcome a parasitic infection during the previous year. This is according to Elisa Pérez Badás of the Museo Nacional de Ciencias Naturales in Spain, lead author of a study published in Springer's journal The Science of Nature. Previous research has shown that the food consumed by a bird, as well as its general well-being, can influence the colour of its feathers. Scientists also know that hardships suffered by birds in one season can carry over into the next. In this study, Badás and her research team wanted to test whether difficulties encountered by the blue tit (Cyanistes caeruleus) during the breeding season might influence the intensity of the new blue, white and yellow feathers that grow once these birds have moulted. In the life cycle of this small bird, which is widespread in forests in Europe and Western Asia, moulting only happens once the breeding season is completed. The birds therefore show off their new plumage until the end of the next breeding season. To test their assumptions, the research team monitored a population of blue tits living in a forest in central Spain over the course of two breeding seasons. In the first season, the researchers caught the birds and took blood samples to detect whether the blue tits suffered from parasitic infections. The team also used a spectrophotometer to gauge the colour spectrum of the birds' feathers. These results were compared with the hues, levels of saturation and luminance that blue tits are known to see. In the following season, the researchers noted the birds' mating patterns, and how these were influenced by changes that might have occurred in particular birds' feather colours.
Overall, the researchers found that males in better physical condition (males that weighed more) during the highly demanding nestling provisioning stage sported brighter, whiter cheeks. Those that were less intensely infected by the malaria parasite Plasmodium while breeding also showed purer white cheek feathers in winter. According to Pérez Badás, this indicates that their feathers were of better quality, and that intense parasitic infections can have an effect on a bird's life cycle. "In the following season, those males with brighter cheeks paired with females that had noticeably brighter cheek patches compared to the male's previous mate," adds Badás. The results therefore suggest that the conditions male blue tits experience during reproduction are likely to affect moult and thus feather colouration, at least in their white facial feathers. This, in turn, enables the stronger males to find brighter females than the partners they paired with in the previous spring. "Members of the same species were quite able to pick up such colour differences," notes Badás.
10.1007/s00114-018-1539-z